WorldWideScience

Sample records for monocular 3d vision

  1. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    International audience; Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  2. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have researched the virtual-reality systems connected with computer networks for real-time remote control and developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to display a picture to the left eye and the right eye. The left and right images are a pair of stereoscopic images for the left and right eyes, then stereoscopic 3D images are observed.

  3. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.
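
    The ranging idea above depends on known indoor structure; as a loose illustration (not the authors' SLAM formulation), the pinhole relation Z = fW/w recovers metric range once a feature of assumed known physical width W is detected. The focal length and doorway width in the sketch below are illustrative assumptions.

```python
# Minimal sketch (not the authors' SLAM formulation): monocular ranging from a
# feature of assumed known physical width, using the pinhole relation Z = f * W / w.
# The focal length and doorway width in the example are illustrative assumptions.

def range_from_known_width(focal_px: float, width_m: float, width_px: float) -> float:
    """Distance (m) to a structure of known width W seen at apparent width w pixels."""
    if width_px <= 0:
        raise ValueError("apparent width must be positive")
    return focal_px * width_m / width_px

# Example: a doorway assumed to be 0.9 m wide spans 120 px with an 800 px focal length.
print(range_from_known_width(800.0, 0.9, 120.0))  # -> 6.0 m
```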

  4. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays fall into two types of presentation method: 3D display systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images; a conventional display can show only one screen and cannot enlarge it, for example to twice its size. To enlarge the display area, the authors developed an enlarging method using a mirror. This extension method lets observers view a virtual image plane and doubles the screen area. In the developed display unit, an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen is used for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  5. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  6. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  7. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
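
    As a schematic of the 2D/3D fusion step described above (not the paper's actual filter), the sketch below runs one extended Kalman predict/update cycle in which the measurement is the stack of 2D landmark positions and H stands in for the linearized projection Jacobian; F, Q and R are placeholder matrices.

```python
import numpy as np

# Schematic of the 2D/3D fusion step only (not the paper's filter): one extended
# Kalman predict/update, where z stacks the detected 2D landmark coordinates and
# H stands in for the linearized projection Jacobian. F, Q and R are placeholders.

def ekf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for the pose/animation state x with covariance P."""
    x_pred = F @ x                           # predict with the assumed motion model
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation: measured minus predicted landmarks
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```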

  8. 3-D Vision Techniques for Autonomous Vehicles

    Science.gov (United States)

    1988-08-01

    3-D Vision Techniques for Autonomous Vehicles. Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.

  9. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers under regular daylight conditions.
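
    Once range and closing speed are recovered from the localization step, time to collision follows directly; the image-only estimate tau = s/(ds/dt) from apparent-size expansion is a common monocular alternative. The sketch below illustrates both with made-up numbers and is not taken from the paper.

```python
# Illustrative only (numbers are made up): time to collision from recovered range
# and closing speed, plus the image-only estimate tau = s / (ds/dt) based on the
# growth rate of the target's apparent size.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:
        return float("inf")                    # target is not approaching
    return range_m / closing_speed_mps

def tau_from_expansion(size_px: float, size_rate_px_per_s: float) -> float:
    if size_rate_px_per_s <= 0:
        return float("inf")
    return size_px / size_rate_px_per_s

print(time_to_collision(3000.0, 150.0))        # 20 s at 3 km range and 150 m/s closure
print(tau_from_expansion(24.0, 1.5))           # 16 s from apparent-size expansion alone
```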

  10. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method and, with the help of this method, we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
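
    The paper implements its calibration in Matlab; a roughly equivalent sketch using OpenCV's planar calibration is shown below, assuming a 9x6 chessboard with 25 mm squares and calibration images in a calib/ directory.

```python
import glob
import cv2
import numpy as np

# Assumed setup: a 9x6-corner chessboard with 25 mm squares and images in ./calib/.
pattern = (9, 6)
square = 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

assert img_pts, "no chessboard detections found in calib/"
# Intrinsics K, distortion coefficients, and per-view extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS (px):", rms)
```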

  11. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

    A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing the monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guarantee the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features of the light pen targets and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the camera of the system. The proposed method requires no extra equipment other than the system camera for the calibration, so it is easier to implement and flexible for use. It has been incorporated in a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method. (paper)
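
    For context, the sketch below shows the measurement step such a system performs once calibration is done (it is not the proposed calibration procedure): the pen pose is estimated from the imaged target points with PnP, and the calibrated tip offset is transformed into the camera frame. All numbers are synthetic placeholders.

```python
import cv2
import numpy as np

# Synthetic example of the measurement step (not the proposed calibration method):
# the pen pose is recovered with PnP from the imaged targets, and the calibrated
# tip offset is mapped into the camera frame. All numbers are placeholders.

K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])          # assumed intrinsics
dist = np.zeros(5)
pen_targets = np.array([[0, 0, 0], [0.06, 0, 0], [0, 0.06, 0], [0.06, 0.06, 0]],
                       dtype=np.float64)                                # targets in the pen frame (m)
tip_in_pen = np.array([0.03, 0.03, -0.15])                              # assumed calibrated tip offset

# Simulate an observation under a known pose, then recover that pose with solvePnP.
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([0.02, -0.01, 0.8])
img_pts, _ = cv2.projectPoints(pen_targets, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(pen_targets, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
tip_in_camera = R @ tip_in_pen + tvec.ravel()                           # probe tip in camera coordinates
print(tip_in_camera)
```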

  13. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2014-08-01

    Full Text Available The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity

  14. The New Realm of 3-D Vision

    Science.gov (United States)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  15. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization procedure that starts the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results show centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.

  16. Evaluation of vision training using 3D play game

    Science.gov (United States)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the effect of vision training, in the form of the visual benefit of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for leading a comfortable and easy life. The study was conducted on 30 participants in their 20s and 30s (19 males and 11 females, 24.53 ± 2.94 years) who could watch 3D video images and play the 3D game. Their accommodative and vergence facility were measured before and after they played the 2D and 3D games. Accommodative facility improved after both the 2D and 3D games, and improved more right after the 3D game than after the 2D game. Likewise, vergence facility improved after both the 2D and 3D games, and improved more soon after the 3D game than after the 2D game. In addition, accommodative facility improved to a greater extent than vergence facility. While studies have so far focused, from the perspective of human factors, on the adverse effects of 3D content on the imbalance of visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing its visual benefit for vision training.

  17. Fractal tomography and its application in 3D vision

    Science.gov (United States)

    Trubochkina, N.

    2018-01-01

    A three-dimensional artistic fractal tomography method that implements glasses-free 3D visualization of fractal worlds in layered media is proposed. It is designed for glasses-free 3D viewing of digital art objects and films containing fractal content. Prospects for the development of this method in art galleries and the film industry are considered.

  18. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  19. 3D gaze tracking system for NVidia 3D Vision®.

    Science.gov (United States)

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes at a 3D location in virtual space is currently an important research topic. In this paper, we report on the development of a novel 3D gaze tracking system for NVidia 3D Vision® to be used with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of the virtual 3D object being gazed at. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
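
    For reference, the conventional geometric baseline the authors compare against can be sketched as midpoint triangulation of the two gaze rays; the eye positions and gaze directions below are illustrative, and this is not the paper's optimized method.

```python
import numpy as np

# Conventional midpoint triangulation of the two gaze rays (the baseline method,
# not the paper's optimized variant). Eye positions and directions are illustrative.

def gaze_point(p_l, d_l, p_r, d_r):
    """Midpoint of the shortest segment between the left and right gaze rays."""
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w0 = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b                      # near zero only for parallel rays
    s = (b * e - c * d) / denom                # parameter along the left ray
    t = (a * e - b * d) / denom                # parameter along the right ray
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))

print(gaze_point(np.array([-0.03, 0.0, 0.0]), np.array([0.05, 0.0, 1.0]),
                 np.array([0.03, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0])))
# -> approximately [0, 0, 0.6]: the rays converge 0.6 m in front of the eyes.
```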

  20. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, for example waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  1. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  2. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
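
    As a small illustration of the metrological evaluation step (not the paper's pipeline), a dense-matching point cloud can be compared against a known reference shape and summarized by deviation statistics; the synthetic plane fit below mimics that kind of check.

```python
import numpy as np

# Synthetic illustration of a reference-geometry check: fit a plane to (noisy)
# matched points and report deviation statistics of the kind used for quality
# parameters. The point cloud below is generated, not measured.

def plane_deviations(points: np.ndarray):
    """Least-squares fit of z = a*x + b*y + c; returns coefficients and signed residuals."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs, points[:, 2] - A @ coeffs

rng = np.random.default_rng(0)
xy = rng.uniform(-0.1, 0.1, size=(500, 2))                       # 20 cm square patch
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 0.5 + rng.normal(scale=2e-4, size=500)
coeffs, res = plane_deviations(np.c_[xy, z])
print("flatness estimate (max - min residual, m):", res.max() - res.min())
```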

  3. [Acute monocular loss of vision : Differential diagnostic considerations apart from the internistic etiological clarification].

    Science.gov (United States)

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

    We report the case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological work-up of the arterial branch occlusion remained without pathological findings. A reevaluation of the patient history suggested a possible association with the administration of a phosphodiesterase type 5 inhibitor (PDE5 inhibitor). A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  4. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon

  5. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems, rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.

  6. Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements

    CSIR Research Space (South Africa)

    De Villiers, J

    2011-11-01

    Full Text Available choice (e.g. the open computer vision (OpenCV) library [4], Caltech Camera Calibration Toolbox [5]) as the intersections can be found extremely accurately by finding the saddle point of the intensity profile about the intersection as described... to capture and process data in order to calibrate it. A. Equipment specification: A 1600-by-1200 Prosilica GE1660 Gigabit Ethernet machine vision camera was mated with a Schneider Cinegon 4.8 mm/f1.4 lens for use in this work. This lens has an 82...
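
    The saddle-point corner detection mentioned in the record corresponds to the standard OpenCV call sequence sketched below; the board geometry and file name are assumed for illustration.

```python
import cv2

# Standard OpenCV corner detection with sub-pixel (saddle-point) refinement;
# the board size and file name are assumed for illustration.

gray = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)
assert gray is not None, "expected a test image named checkerboard.png"
pattern = (9, 6)                                       # inner-corner grid (assumed)
found, corners = cv2.findChessboardCorners(gray, pattern)
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```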

  7. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis 3D reconstruction was investigated for application in precision agriculture where previous work focused on low resolution index maps where each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would allow...... reconstruction in occluded areas. The trinocular setup was used for both window correlation based and energy minimization based algorithms. A novel adaption of symmetric multiple windows algorithm with trinocular vision was developed. The results were promising and allowed for better disparity estimations...... on steep sloped surfaces. Also, a novel adaption of a well known graph cut based disparity estimation algorithm with trinocular vision was developed and tested. The results were successful and allowed for better disparity estimations on steep sloped surfaces. After finding the disparity maps each...

  8. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  9. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed for a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups; also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat position is computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
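
    A generic version of the 2D/3D combination step (not the paper's algorithm) is to lift every pixel of a 2D segment into 3D using the depth image and the pinhole intrinsics, then take the centroid as the candidate teat position; the intrinsics below are assumed values.

```python
import numpy as np

# Generic 2D/3D combination: lift masked depth pixels to 3D with the pinhole
# intrinsics and take the centroid as a candidate teat position. Intrinsics are
# assumed values, not the cameras used in the paper.

fx, fy, cx, cy = 570.0, 570.0, 320.0, 240.0

def masked_centroid_3d(depth_m: np.ndarray, mask: np.ndarray) -> np.ndarray:
    v, u = np.nonzero(mask & (depth_m > 0))    # pixel rows/cols inside the segment
    z = depth_m[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)
```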

  10. Control monocular 3D dinámico basado en imagen

    Directory of Open Access Journals (Sweden)

    Luis Hernández Santana

    2011-09-01

    Full Text Available This paper presents a visual servo control system for position regulation of a robot manipulator with an eye-in-hand camera moving in 3D Cartesian space. The objective is to control the robot so that the image of a moving sphere is kept at the center of the image plane with constant radius. A control strategy with two cascaded loops is proposed: the inner loop solves the joint control and the outer loop implements the control with visual feedback. The robot and the vision system are modeled for small variations around the operating point for position control. Under these conditions, the stability of the system and the steady-state response for object trajectories are shown. To illustrate the performance of the system, experimental results are presented for an ASEA IRB6 manipulator.

  11. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  12. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the availability in resource-rich regions of advanced scanning and 3-D imaging technologies in current ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
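
    The two screening parameters reduce to simple ratios once cup and disc segmentations are available; the sketch below computes CAR from mask areas and CDR from vertical extents, as one plausible reading of the definitions (not code from the paper).

```python
import numpy as np

# One plausible reading of the two parameters (not code from the paper): CAR from
# mask areas, CDR from vertical extents of boolean cup and disc segmentations.

def car_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray):
    car = cup_mask.sum() / disc_mask.sum()                         # cup-to-disc area ratio
    cup_rows = np.nonzero(cup_mask.any(axis=1))[0]
    disc_rows = np.nonzero(disc_mask.any(axis=1))[0]
    cdr = (cup_rows.max() - cup_rows.min() + 1) / (disc_rows.max() - disc_rows.min() + 1)
    return car, cdr
```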

  13. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Using this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue. A particle filter based on the drogue's shape is proposed to track it, and a strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely in the detection and tracking process, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method runs in real time with satisfactory robustness and positioning accuracy.
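
    The two prior-knowledge cues (darker inner part, near-circular shape) suggest a simple candidate filter like the sketch below; the threshold and circularity values are illustrative, not the paper's.

```python
import cv2
import numpy as np

# Candidate filter built from the two cues in the abstract: a dark (low grey value)
# region whose contour is nearly circular. Threshold values are illustrative.

def find_drogue_candidates(gray, dark_thresh=60, min_circularity=0.8, min_area=50):
    _, dark = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if perim == 0 or area < min_area:
            continue
        circularity = 4 * np.pi * area / (perim * perim)   # 1.0 for a perfect circle
        if circularity >= min_circularity:
            candidates.append(c)
    return candidates
```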

  14. Panoramic 3d Vision on the ExoMars Rover

    Science.gov (United States)

    Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.

    The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ~2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system, to be developed by a UK, German, Austrian, Swiss, Italian and French team, for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide-angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high-resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows:
    • Determination of objects to be investigated in situ by other instruments for operations planning
    • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3D environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images)
    • Geological characterization (using narrow-band geology filters) and cartography of the local environments (local Digital Terrain Model or DTM)
    • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts)
    • Geodetic studies (observations of Sun, bright stars, Phobos/Deimos)
    The performance of 3D data processing is a key element of mission planning and scientific data analysis. The 3D Vision Team within the Panoramic Camera development consortium reports on the current status of development, consisting of the following items:
    • Hardware Layout & Engineering: the geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized w

  15. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity with the crowded Landolt C decimal acuity mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.

  16. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain ground images at a constant resolution. A forward-looking camera is mounted on the upper side of the aircraft's nose; in combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
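
    A loose reading of the motion-stereo idea (not the paper's exact algorithm): the known velocity times the frame interval gives the stereo baseline, and the pixel displacement of tracked ground features then yields altitude as H = fB/d. The numbers below are illustrative.

```python
# Loose reading of the motion-stereo idea (not the paper's exact algorithm):
# baseline = speed * frame interval, altitude H = f * B / d. Numbers are illustrative.

def altitude_from_motion(focal_px: float, speed_mps: float, dt_s: float,
                         pixel_displacement: float) -> float:
    baseline_m = speed_mps * dt_s              # distance travelled between the two frames
    if pixel_displacement <= 0:
        raise ValueError("ground features must move between frames")
    return focal_px * baseline_m / pixel_displacement

# 900 px focal length, 12 m/s ground speed, 0.1 s between frames, 18 px feature shift.
print(altitude_from_motion(900.0, 12.0, 0.1, 18.0))   # -> 60.0 m
```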

  17. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline caused by various adverse effects during the running of the train, and it is an important basis for setting railway boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, and present measuring systems suffer from poor portability, a complicated process and high cost. In this paper a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of the wide-field-of-view camera and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, and the feasibility and adaptability of the measurement system are validated. The system offers lower cost, a simpler measurement and data processing workflow and more reliable data, and it needs no matching algorithm.
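
    The core geometry of such a camera-plus-laser-plane sensor is ray-plane intersection: each pixel on the laser stripe defines a viewing ray whose intersection with the calibrated laser plane gives a 3D point on the vehicle surface. The intrinsics and plane parameters below are placeholders, not the paper's calibration.

```python
import numpy as np

# Ray-plane intersection at the heart of a camera-plus-laser-plane sensor: a stripe
# pixel's viewing ray meets the calibrated laser plane n.X = d at the surface point.
# The camera matrix and plane parameters are placeholders.

K = np.array([[1400.0, 0, 960], [0, 1400.0, 600], [0, 0, 1]])   # assumed camera matrix
n, d = np.array([0.0, 0.707, 0.707]), 0.5                       # assumed laser plane n.X = d

def stripe_pixel_to_3d(u: float, v: float) -> np.ndarray:
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction, camera frame
    t = d / (n @ ray)                                # scale at which the ray meets the plane
    return t * ray                                   # 3D point on the vehicle surface

print(stripe_pixel_to_3d(980.0, 700.0))
```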

  18. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  19. Weight prediction of broiler chickens using 3D computer vision

    DEFF Research Database (Denmark)

    Mortensen, Anders Krogh; Lisouski, Pavel; Ahrendt, Peter

    2016-01-01

    a platform weigher which may also include ill birds. In the current study, a fully-automatic 3D camera-based weighing system for broilers has been developed and evaluated in a commercial production environment. Specifically, a low-cost 3D camera (Kinect) that directly returned a depth image was employed...

  20. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, WIlson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach that uses range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with wall-following behavior to explore the environment, which enables it to trail object contours, with the fuzzy control responsible for providing commands for the correct execution of the robot movements while facing the advers...
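
    The range-from-homography step can be summarized in one mapping: with a homography H calibrated from image to floor-plane coordinates, the pixel where an obstacle meets the floor maps directly to metric coordinates for the occupancy grid. H below is a placeholder, not a calibrated value.

```python
import numpy as np

# Range from homography in one step: a calibrated image-to-floor homography maps the
# pixel where an obstacle meets the floor to metric (X, Y) for the occupancy grid.
# H below is a placeholder, not a calibrated value.

H = np.array([[0.002, 0.0005, -1.2],
              [0.0001, 0.003, -0.9],
              [0.0,    0.001,  1.0]])

def pixel_to_ground(u: float, v: float) -> np.ndarray:
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                   # metric coordinates on the floor plane

print(pixel_to_ground(320.0, 400.0))
```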

  1. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    ...figure and ground, the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... (Report: Generalization of Figure-Ground Segmentation from Monocular to Binocular Vision in an Embodied Biological Brain Model; U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211; subject terms: figure-ground, neural network, object.)

  2. Inverse problems in vision and 3D tomography

    CERN Document Server

    Mohamad-Djafari, Ali

    2013-01-01

    The concept of an inverse problem is a familiar one to most scientists and engineers, particularly in the field of signal and image processing, imaging systems (medical, geophysical, industrial non-destructive testing, etc.) and computer vision. In imaging systems, the aim is not just to estimate unobserved images, but also their geometric characteristics from observed quantities that are linked to these unobserved quantities through the forward problem. This book focuses on imagery and vision problems that can be clearly written in terms of an inverse problem where an estimate for the image a

  3. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different field of views (FOV): One with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification...

  4. Vision based error detection for 3D printing processes

    Directory of Open Access Journals (Sweden)

    Baumann Felix

    2016-01-01

    Full Text Available 3D printers became more popular in the last decade, partly because of the expiration of key patents and the supply of affordable machines. Their origin lies in rapid prototyping. With Additive Manufacturing (AM) it is possible to create physical objects from 3D model data by layer-wise addition of material. Besides professional use for prototyping and low-volume manufacturing, they are becoming widespread amongst end users, starting with the so-called Maker Movement. The most prevalent type of consumer-grade 3D printer is Fused Deposition Modelling (FDM), also called Fused Filament Fabrication (FFF). This work focuses on FDM machinery because of its widespread occurrence and large number of open problems like precision and failure. These 3D printers can fail to print objects at a statistical rate depending on the manufacturer and model of the printer. Failures can occur due to misalignment of the print bed or the print head, slippage of the motors, warping of the printed material, lack of adhesion or other reasons. The goal of this research is to provide an environment in which these failures can be detected automatically. Direct supervision is inhibited by the recommended placement of FDM printers in separate rooms away from the user due to ventilation issues. The inability to oversee the printing process leads to late or omitted detection of failures. Rejects cause material waste and wasted time, thus lowering the utilization of printing resources. Our approach consists of a camera-based error detection mechanism that provides a web-based interface for remote supervision and early failure detection. Early failure detection can lead to reduced time spent on broken prints, less material wasted and in some cases salvaged objects.

  5. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    OpenAIRE

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision ...

  6. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    OpenAIRE

    Stephen Grossberg

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in s...

  7. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    International Nuclear Information System (INIS)

    Ilyas, Ismet P

    2013-01-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and Additive Manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who regard it as one of the most important and strategic approaches in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  8. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  9. Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.

    Science.gov (United States)

    Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J

    2015-05-01

    To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study on students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopic performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improves performance by reducing the time required (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28; P < .001). According to the NASA-TLX results, less mental workload is experienced with the use of 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems in laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache are identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  10. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.

  11. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and it is designed for operations in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested on and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  12. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axes position feedback control.
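
    As a rough illustration only, the position-based visual servo idea in the record above can be written in software form as the small Python sketch below. It is not the authors' FPGA implementation; the function names, the Jacobian interface and the gain value are assumptions.

      import numpy as np

      def pbvs_step(p_object, p_goal, q_current, jacobian_fn, gain=0.5):
          """One outer-loop step of position-based visual servoing.

          p_object, p_goal : 3-vectors, measured and desired object positions
                             in the 3D workspace (from the vision front end).
          q_current        : current joint angles of the robot.
          jacobian_fn      : function mapping q to the 3xN positional Jacobian.
          """
          error = p_goal - p_object        # Cartesian positioning error
          v_cmd = gain * error             # proportional Cartesian velocity command
          J = jacobian_fn(q_current)
          # Map the Cartesian velocity into joint velocities (pseudo-inverse).
          q_dot = np.linalg.pinv(J) @ v_cmd
          return q_dot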

  13. 3D vision accelerates laparoscopic proficiency and skills are transferable to 2D conditions

    DEFF Research Database (Denmark)

    Sørensen, Stine Maya Dreier; Konge, Lars; Bjerrum, Flemming

    2017-01-01

    RESULTS: Mean training time was reduced in the intervention group (231 min versus 323 min; P = 0.012). There was no significant difference in the mean times to completion of the retention test (92 min versus 95 min; P = 0.85). CONCLUSION: 3D vision reduced time to proficiency on a virtual-reality laparoscopy

  14. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    Science.gov (United States)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

    A monocular vision-based pose measurement system is provided for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. Firstly, a circular plane cooperative target is designed. An image of a target fixed on the test-bed is then acquired. Blob analysis-based image processing is used to detect the object circles on the target. A fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of object circles. Finally, pose measurements can be obtained when combined with the centers and the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirement of the pose measurement.
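
    A minimal blob-statistics sketch in the spirit of the centre-extraction step is shown below; it relies on OpenCV connected-component statistics rather than the authors' FCCSP algorithm, and the thresholding choice and minimum blob area are assumptions.

      import cv2

      def circle_centres(gray, min_area=50):
          # Binarise the image of the cooperative target (Otsu threshold assumed).
          _, binary = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          # Blob analysis: per-component pixel statistics and centroids.
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
          centres = []
          for i in range(1, n):                      # label 0 is the background
              if stats[i, cv2.CC_STAT_AREA] < min_area:
                  continue
              # The centroid (mean of the blob's pixel coordinates) serves as a
              # pixel-statistics estimate of the object-circle centre.
              centres.append(tuple(centroids[i]))
          return centres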

  15. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system, designed for intelligent manufacturing and based on stereo vision, is introduced. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe can be located by means of the stereo vision system and the track markers, and the 3D coordinates of a space point on the workpiece can be measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.

  16. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding.

    Science.gov (United States)

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-05-11

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components.

  17. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to obtain one distance image with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a statistical sensor measurement model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
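
    The grid-based fusion described above is often realised as a log-odds occupancy-grid update; the short sketch below shows that standard formulation under an assumed per-measurement occupancy probability and is not the authors' exact sensor model.

      import numpy as np

      def update_cell(log_odds, p_occupied):
          """Fuse one range observation into a grid cell (log-odds form).

          p_occupied encodes the sensor's statistical model for this cell,
          given the current measurement and the sensor/robot pose uncertainty.
          """
          return log_odds + np.log(p_occupied / (1.0 - p_occupied))

      def occupancy_probability(log_odds):
          """Convert the accumulated log-odds back to an occupancy probability."""
          return 1.0 - 1.0 / (1.0 + np.exp(log_odds))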

  18. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation of the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
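
    A rough Python/OpenCV sketch of this general idea, ego-motion compensation with a RANSAC homography followed by frame differencing, is given below; it is not the authors' code, and the feature count, thresholds and helper name are assumptions.

      import cv2
      import numpy as np

      def moving_obstacle_mask(prev_gray, curr_gray, diff_thresh=25):
          # Track sparse features from the previous frame into the current one.
          pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                             qualityLevel=0.01, minDistance=8)
          pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                         pts_prev, None)
          good_prev = pts_prev[status.flatten() == 1]
          good_curr = pts_curr[status.flatten() == 1]
          # The dominant motion is the camera ego-motion; RANSAC rejects
          # correspondences that belong to independently moving objects.
          H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
          if H is None:
              return np.zeros_like(curr_gray)
          # Warp the previous frame into the current view and difference.
          h, w = curr_gray.shape
          prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
          diff = cv2.absdiff(curr_gray, prev_warped)
          _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
          return mask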

  19. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box-performance in 2D.

  20. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and high reliability of the algorithms for various motion analysis tasks in technical and biomechanical applications.

  1. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available ... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  2. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  3. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Abstract Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  4. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
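
    The planar-homography step can be illustrated with standard OpenCV calls as in the sketch below; this only shows the geometric idea (the actual system runs onboard in real time), and the matched point sets, camera intrinsics and RANSAC threshold are assumed inputs.

      import cv2

      def plane_pose_candidates(pts_view1, pts_view2, K):
          """pts_view1/2: Nx2 matched feature points on the planar target;
          K: 3x3 camera intrinsic matrix."""
          H, inliers = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC, 2.0)
          # Decomposition yields up to four {R, t, n} hypotheses; cheirality and
          # motion constraints are still needed to select the physical solution.
          n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
          return list(zip(Rs, ts, normals))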

  5. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  6. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  7. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation in the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and is built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not the reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables the quality of the 3D reconstruction to be evaluated, as illustrated by the experimental results shown.

  8. 3-D vision and figure-ground separation by visual cortex.

    Science.gov (United States)

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  9. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    Science.gov (United States)

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  10. Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System

    Science.gov (United States)

    Moring, I.; Ailisto, H.; Heikkinen, T.; Kilpela, A.; Myllyla, R.; Pietikainen, M.

    1988-02-01

    In our paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields our 3-D vision system was developed for are geometric measurements of large objects and manipulator and robot control tasks. It also appears to have potential in automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we have started a field test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on the direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and the received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers. This system is controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system and serves as a user interface and post-processing environment. Methods for segmenting the range image into a higher-level description have been developed. The description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are also studied.

  11. A 3D vision approach for correction of patient pose in radiotherapy

    International Nuclear Information System (INIS)

    Chyou, T.; Meyer, J.

    2011-01-01

    Full text: To develop an approach to quantitatively determine patient surface contours as part of an augmented reality system for patient position and posture correction in radiotherapy. The approach is based on a 3D vision method referred to as active stereo with structured light. When a 3D object is viewed with a standard digital camera, the depth information along one dimension, the axis parallel to the line of sight, is lost. With the aid of a projected structured-light codification pattern, 3D coordinates of the scene can be recovered from a 2D image. Two codification strategies were examined. The spatial encoding method requires a single static pattern, thus enabling dynamic scenes to be captured. Temporal encoding methods require a set of patterns to be successively projected onto the object (see Fig. 1); the encoding for each pixel is only complete when the entire series of patterns has been projected. Both methods are investigated in terms of the trade-offs with regard to convenience, accuracy and acquisition time. The temporal method has shown high sensitivity to surface features on a human phantom even under typical office light conditions. The preliminary accuracy was in the order of millimeters at a distance of 1 m. Work on the spatial encoding approach is ongoing. The most suitable approach will be integrated into the existing augmented reality system to provide a virtual surface contour of the desired patient position for visual guidance, and quantitative information on offsets between the measured and desired position.
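
    For the temporal encoding strategy, one common decoding scheme is sketched below, under the assumption that every binary stripe pattern is projected together with its inverse so that decoding stays robust to ambient light; this is an illustration, not the system described in the abstract.

      import numpy as np

      def decode_temporal(images, images_inv):
          """images, images_inv: lists of grayscale captures of each projected
          pattern and of its inverse. Returns, per pixel, the integer code of
          the projector stripe that illuminated it."""
          code = np.zeros(images[0].shape, dtype=np.int32)
          for img, img_inv in zip(images, images_inv):
              # A pixel is "on" for this bit if the pattern is brighter than its
              # inverse; the comparison cancels out the ambient illumination.
              on = (img.astype(np.int32) - img_inv.astype(np.int32)) > 0
              code = (code << 1) | on.astype(np.int32)
          return code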

  12. Fractographic classification in metallic materials by using 3D processing and computer vision techniques

    Directory of Open Access Journals (Sweden)

    Maria Ximena Bastidas-Rodríguez

    2016-09-01

    Full Text Available Failure analysis aims at collecting information about how and why a failure is produced. The first step in this process is a visual inspection of the flaw surface that reveals the features, marks, and texture which characterize each type of fracture. This is generally carried out by personnel with no experience, who usually lack the knowledge to do it. This paper proposes a classification method for three kinds of fractures in crystalline materials: brittle, fatigue, and ductile. The method uses 3D vision, and it is expected to support failure analysis. The features used in this work were: (i) Haralick's features and (ii) the fractal dimension. These features were extracted from 3D images obtained with a Zeiss LSM 700 confocal laser scanning microscope. For the classification, we evaluated two classifiers: Artificial Neural Networks and Support Vector Machines. The performance evaluation was made by extracting four marginal relations from the confusion matrix (accuracy, sensitivity, specificity, and precision) plus three evaluation methods: Receiver Operating Characteristic space, the Individual Classification Success Index, and Jaccard's coefficient. Although the classification percentage obtained by an expert is better than the one obtained with the algorithm, the algorithm achieves a classification accuracy near or exceeding 60% for the analyzed failure modes. The results presented here provide a good approach to address future research on texture analysis using 3D data.
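
    A hedged sketch of a comparable feature-plus-classifier pipeline (GLCM-based Haralick features fed to a support vector machine) is given below; it is not the authors' code, it omits the fractal-dimension feature and the confocal 3D preprocessing, and its parameter values are illustrative.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def haralick_features(gray_uint8):
          # Grey-level co-occurrence matrix at one distance and two angles.
          glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation")
          # Average each Haralick-style property over the chosen angles.
          return np.array([graycoprops(glcm, p).mean() for p in props])

      def train_classifier(X, y):
          """X: stacked feature vectors of labelled fracture images;
          y: labels such as 'brittle', 'fatigue', 'ductile'."""
          clf = SVC(kernel="rbf", C=10.0, gamma="scale")
          clf.fit(X, y)
          return clf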

  13. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate if 3-dimensional high-definition (3D) vision in laparoscopy can prompt advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation in 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D one (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency to a faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning ( P = .07). 3D vision in laparoscopic HH repair helps surgeon's visualization and seems to lead to operative time reduction. Advantages can result from the enhanced spatial perception of narrow spaces. Less operative time and more accurate surgery translate to benefit for patients and cost savings, compensating the high costs of the 3D technology. However, more data from larger series are needed to firmly assess the advantages of 3D over 2D vision in laparoscopic HH repair.

  14. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Michiel Vlaminck

    2016-11-01

    Full Text Available In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutive generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.

  15. Position estimation and driving of an autonomous vehicle by monocular vision

    Science.gov (United States)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  16. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him or her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. Then, the system waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method enabling individuals with impaired motor control to operate the robot arm more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  17. Perception of 3-D location based on vision, touch, and extended touch.

    Science.gov (United States)

    Giudice, Nicholas A; Klatzky, Roberta L; Bennett, Christopher R; Loomis, Jack M

    2013-01-01

    Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

  18. 3D Vision Based Landing Control of a Small Scale Autonomous Helicopter

    Directory of Open Access Journals (Sweden)

    Zhenyu Yu

    2007-03-01

    Full Text Available Autonomous landing is a challenging but important task for Unmanned Aerial Vehicles (UAVs) to achieve a high level of autonomy. The fundamental requirements for landing are knowledge of the height above the ground and a properly designed controller to govern the process. This paper presents our research results in the study of landing an autonomous helicopter. The above-the-ground height sensing is based on a 3D vision system. We have designed a simple plane-fitting method for estimating the height over the ground. The method enables vibration-free measurement with the camera rigidly attached to the helicopter, without using a complicated gimbal or active vision mechanism. The estimated height is used by the landing control loop. Considering the ground effect during landing, we have proposed a two-stage landing procedure, with one controller designed for each of the two landing stages. The sensing approach and control strategy have been verified in field flight tests and have demonstrated satisfactory performance.
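
    A simple least-squares stand-in for the plane-fitting step is sketched below, under the assumption that the 3D points handed to it belong to the ground plane; the paper's actual fitting and outlier handling may differ.

      import numpy as np

      def height_above_ground(points_cam):
          """points_cam: Nx3 array of ground points in the camera frame.
          Returns the perpendicular distance from the camera origin to the
          fitted plane, i.e. the above-the-ground height."""
          centroid = points_cam.mean(axis=0)
          # The plane normal is the right singular vector associated with the
          # smallest singular value of the centred point cloud.
          _, _, vt = np.linalg.svd(points_cam - centroid)
          normal = vt[-1]
          # Plane: normal . (x - centroid) = 0; distance of the origin to it.
          return abs(np.dot(normal, centroid))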

  19. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  20. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  1. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  2. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    On the Italo Falcomatà waterfront of Reggio Calabria one can admire the most extensive tract of the Hellenistic-period walls of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years, up to the reconstruction of Reggio after the earthquake of 1783, this stretch of wall always formed part of the outer city walls and was restored countless times to cope with degradation over time and to adapt to increasingly innovative and sophisticated siege techniques. The walls have been the subject of several studies on their history, on their construction techniques, and on their maintenance and restoration. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls carried out by the Geomatics Laboratory of the DICEAM Department of the University "Mediterranea" of Reggio Calabria. The 3D modelling is based on imaging techniques, such as Digital Photogrammetry and Computer Vision, using a drone. The acquired digital images were processed using the commercial software Agisoft PhotoScan. The results demonstrate the suitability of the technique in the field of cultural heritage as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  3. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
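
    The SAD search along the rectified epipolar line and the subsequent triangulation can be sketched as follows; the window size, disparity range and variable names are illustrative assumptions rather than the parameters used by the authors.

      import numpy as np

      def sad_disparity(left, right, x, y, win=7, max_disp=64):
          """Disparity of pixel (x, y) in the rectified left image, found by
          minimising the sum of absolute differences along the same row of the
          rectified right image. Assumes (x, y) lies away from the borders."""
          h = win // 2
          patch_l = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
          best_d, best_cost = 0, np.inf
          for d in range(max_disp):
              xr = x - d
              if xr - h < 0:
                  break
              patch_r = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.float32)
              cost = np.abs(patch_l - patch_r).sum()
              if cost < best_cost:
                  best_cost, best_d = cost, d
          return best_d

      def triangulate(x, y, d, fx, fy, cx, cy, baseline):
          """Standard rectified-stereo triangulation: depth Z = fx * B / d."""
          Z = fx * baseline / max(d, 1e-6)
          X = (x - cx) * Z / fx
          Y = (y - cy) * Z / fy
          return np.array([X, Y, Z])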

  4. Comparison of 2D and 3D Vision Gaze with Simultaneous Measurements of Accommodation and Convergence

    OpenAIRE

    Hori, Hiroki; Shiomi, Tomoki; Hasegawa, Satoshi; Takada, Hiroki; Omori, Masako; Matsuura, Yasuyuki; Ishio, Hiromu; Miyao, Masaru

    2014-01-01

    Accommodation and convergence were measured simultaneously while subjects viewed 2D and 3D images. The aim was to compare fixation distances between accommodation and convergence in young subjects while they viewed 2D and 3D images. Measurements were made three times, 40 seconds each, using 2D and 3D images. The result suggests that ocular functions during viewing of 3D images are very similar to those during natural viewing. Previously established and widely used theories, such that within a...

  5. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    Science.gov (United States)

    2015-06-01

    ... development, computer-rendered 3D videos were created in order to test and debug the algorithm. Computer-rendered videos allow full control of all the ... printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum ...

  6. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The TerraBlocks™ 3D terrain data format and terrain-block-rendering methodology provide an enabling basis for successful commercial deployment of...

  7. Flight-appropriate 3D Terrain-rendering Toolkit for Synthetic Vision, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics proposes an SBIR Phase I R/R&D effort to develop a key 3D terrain-rendering technology that provides the basis for successful commercial deployment...

  8. Making Things See 3D vision with Kinect, Processing, Arduino, and MakerBot

    CERN Document Server

    Borenstein, Greg

    2012-01-01

    This detailed, hands-on guide provides the technical and conceptual information you need to build cool applications with Microsoft's Kinect, the amazing motion-sensing device that enables computers to see. Through half a dozen meaty projects, you'll learn how to create gestural interfaces for software, use motion capture for easy 3D character animation, 3D scanning for custom fabrication, and many other applications. Perfect for hobbyists, makers, artists, and gamers, Making Things See shows you how to build every project with inexpensive off-the-shelf components, including the open source P

  9. In-line 3D print failure detection using computer vision

    DEFF Research Database (Denmark)

    Lyngby, Rasmus Ahrenkiel; Wilm, Jakob; Eiríksson, Eyþór Rúnar

    2017-01-01

    Here we present our findings on a novel real-time vision system that allows for automatic detection of failure conditions that are considered outside of nominal operation. These failure modes include warping, build plate delamination and extrusion failure. Our system consists of a calibrated came...

  10. Stereo Vision and 3D Reconstruction on a Distributed Memory System

    NARCIS (Netherlands)

    Kuijpers, N.H.L.; Paar, G.; Lukkien, J.J.

    1996-01-01

    An important research topic in image processing is stereo vision. The objective is to compute a 3-dimensional representation of some scenery from two 2-dimensional digital images. Constructing a 3-dimensional representation involves finding pairs of pixels from the two images which correspond to the

  11. A 3D vision system for the measurement of the rate of spread and the height of fire fronts

    International Nuclear Information System (INIS)

    Rossi, L; Molinier, T; Tison, Y; Pieri, A; Akhloufi, M

    2010-01-01

    This paper presents a three-dimensional (3D) vision-based instrumentation system for the measurement of the rate of spread and height of complex fire fronts. The proposed 3D imaging system is simple, does not require calibration, is easily deployable in indoor and outdoor environments and can handle complex fire fronts. New approaches for measuring the position, the rate of spread and the height of a fire front during its propagation are introduced. Experiments were conducted in indoor and outdoor conditions with fires of different scales. Linear and curvilinear fire front spreading were studied. The obtained results are promising and show the interesting performance of the proposed system in operational and complex fire scenarios

  12. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Contrast thresholds showed a steeper decline and a higher correlation with age at the parafovea than at the fovea. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.

  13. Fast and flexible 3D object recognition solutions for machine vision applications

    Science.gov (United States)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
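
    Since the recognizer above localizes parts by best-fitting geometric primitives (planes, cylinders, cones) to 3D data, a least-squares plane fit is the simplest building block of that idea. The sketch below is a generic SVD-based plane fit, not the authors' algorithm; the synthetic point cloud in the usage lines is purely illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares best-fit plane to an (N, 3) point cloud.
    Returns (centroid, unit normal); the normal is the right singular vector
    associated with the smallest singular value of the centred points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Hypothetical usage: residual distance of each point to the fitted plane.
pts = np.random.rand(500, 3) * np.array([1.0, 1.0, 0.01])   # roughly planar cloud
c, n = fit_plane(pts)
residuals = np.abs((pts - c) @ n)
print(residuals.max())
```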

  14. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    Science.gov (United States)

    Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments in combination with the recent advances in sensor and related computer vision technologies enable unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging associated with small field-of-view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios, the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, in analog to radiological and functional imaging with anatomical fusion in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  15. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    Science.gov (United States)

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as a difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on a relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust.

  17. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as a difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on a relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust.
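
    Both versions of this record estimate the ground plane from the robot-mounted 3D sensor while the robot moves over relatively flat terrain. A common way to make such an estimate robust to non-ground points is a RANSAC-style plane fit; the sketch below is a generic illustration under that assumption and is not the identification procedure described in the paper.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, inlier_tol=0.02, seed=None):
    """Fit the dominant plane (e.g., the ground) in an (N, 3) cloud with RANSAC.
    Returns (point_on_plane, unit_normal, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_model = (points[0], np.array([0.0, 0.0, 1.0]))     # fallback model
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (sample[0], normal), inliers
    return best_model[0], best_model[1], best_inliers
```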

  18. Uav and Computer Vision, Detection of Infrastructure Losses and 3d Modeling

    Science.gov (United States)

    Barrile, V.; Bilotta, G.; Nunnari, A.

    2017-11-01

    The degradation of buildings, that is, the decline of their original performance under both natural agents (freeze-thaw, earthquakes, salt, etc.) and artificial ones (industrial settings, urban environments, etc.), makes it necessary over the years to develop Non-Destructive Testing (NDT) methods intended to give useful information about potential deterioration without damaging the buildings themselves. A careful examination of the damage, and of the recurrence of cracks under similar stress conditions, indicates the existence of principles that govern how these defects arise, and there is no doubt that a precise visual analysis underlies any correct evaluation of a building. This paper deals with the creation of 3D models from digital images captured by autopiloted UAV flights over civil buildings in the area of Reggio Calabria. The subsequent processing is carried out with commercial software based on specific algorithms of the Structure from Motion (SfM) technique. SfM represents an important advance in the aerial and terrestrial survey field, obtaining results comparable, in terms of time and quality, to those achievable through more traditional data capture methodologies.

  19. UAV AND COMPUTER VISION, DETECTION OF INFRASTRUCTURE LOSSES AND 3D MODELING

    Directory of Open Access Journals (Sweden)

    V. Barrile

    2017-11-01

    Full Text Available The degradation of buildings, that is, the decline of their original performance under both natural agents (freeze-thaw, earthquakes, salt, etc.) and artificial ones (industrial settings, urban environments, etc.), makes it necessary over the years to develop Non-Destructive Testing (NDT) methods intended to give useful information about potential deterioration without damaging the buildings themselves. A careful examination of the damage, and of the recurrence of cracks under similar stress conditions, indicates the existence of principles that govern how these defects arise, and there is no doubt that a precise visual analysis underlies any correct evaluation of a building. This paper deals with the creation of 3D models from digital images captured by autopiloted UAV flights over civil buildings in the area of Reggio Calabria. The subsequent processing is carried out with commercial software based on specific algorithms of the Structure from Motion (SfM) technique. SfM represents an important advance in the aerial and terrestrial survey field, obtaining results comparable, in terms of time and quality, to those achievable through more traditional data capture methodologies.

  20. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, avoiding obstacles without human intervention. A ToF camera is a suitable sensor for such an AGV because it provides three-dimensional (3D) information at a low computational cost; after calibration and ground testing, it is mounted and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of suitable cell size; the camera data are converted into Cartesian coordinates for entry into the workspace grid map. A suitable camera mounting angle is determined by analysing the camera's performance, such as pixel detection, detection rate, maximum perceived distance and infrared (IR) scattering with respect to the ground surface; the recommended mounting angle is half the vertical field of view (FoV) of the PMD camera. A series of static and moving tests conducted on the AGV to verify correct sensor operation shows that the postulated application of the ToF camera on the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented in a real-time experiment.
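
    The record above back-projects PMD depth pixels into Cartesian coordinates and bins them into a 2D grid of traversable cells and obstacles. The following sketch shows one plausible implementation of those two steps with a pinhole camera model; the intrinsics, cell size and height threshold are placeholder values, and in practice the camera-frame points must first be rotated into a ground-aligned frame using the mounting angle discussed in the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth image (metres) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def occupancy_grid(points, cell=0.1, x_range=(0, 8), y_range=(-4, 4),
                   min_height=0.15):
    """Mark grid cells containing points higher than `min_height` above the
    ground as obstacles. Assumes the points are already expressed in a
    ground-aligned frame (x forward, y left, z up)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=bool)
    obstacles = points[points[:, 2] > min_height]
    ix = ((obstacles[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((obstacles[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = True
    return grid
```

    A graph search planner can then treat the boolean grid directly as the cell map described in the abstract.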

  1. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  2. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  3. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    Science.gov (United States)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    The exhaustive quality control is becoming very important in the world's globalized market. One of these examples where quality control becomes critical is the percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high speed movement of the system and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods, and hence, machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
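
    Because classical image processing is reported to fail on reflective, fast-moving caps, the record resorts to machine learning for classifying the possible errors. The snippet below is only a schematic of that idea using scikit-learn on a hypothetical feature matrix; the feature choices, class labels and random forest classifier are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per percussion cap; columns could be
# depth-map statistics (mean height, ring eccentricity, reflection area, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                    # placeholder features
y = rng.integers(0, 3, size=400)                 # e.g., 0=good, 1=misplaced, 2=deformed

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```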

  4. Aquilion ONE / ViSION Edition CT scanner realizing 3D dynamic observation with low-dose scanning

    International Nuclear Information System (INIS)

    Kazama, Masahiro; Saito, Yasuo

    2015-01-01

    Computed tomography (CT) scanners have been continuously advancing as essential diagnostic imaging equipment for the diagnosis and treatment of a variety of diseases, including the three major disease classes of cerebrovascular disease, cardiovascular disease, and cancer. Through the development of helical CT scanners and multislice CT scanners, Toshiba Medical Systems Corporation has developed the Aquilion ONE, a CT scanner with a scanning range of up to 160 mm per rotation that can obtain three-dimensional (3D) images of the brain, heart, and other organs in a single rotation. We have now developed the Aquilion ONE / ViSION Edition, a next-generation 320-row multislice CT scanner incorporating the latest technologies that achieves a shorter scanning time and significant reduction in dose compared with conventional products. This product with its low-dose scanning technology will contribute to the practical realization of new diagnosis and treatment modalities employing four-dimensional (4D) data based on 3D dynamic observations through continuous rotations. (author)

  5. Development of an auto-welding system for CRD nozzle repair welds using a 3D laser vision sensor

    International Nuclear Information System (INIS)

    Park, K.; Kim, Y.; Byeon, J.; Sung, K.; Yeom, C.; Rhee, S.

    2007-01-01

    A control rod device (CRD) nozzle attaches to the hemispherical surface of a reactor head with J-groove welding. Primary water stress corrosion cracking (PWSCC) causes degradation in these welds, which requires that these defect areas be repaired. To perform this repair welding automatically on a complicated weld groove shape, an auto-welding system was developed incorporating a laser vision sensor that measures the 3-dimensional (3D) shape of the groove and a weld-path creation program that calculates the weld-path parameters. Welding trials with a J-groove workpiece were performed to establish a basis for developing this auto-welding system. Because the reactor head is placed on a lay down support, the outer-most region of the CRD nozzle has restricted access. Due to this tight space, several parameters of the design, such as size, weight and movement of the auto-welding system, had to be carefully considered. The cross section of the J-groove weld is basically an oval shape where the included angle of the J-groove ranges from 0 to 57 degrees. To measure the complex shape, we used double lasers coupled to a single charge coupled device (CCD) camera. We then developed a program to generate the weld-path parameters using the measured 3D shape as a basis. The program has the ability to determine the first and final welding positions and to calculate all weld-path parameters. An optimized image-processing algorithm was applied to resolve noise interference and diffused reflection of the joint surfaces. The auto-welding system is composed of a 4-axis manipulator, gas tungsten arc welding (GTAW) power supply, an optimized designed and manufactured GTAW torch and a 3D laser vision sensor. Through welding trials with 0 and 38-degree included-angle workpieces with both J-groove and U-groove weld, the performance of this auto-welding system was qualified for field application

  6. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  7. Estimated Prevalence of Monocular Blindness and Monocular ...

    African Journals Online (AJOL)

    with MB/MSVI; among the 109 (51%) children with MB/MSVI that had a known etiology, trauma.
    Table 1: Major anatomical site of monocular blindness and monocular severe visual impairment in children
        Anatomical cause    Total (%)
        Corneal scar        89 (42)
        Whole globe         43 (20)
        Lens                42 (19)
        Amblyopia           16 (8)
        Retina               9 (4)

  8. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  9. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism are designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measuring position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCD serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
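
    The kinematics in the record above are derived with the Denavit-Hartenberg (D-H) convention. As a reminder of how a D-H table maps to an end-effector pose, here is a generic serial-chain forward-kinematics sketch; the three-row table in the usage lines holds placeholder values and does not describe the actual parallel mechanism, whose closed kinematic chains require the dedicated inverse/forward solution discussed in the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint D-H transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder D-H table (theta, d, a, alpha) with illustrative values only.
pose = forward_kinematics([(0.1, 0.3, 0.00, np.pi / 2),
                           (0.2, 0.0, 0.25, 0.0),
                           (0.0, 0.0, 0.20, 0.0)])
print(pose[:3, 3])   # end-effector position
```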

  10. Brightness, hue, and saturation in photopic vision: a result of luminance and wavelength in the cellular phase-grating optical 3D chip of the inverted retina

    Science.gov (United States)

    Lauinger, Norbert

    1994-10-01

    In photopic vision, two physical variables (luminance and wavelength) are transformed into three psychological variables (brightness, hue, and saturation). Following on from 3D grating optical explanations of aperture effects (Stiles-Crawford effects SCE I and II), all three variables can be explained via a single 3D chip effect. The 3D grating optical calculations are carried out using the classical von Laue equation and demonstrated using the example of two experimentally confirmed observations in human vision: saturation effects for monochromatic test lights between 485 and 510 nm in the SCE II and the fact that many test lights reverse their hue shift in the SCE II when changing from moderate to high luminances compared with that on changing from low to medium luminances. At the same time, information is obtained on the transition from the trichromatic color system in the retina to the opponent color system.
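
    For reference, the classical von Laue condition mentioned in the record can be written in its standard textbook form for a 3D grating with lattice vectors a, b and c; this is the general diffraction condition (with incident and diffracted wave vectors of magnitude 2π/λ), not the author's specific retinal parameterisation.

```latex
% Laue conditions: the scattering vector k - k_0 must satisfy all three
% equations simultaneously for constructive interference (h, k, l integers).
\begin{aligned}
\mathbf{a}\cdot(\mathbf{k}-\mathbf{k}_0) &= 2\pi h,\\
\mathbf{b}\cdot(\mathbf{k}-\mathbf{k}_0) &= 2\pi k,\\
\mathbf{c}\cdot(\mathbf{k}-\mathbf{k}_0) &= 2\pi l.
\end{aligned}
```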

  11. Visual Suppression of Monocularly Presented Symbology Against a Fused Background in a Simulation and Training Environment

    National Research Council Canada - National Science Library

    Winterbottom, Marc D; Patterson, Robert; Pierce, Byron J; Taylor, Amanda

    2006-01-01

    .... This may create interocular differences in image characteristics that could disrupt binocular vision by provoking visual suppression, thus reducing visibility of the background scene, monocular symbology...

  12. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper, and a prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, a typical speed of movement of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 N. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvements that such technology could bring to machine vision and robot guidance industrial applications.

  13. 3D vision improves outcomes in early cervical cancer treated with laparoscopic type B radical hysterectomy and pelvic lymphadenectomy.

    Science.gov (United States)

    Raspagliesi, Francesco; Bogani, Giorgio; Martinelli, Fabio; Signorelli, Mauro; Scaffa, Cono; Sabatucci, Ilaria; Lorusso, Domenica; Ditto, Antonino

    2017-01-21

    To evaluate the alterations on surgical outcomes after of the implementation of 3D laparoscopic technology for the surgical treatment of early-stage cervical carcinoma. Data of patients undergoing type B radical hysterectomy (with or without bilateral salpingo-oophorectomy) and pelvic lymphadenectomy via 3D laparoscopy were compared with a historical cohort of patients undergoing type B radical hysterectomy via conventional laparoscopy. Complications (within 60 days) were graded per the Accordion severity system. Data of 75 patients were studied: 15 (20%) and 60 (80%) patients undergoing surgery via 3D laparoscopy and conventional laparoscopy, respectively. Baseline patient characteristics as well as pathologic findings were similar between groups (p>0.1). Patients undergoing 3D laparoscopy experienced a trend toward shorter operative time than patients undergoing conventional laparoscopy (176.7 ± 74.6 vs 215.9 ± 61.6 minutes; p = 0.09). Similarly, patients undergoing 3D laparoscopic radical hysterectomy experienced shorter length of hospital stay (2 days, range 2-6, vs 4 days, range 3-11; p<0.001) in comparison to patients in the control group, while no difference in estimated blood loss was observed (p = 0.88). No between-group difference in complication rate was observed. 3D technology is a safe and effective way to perform type B radical hysterectomy and pelvic node dissection in early-stage cervical cancer. Further large prospective studies are warranted in order to assess the cost-effectiveness of the introduction of 3D technology in comparison to robotic assisted surgery.

  14. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  15. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    Directory of Open Access Journals (Sweden)

    Il Jae Lee

    2009-09-01

    Full Text Available In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both lug and plate, and the projected lines are detected by the camera. For robust detection of the projected lines against the illumination change, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out by two stages: the top view alignment and the side view alignment. The top view alignment is to detect the coarse lug pose relatively far from the lug, and the side view alignment is to detect the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. By this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.
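
    The line-extraction stage described above (vertical threshold, thinning, Hough transform) can be prototyped in a few lines with OpenCV. The sketch below is such a prototype under assumed parameter values; it uses the probabilistic Hough transform rather than the paper's separated Hough transform, and the thinning step relies on the opencv-contrib ximgproc module when it is available.

```python
import cv2
import numpy as np

def detect_laser_lines(gray, thresh=200):
    """Threshold a grayscale camera frame and extract straight line segments
    with the probabilistic Hough transform."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Thin bright blobs down to ~1-pixel ridges before line extraction.
    skeleton = cv2.ximgproc.thinning(binary) if hasattr(cv2, "ximgproc") else binary
    lines = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=40, maxLineGap=5)
    return [] if lines is None else lines.reshape(-1, 4)
```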

  16. Interlopers 3D: experiences designing a stereoscopic game

    Science.gov (United States)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display. To implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible or at least very difficult to play in non-stereoscopic mode. Method A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results The results show that in both the basic and advanced game participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that by disrupting the depth from motion cue the game became more difficult in monoscopic 2D. Results also show a certain amount of learning taking place over the course of the experiment, meaning that players were able to score higher and finish the game faster over the course of the experiment. Conclusions Although the game was not impossible to play in monoscopic 2D, participants results show that it put them at a significant disadvantage when compared to playing in stereoscopic 3D.

  17. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    Science.gov (United States)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  18. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

    Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group to achieve better performance in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helps the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision makes laparoscopic performance easier for participants without laparoscopic experience, unlike those with experience in laparoscopic procedures. © The Author(s) 2015.

  19. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
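
    The record trains a deep convolutional network on RGB patches reinforced with sparse depth to output a dense depth estimate. The PyTorch sketch below only illustrates that input arrangement (RGB, sparse depth and a validity mask stacked as channels) with a toy fully-convolutional model, an L1 loss and random tensors standing in for real KITTI data; it is not the architecture or training setup used by the authors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDepthCompletion(nn.Module):
    """Toy fully-convolutional net: input is RGB (3) + sparse depth (1) +
    validity mask (1) stacked as 5 channels; output is a dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth, mask):
        x = torch.cat([rgb, sparse_depth, mask], dim=1)
        return self.net(x)

# One hypothetical training step on synthetic tensors.
model = TinyDepthCompletion()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
rgb = torch.rand(2, 3, 64, 64)
gt = torch.rand(2, 1, 64, 64)                       # synthetic dense ground truth
mask = (torch.rand(2, 1, 64, 64) < 0.05).float()    # ~5% sparse LiDAR hits
pred = model(rgb, gt * mask, mask)
loss = F.l1_loss(pred, gt)                          # L1 loss against dense target
opt.zero_grad()
loss.backward()
opt.step()
```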

  20. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)

    What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ...

  1. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    Science.gov (United States)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  2. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.
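
    In the tracker above, the particle filter proposes several hundred candidate 6-DOF poses per frame, each of which is rendered and scored against the image on the GPU. The CPU-side skeleton below shows the predict-update-resample cycle around that scoring step; observe_cost stands in for the CUDA render-and-compare kernel, and everything here is an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def particle_filter_step(particles, observe_cost, motion_noise, rng):
    """One predict-update-resample cycle over 6-DOF pose particles.
    `particles` is (N, 6): translation (x, y, z) plus rotation (roll, pitch, yaw).
    `observe_cost(pose)` stands in for rendering the 3-D model at `pose` and
    comparing it against the current video frame (the GPU-parallel part)."""
    n = len(particles)
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = particles + rng.normal(scale=motion_noise, size=particles.shape)
    # Update: lower rendering cost -> higher weight.
    costs = np.array([observe_cost(p) for p in particles])
    weights = np.exp(-(costs - costs.min()))     # shift for numerical stability
    weights /= weights.sum()
    # Resample proportionally to weight.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```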

  3. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  4. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT. Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  5. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    Science.gov (United States)

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system is instrumental in researching various... series of successive states until a given name is reached, such as: Object, Animate, Animal, Mammal, Dog, Labrador, Chocolate (Brown), Male, Name... are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust

  6. IMPLEMENTATION OF 3D TOOLS AND IMMERSIVE EXPERIENCE INTERACTION FOR SUPPORTING LEARNING IN A LIBRARY-ARCHIVE ENVIRONMENT. VISIONS AND CHALLENGES

    Directory of Open Access Journals (Sweden)

    A. Angeletaki

    2013-07-01

    Full Text Available In this paper we present an experimental environment of 3D books combined with a game application that has been developed by a collaboration project between the Norwegian University of Science and Technology in Trondheim, Norway the NTNU University Library, and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academy partners aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor to present manuscripts content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototypes of books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether the immersion in such environments can create wider engagement and support learning. The metaphor of 3D books and game designs in a combination allows the digital books to be handled through a tactile experience and substitute the physical browsing. In this paper we present some preliminary results about the enrichment of the user experience in such environment.

  7. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    Science.gov (United States)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application that has been developed by a collaboration project between the Norwegian University of Science and Technology in Trondheim, Norway the NTNU University Library, and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academy partners aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor to present manuscripts content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototypes of books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether the immersion in such environments can create wider engagement and support learning. The metaphor of 3D books and game designs in a combination allows the digital books to be handled through a tactile experience and substitute the physical browsing. In this paper we present some preliminary results about the enrichment of the user experience in such environment.

  8. Title: Vision of the Reconstruction of Destructed Monuments of Palmyra (3D) as a Step to Rehabiliate and Preserve the Wholesite

    Science.gov (United States)

    Arkawi, A.

    2017-08-01

    Syria holds one of the world's most impressive cultural heritages in terms of the number and historical significance of its monuments. Palmyra lies in the heart of Syria, an oasis in the midst of the arid desert, and can be considered part of this human heritage. In 1980 it was registered on the world and national heritage lists for its great historical importance, and it has been the focus of many studies and research in the field of restoration. Then disaster struck and many monuments were demolished: the temple of Ba'al, the temple of Bael-shameen, the Arch of Triumph and the Castle, and lately the Tetrapylon and the stage of the theatre. Every Syrian was hurt; the whole world was hurt. The destruction of the city left its people homeless, and Palmyra was no longer the oasis we knew. We felt pain, so we wanted to make a move, a step forward, and present a work that expresses our love for Palmyra: we organized the Palmyra workshop to provide a vision for the reconstruction and revival of this important historic site, using new ideas and new technology. Palmyra's historical areas can be considered a large open-air museum of heritage through history, which is the reason to treat them as a protected historical precinct and to offer visions, ideas and suggestions for the future of Palmyra as a first step towards preserving the historical buildings and the archaeological park.

  9. TITLE: VISION OF THE RECONSTRUCTION OF DESTRUCTED MONUMENTS OF PALMYRA (3D AS A STEP TO REHABILIATE AND PRESERVE THE WHOLESITE

    Directory of Open Access Journals (Sweden)

    A. Arkawi

    2017-08-01

    Full Text Available Syria is home to one of the world’s most impressive cultural heritages in terms of the number and historical significance of its monuments. Palmyra lies in the heart of Syria, an oasis in the midst of the arid desert, and can be considered part of this human heritage. In 1980 it was registered on the world and national heritage lists for its huge historical importance, and it has been the focus of many studies and research efforts in the field of restoration. Then the disaster happened: many monuments were demolished, including the temple of Ba’al, the temple of Bael-shameen, the Arch of Triumph and the Castle, and lately the Tetrapylon and the Stag. Every Syrian was hurt; the whole world was hurt. The destruction of the city caused its people to become homeless, and Palmyra was no longer the oasis we knew. We felt pain, so we wanted to make a move, a step forward, to present a work that expresses our love for Palmyra. We organized the Palmyra workshop to provide a vision for the reconstruction and revival of the historic site’s importance, using new ideas and new technologies. Palmyra’s historical areas are considered a large open museum of heritage through history, which is the reason to treat these areas as a historical protection precinct and to offer a vision, ideas and suggestions for the future of Palmyra as a first step towards preserving the historical buildings and the archaeological park.

  10. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity, that is numerically inaccurate, but results in a very similar overall depth impression with plausible overall layout, sharp edges, fine details and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study and validate our notion of perceptually plausible disparity.
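
    The constant-time, per-pixel fusion the authors describe rests on the fact that combining Gaussian estimates is a closed-form, precision-weighted average. The following sketch illustrates only that per-pixel step (not the full spatio-temporal CRF); array shapes, variances and data are illustrative assumptions.

```python
import numpy as np

def fuse_gaussian_maps(mu_cue, var_cue, mu_prior, var_prior):
    """Per-pixel fusion of two Gaussian disparity estimates.

    Each input is an HxW array. The fused mean is the precision-weighted
    average of the two means; the fused variance is the inverse of the
    summed precisions. This is constant-time, parallel per-pixel work.
    """
    prec_cue = 1.0 / var_cue
    prec_prior = 1.0 / var_prior
    var_fused = 1.0 / (prec_cue + prec_prior)
    mu_fused = var_fused * (prec_cue * mu_cue + prec_prior * mu_prior)
    return mu_fused, var_fused

if __name__ == "__main__":
    h, w = 90, 160                                   # illustrative resolution
    rng = np.random.default_rng(0)
    mu_cue = rng.normal(0.0, 1.0, (h, w))            # noisy monocular cue
    var_cue = np.full((h, w), 4.0)                   # low confidence
    mu_prior = np.zeros((h, w))                      # class-specific prior map
    var_prior = np.full((h, w), 1.0)                 # higher confidence
    mu, var = fuse_gaussian_maps(mu_cue, var_cue, mu_prior, var_prior)
    print(mu.shape, float(var[0, 0]))                # (90, 160) 0.8
```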

  11. Anisometropia and ptosis in patients with monocular elevation deficiency

    International Nuclear Information System (INIS)

    Zafar, S.N.; Islam, F.; Khan, A.M.

    2016-01-01

    Objective: To determine the effect of ptosis on the refractive error in eyes having monocular elevation deficiency. Place and Duration of Study: Al-Shifa Trust Eye Hospital, Rawalpindi, from January 2011 to January 2014. Methodology: Visual acuity, refraction, orthoptic assessment and ptosis evaluation of all patients having monocular elevation deficiency (MED) were recorded. The Shapiro-Wilk test was used to test for normality. Median and interquartile range (IQR) were calculated for the data. Non-parametric variables were compared using the Wilcoxon signed ranks test. P-values of <0.05 were considered significant. Results: A total of 41 MED patients were assessed during the study period. Best corrected visual acuity (BCVA) and refractive error were compared between the eyes having MED and the unaffected eyes of the same patients. The refractive status of patients having ptosis with MED was also compared with that of patients having MED without ptosis. Astigmatic correction and vision differed significantly between the two eyes of the patients. Vision was significantly different between the two eyes of patients in both groups, i.e. with either presence or absence of ptosis (p=0.04 and p < 0.001, respectively). Conclusion: A significant difference in vision and anisoastigmatism was noted between the two eyes of patients with MED in this study. The presence or absence of ptosis affected vision but did not have a significant effect on the spherical equivalent (SE) and astigmatic correction between the two eyes. (author)

  12. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2016-01-01

    Monocular vision is increasingly used in Micro Air Vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicles’ movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  13. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2017-01-01

    Monocular vision is increasingly used in micro air vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicle movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  14. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    Science.gov (United States)

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines including the biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However, SEM micrographs remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, which allows for quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purpose. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  16. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications, such as automatic video surveillance, content-based video search and human-robot interaction. In order to cope with the challenges that this large variety of applications presents, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance-discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated from associating pose descriptors with their position in time relative to the end of an action sequence. Training data consists of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
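
    To make the one-dimensional temporal voting concrete, the sketch below matches each frame's pose descriptor to a codebook of prototypical poses and casts votes for the time at which the action ends, as the abstract describes. The descriptor dimensionality, codebook size, vote offsets and data are invented for illustration and are not taken from the paper.

```python
import numpy as np

def nearest_codeword(descriptor, codebook):
    """Index of the prototypical pose closest to the descriptor."""
    return int(np.argmin(np.linalg.norm(codebook - descriptor, axis=1)))

def cast_votes(sequence, codebook, offsets, space_len):
    """Accumulate votes for 'action ends at frame t' in a 1-D voting space.

    `offsets[k]` holds the training offsets (frames until the action end)
    observed for codeword k; each matched frame votes at t + offset.
    """
    votes = np.zeros(space_len)
    for t, descriptor in enumerate(sequence):
        k = nearest_codeword(descriptor, codebook)
        for off in offsets[k]:
            end = t + off
            if 0 <= end < space_len:
                votes[end] += 1.0
    return votes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    codebook = rng.normal(size=(8, 12))       # 8 prototypical poses, 12-D descriptors
    offsets = {k: [5, 6] for k in range(8)}   # toy training offsets per codeword
    sequence = rng.normal(size=(30, 12))      # 30 frames of pose descriptors
    votes = cast_votes(sequence, codebook, offsets, space_len=40)
    print("most likely action end frame:", int(np.argmax(votes)))
```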

  17. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG)Covers 3D anim

  18. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters delineate on motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straight-forward approach to

  19. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  20. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... function in children, nor are there persuasive, conclusive theories on how 3-D digital products could cause damage in children with healthy eyes. The development of normal 3-D vision in children is ...

  1. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... techniques used to create the 3-D effect can confuse or overload the brain, causing some people ... images. That does not mean that vision disorders can be caused by 3-D digital products. However, ...

  2. Flash 3D Rendezvous and Docking Sensor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — 3D Flash Ladar is a breakthrough technology for many emerging and existing 3D vision areas, and sensor improvements will have an impact on nearly all these fields....

  3. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... function in children, nor are there persuasive, conclusive theories on how 3-D digital products could cause ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  4. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  5. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

    At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, these systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance that perceives moving objects in the scene, such as aircraft, vehicles and personnel, and their locations. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. The technique not only provides the ATC with a clear view of object activities, but also provides image recognition and positioning of moving targets in the area. Thereby it can improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyses the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique that can cover the blind-spot areas of surface surveillance radar monitoring and positioning systems.

  6. Perspectives on Materials Science in 3D

    DEFF Research Database (Denmark)

    Juul Jensen, Dorte

    2012-01-01

    Materials characterization in 3D has opened a new era in materials science, which is discussed in this paper. The original motivations and visions behind the development of one of the new 3D techniques, namely the three dimensional x-ray diffraction (3DXRD) method, are presented and the route to its implementation is described. The present status of materials science in 3D is illustrated by examples related to recrystallization. Finally, challenges and suggestions for the future success for 3D Materials Science relating to hardware evolution, data analysis, data exchange and modeling...

  7. Open 3D Projects

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2010-01-01

    Full Text Available Many professionals and 3D artists consider Blender as being the best open source solution for 3D computer graphics. The main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles and realtime 3D/game creation.

  8. Refined 3d-3d correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; Loon, Mark van [Mathematical Institute, University of Oxford, Andrew Wiles Building,Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2017-04-28

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N=2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N=2 theories constructed from boundary conditions and interfaces in a 4d N=2* theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-’t Hooft loops in the 4d N=2* theory. In the presence of a mass parameter for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  9. A 3d-3d appetizer

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Du; Ye, Ke [Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA, 91125 (United States)

    2016-11-02

    We test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 “Lens space theory” T[L(p,1)] and the partition function of complex Chern-Simons theory on L(p,1). In particular, for p=1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p,1)] becomes a constant independent of p. In addition, we study T[L(p,1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  10. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. It includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
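
    The abstract states that the micro-controller recovers a marker's 3D position from only the measured IR intensity and the 2D spot position on the PSD. One plausible way to do this, assuming an inverse-square intensity falloff and a pinhole projection model (assumptions of this sketch, not details given in the paper), is shown below with placeholder calibration constants.

```python
import numpy as np

def marker_position(u, v, intensity, i_ref, r_ref, focal_px, cx, cy):
    """Estimate a marker's 3D position from one PSD measurement.

    Assumptions (illustrative, not from the paper):
      * intensity ~ i_ref * (r_ref / r)**2, so r = r_ref * sqrt(i_ref / intensity)
      * pinhole projection with focal length `focal_px` and principal
        point (cx, cy), so the viewing ray is ((u-cx)/f, (v-cy)/f, 1).
    """
    r = r_ref * np.sqrt(i_ref / intensity)             # range along the viewing ray
    ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
    ray /= np.linalg.norm(ray)
    return r * ray                                      # 3D point in the camera frame

if __name__ == "__main__":
    # Calibration constants below are placeholders for a one-time factory calibration.
    p = marker_position(u=350.0, v=220.0, intensity=0.25,
                        i_ref=1.0, r_ref=1.0, focal_px=400.0, cx=320.0, cy=240.0)
    print(np.round(p, 3))
```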

  11. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in the 3D reconstruction of NMR images scanned by a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method built on the Vision Assistant program, which is part of LabVIEW, was chosen.
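
    Since the reconstruction is based on the marching cubes algorithm, the core step can be sketched outside the LabVIEW/Vision Assistant toolchain in a few lines of Python, assuming scikit-image is available; the synthetic sphere volume below merely stands in for a stack of NMR slices.

```python
import numpy as np
from skimage import measure   # assumes scikit-image is installed

# Synthetic volume standing in for a stack of NMR slices: a filled sphere.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Marching cubes extracts a triangle mesh of the iso-surface at level 0.5.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```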

  12. 3D virtuel udstilling

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the courtyard of the School of Architecture with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp.

  13. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
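
    The similarity metric proposed in this paper is built on point-to-line-segment distances between image evidence and projected model edges. The geometric primitive itself is standard and is sketched below; the evaluation scheme built on top of it in the paper is not reproduced.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Euclidean distance from 2-D point p to the segment a-b.

    Projects p onto the infinite line through a and b, clamps the
    projection parameter to [0, 1] so the closest point stays on the
    segment, then measures the residual distance.
    """
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:                      # degenerate segment: a == b
        return float(np.linalg.norm(p - a))
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    closest = a + t * ab
    return float(np.linalg.norm(p - closest))

if __name__ == "__main__":
    # Distance from an image point to one projected model edge.
    print(point_to_segment_distance([3.0, 4.0], [0.0, 0.0], [10.0, 0.0]))   # 4.0
    print(point_to_segment_distance([-3.0, 4.0], [0.0, 0.0], [10.0, 0.0]))  # 5.0
```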

  14. Underwater 3D filming

    Directory of Open Access Journals (Sweden)

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable success at the box office with 3D movies due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm look perfect for reproduction in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle: until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool for filming underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  15. Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids, and irises

    NARCIS (Netherlands)

    Orozco, Javier; Rudovic, Ognjen; Gonzalez Garcia, Jordi; Pantic, Maja

    In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can

  16. 3D Graphics with Spreadsheets

    Directory of Open Access Journals (Sweden)

    Jan Benacka

    2009-06-01

    Full Text Available In the article, the formulas for orthographic parallel projection of 3D bodies on a computer screen are derived using secondary school vector algebra. The spreadsheet implementation is demonstrated in six applications that project bodies of increasing intricacy – a convex body (cube) with non-solved visibility, convex bodies (cube, chapel) with solved visibility, a coloured convex body (chapel) with solved visibility, and a coloured non-convex body (church) with solved visibility. The projections are revolvable in the horizontal and vertical planes, and they are changeable in size. The examples show an unusual way of using spreadsheets as a 3D computer graphics tool. The applications can serve as a simple introduction to the general principles of computer graphics, to graphics with spreadsheets, and as a tool for exercising stereoscopic vision. The presented approach is usable for visualising 3D scenes within some topics of secondary school curricula, such as solid geometry (angles and distances of lines and planes within simple bodies) or analytic geometry in space (angles and distances of lines and planes in E3), and even at university level within calculus for visualising graphs of z = f(x,y) functions. Examples are pictured.
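
    The projection formulas derived in the article amount to rotating the body about the vertical and horizontal axes and dropping the depth coordinate. The spreadsheet formulation is the article's; the sketch below restates the same kind of orthographic parallel projection in Python so the geometry is explicit (vertex data, angles and scale are illustrative).

```python
import numpy as np

def orthographic_project(points, yaw_deg, pitch_deg, scale=1.0):
    """Orthographic parallel projection of 3-D points onto the screen.

    Rotate about the vertical axis (yaw) and the horizontal axis (pitch),
    then keep only the x and y coordinates; the depth coordinate is
    discarded, which is exactly what a parallel projection does.
    """
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    r_yaw = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                      [0.0,          1.0, 0.0],
                      [-np.sin(yaw), 0.0, np.cos(yaw)]])
    r_pitch = np.array([[1.0, 0.0,            0.0],
                        [0.0, np.cos(pitch), -np.sin(pitch)],
                        [0.0, np.sin(pitch),  np.cos(pitch)]])
    rotated = points @ (r_pitch @ r_yaw).T
    return scale * rotated[:, :2]          # screen (x, y) per vertex

if __name__ == "__main__":
    # Unit cube vertices, revolved 30 deg horizontally and 20 deg vertically.
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    print(np.round(orthographic_project(cube, yaw_deg=30, pitch_deg=20, scale=100), 1))
```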

  17. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 3-D effect can confuse or overload the brain, causing some people discomfort even if they have normal vision. Taking a break from viewing usually relieves the discomfort. More on computer use and your eyes . Children and 3-D Technology Following the lead of Nintendo, several 3-D ...

  18. Underwater 3D filming

    OpenAIRE

    Rinaldi, Roberto

    2014-01-01

    After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable success at the box office with 3D movies due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm look perfect for reproduction in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Unde...

  19. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at the professionals that already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it's also aimed at the intermediate Blender users who simply want to go some steps further.It's taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and also that of basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  20. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  1. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, Jan J.; Albertazzi, Liliana; van Doorn, Andrea J.; van Ee, Raymond; van de Grind, Wim A.; Kappers, Astrid M L; Lappin, Joe S.; Farley Norman, J.; (Stijn) Oomes, A. H J; te Pas, Susan P.; Phillips, Flip; Pont, Sylvia C.; Richards, Whitman A.; Todd, James T.; Verstraten, Frans A J; de Vries, Sjoerd

    The issue of the existence of planes-understood as the carriers of a nexus of straight lines-in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  2. DELTA 3D PRINTER

    Directory of Open Access Journals (Sweden)

    ȘOVĂILĂ Florin

    2016-07-01

    Full Text Available 3D printing is a widely used process in industry, the generic name being “rapid prototyping”. The essential advantage of a 3D printer is that it allows designers to produce a prototype in a very short time, which is then tested and quickly remodeled, considerably reducing the time required to get from the prototype phase to the final product. At the same time, this technique can produce components with very precise forms, complex pieces that classical methods could have accomplished only with a large amount of time. This paper presents the stages of executing a 3D model, as well as the physical realization of a Delta 3D printer based on that model.

  3. Professional Papervision3D

    CERN Document Server

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  4. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery), or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  5. 3D geometric phase analysis and its application in 3D microscopic morphology measurement

    Science.gov (United States)

    Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin

    2018-04-01

    Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.

  6. Wearable 3D measurement

    Science.gov (United States)

    Manabe, Yoshitsugu; Imura, Masataka; Tsuchiya, Masanobu; Yasumuro, Yoshihiro; Chihara, Kunihiro

    2003-01-01

    Wearable 3D measurement enables the acquisition of 3D information about an object or an environment using a wearable computer. Recently, in Japan we have become able to send voice and sound as well as pictures by mobile phone, and it is becoming easy to capture and send short movie clips with such phones. At the same time, computers are becoming compact and high-performance, and they can easily connect to the Internet by wireless LAN. In the near future, we will be able to use wearable computers always and everywhere, and so we will be able to send three-dimensional data measured by a wearable computer as the next new kind of data. This paper proposes a method and system for measuring the three-dimensional data of an object using a wearable computer. The method uses slit light projection for 3D measurement and the user's motion instead of a scanning system.

  7. 3D Digital Modelling

    DEFF Research Database (Denmark)

    Hundebøl, Jesper

    wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions. Based...... on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to - Illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3...... important to appreciate the analysis. Before turning to the presentation of preliminary findings and a discussion of 3D digital modelling, it begins, however, with an outline of industry-specific ICT strategic issues. Paper type. Multi-site field study...

  8. Fiducial-based monocular 3D displacement measurement of breakwater armour unit models.

    CSIR Research Space (South Africa)

    Vieira, R

    2008-11-01

    Full Text Available This paper presents a fiducial-based approach to monitoring the movement of breakwater armour units in a model hall environment. Target symbols with known dimensions are attached to the physical models, allowing the recovery of three...

  9. 3D ARCHITECTURAL VIDEOMAPPING

    Directory of Open Access Journals (Sweden)

    R. Catanese

    2013-07-01

    Full Text Available 3D architectural mapping is a video projection technique performed after a survey of a chosen building in order to achieve a perfect correspondence between its shapes and the projected images. As a performative kind of audiovisual artifact, the real event of a 3D mapping is the combination of a registered video animation file with a real architecture. This new kind of visual art is becoming very popular, and its success with large audiences testifies to new expressive possibilities in the field of urban design. The case study presented here was realized in Pisa for the Luminara feast in 2012.

  10. Interaktiv 3D design

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the goal of enabling a multitude of plan options and a multitude of facade and room configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to combine and test an endless range of building types, as the system was conceived and developed to allow.

  11. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article ...... Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010....

  12. Herramientas SIG 3D

    Directory of Open Access Journals (Sweden)

    Francisco R. Feito Higueruela

    2010-04-01

    Full Text Available Applications of Geographical Information Systems in several fields of archaeology have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for such systems, the GIS Tetrahedron, in which we define the fundamental elements of GIS in order to provide a better understanding of their capabilities. At the same time, the basic 3D characteristics of some commercial and open source software are described, as well as their application to some examples of archaeological research

  13. Bootstrapping 3D fermions

    Energy Technology Data Exchange (ETDEWEB)

    Iliesiu, Luca [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Kos, Filip; Poland, David [Department of Physics, Yale University, New Haven, CT 06520 (United States); Pufu, Silviu S. [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Simmons-Duffin, David [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Yacoby, Ran [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States)

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions 〈ψψψψ〉 in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ×ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  14. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
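
    Of the two approaches named, stereo vision is the easier to condense: with a rectified image pair and a disparity map in hand, depth follows from Z = f·B/d. The sketch below shows only that conversion and is not tied to the system described in the abstract; the focal length, baseline and disparities are placeholders.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, min_disparity=0.5):
    """Convert a disparity map (pixels) to a depth map (metres).

    Uses Z = f * B / d for a rectified stereo pair; disparities below
    `min_disparity` are treated as invalid and mapped to NaN.
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.nan)
    valid = d >= min_disparity
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

if __name__ == "__main__":
    disparity = np.array([[64.0, 32.0], [8.0, 0.0]])    # toy 2x2 disparity map
    print(disparity_to_depth(disparity, focal_px=800.0, baseline_m=0.12))
    # 800 * 0.12 / 64 = 1.5 m, / 32 = 3.0 m, / 8 = 12.0 m, and NaN where d = 0.
```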

  15. Shaping 3-D boxes

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data...

  16. 3D Wire 2015

    DEFF Research Database (Denmark)

    Jordi, Moréton; F, Escribano; J. L., Farias

    This document is a general report on the implementation of gamification in 3D Wire 2015 event. As the second gamification experience in this event, we have delved deeply in the previous objectives (attracting public areas less frequented exhibition in previous years and enhance networking) and have...

  17. 3D Harmonic Echocardiography:

    NARCIS (Netherlands)

    M.M. Voormolen (Marco)

    2007-01-01

    Three-dimensional (3D) echocardiography has recently developed from an experimental technique in the 1990s into an imaging modality for daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique

  18. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. On so-called paradoxical monocular stereoscopy.

    Science.gov (United States)

    Koenderink, J J; van Doorn, A J; Kappers, A M

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

  20. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  1. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... Patients and Public Technicians and Nurses Senior Ophthalmologists Young ... can be caused by 3-D digital products. However, children (or adults) who have these vision disorders may be more ...

  2. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... viewer has a problem with focusing or depth perception. Also, the techniques used to create the 3- ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  3. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 3-D digital images. Find an Ophthalmologist Advanced Search Ask an Ophthalmologist Browse Answers Free Newsletter Get ophthalmologist-reviewed tips and information about eye health and preserving your vision. Privacy ...

  4. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... normal 3-D vision in children is stimulated as they use their eyes in day-to-day ... years. However, children who have eye conditions such as amblyopia (an imbalance in visual strength between the ...

  5. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... issued warnings about children's use of their new products. The original Nintendo warning, in late 2010, urged ... see the images when using 3-D digital products, this may indicate a vision or eye disorder. ...

  6. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... 6 years from prolonged viewing of the device's digital images, in order to avoid possible damage to ... clearly see the images when using 3-D digital products, this may indicate a vision or eye ...

  7. 3D Surgical Simulation

    OpenAIRE

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2010-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive ...

  8. Tangible 3D Modelling

    DEFF Research Database (Denmark)

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through...... facilitated discussions during the course as well as through a survey distributed to the participating students. The analysis of the experiences shows a mixed picture consisting of both benefits and limits to the experimental technique. A discussion about the applicability of the technique and about...

  9. Review of 3d GIS Data Fusion Methods and Progress

    Science.gov (United States)

    Hua, Wei; Hou, Miaole; Hu, Yungang

    2018-04-01

    3D data fusion is a research hotspot in the fields of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display and other processes. At present, research on 3D data fusion in the field of surveying and mapping focuses on the fusion of 3D models of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion for terrain and ground objects in recent years, classifies the data structures and methods for establishing 3D models, and analyses and comments on some of the most widely used fusion methods.

  10. REVIEW OF 3D GIS DATA FUSION METHODS AND PROGRESS

    Directory of Open Access Journals (Sweden)

    W. Hua

    2018-04-01

    Full Text Available 3D data fusion is a research hotspot in the fields of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display and other processes. At present, research on 3D data fusion in the field of surveying and mapping focuses on the fusion of 3D models of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion for terrain and ground objects in recent years, classifies the data structures and methods for establishing 3D models, and analyses and comments on some of the most widely used fusion methods.

  11. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera’s pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves range accuracy significantly.
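
    The paper's central idea, relating a camera's roll/pitch/yaw variation to the change in estimated range, can be illustrated with the simplest flat-ground monocular range model, in which a pixel row maps to an angle below the horizon and range follows from the camera height. The sketch below compensates pitch only and uses invented calibration values; it is a toy stand-in for the full pose-range variation model, not the authors' formulation.

```python
import numpy as np

def ground_range(v_px, pitch_rad, cam_height_m, focal_px, cy_px):
    """Range to a ground point seen at image row v_px (flat-ground model).

    The row offset from the principal point gives the ray's angle below the
    optical axis; adding the current pitch gives its angle below the horizon,
    and range = height / tan(angle). A pitch change caused by a shake shifts
    that angle, which is what the rectification has to compensate.
    """
    angle_below_horizon = pitch_rad + np.arctan((v_px - cy_px) / focal_px)
    if angle_below_horizon <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height_m / np.tan(angle_below_horizon)

if __name__ == "__main__":
    # Same pixel row, evaluated before and during a 2-degree pitch-down shake.
    nominal = ground_range(300.0, pitch_rad=np.radians(5.0),
                           cam_height_m=0.3, focal_px=600.0, cy_px=240.0)
    shaken = ground_range(300.0, pitch_rad=np.radians(7.0),
                          cam_height_m=0.3, focal_px=600.0, cy_px=240.0)
    print(round(nominal, 2), round(shaken, 2))  # the pitch change alone shifts the estimate
```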

  12. Why can't my child see 3D television?

    Science.gov (United States)

    Creavin, Alexandra L; Creavin, Samuel T; Brown, Raymond D; Harrad, Richard A

    2014-08-01

    A child encountering difficulty in watching three-dimensional (3D) stereoscopic displays could have an underlying ocular disorder. It is therefore valuable to understand the differential diagnoses and so conduct an appropriate clinical assessment to address concerns about poor 3D vision.

  13. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, ... damage in children with healthy eyes. The development of normal 3-D vision ... and natural environments, and this development is largely complete by age ...

  14. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... the techniques used to create the 3-D effect can confuse or overload the brain, causing some people discomfort even if they have normal vision. Taking a break from viewing usually relieves the discomfort. More on computer use and your eyes . Children and 3-D ...

  15. 3D Surgical Simulation

    Science.gov (United States)

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  16. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods in order to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...
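
    The crossover-detection idea reduces to comparing bag-of-visual-words histograms of the current view against those of previously mapped regions. The sketch below shows that comparison as cosine similarity over a fixed vocabulary; the word assignments are synthetic, and the training-free vocabulary construction described in the book is not reproduced.

```python
import numpy as np

def word_histogram(word_ids, vocab_size):
    """L2-normalised bag-of-words histogram for one image."""
    hist = np.bincount(word_ids, minlength=vocab_size).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def find_crossovers(histograms, query, threshold=0.8):
    """Indices of previously seen images whose histograms match the query."""
    scores = histograms @ query          # cosine similarity (rows are unit length)
    return np.nonzero(scores >= threshold)[0], scores

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    vocab_size = 50
    # Each past image draws its features from a distinct word range, standing in
    # for five visually distinct scene regions.
    past_ids = [rng.integers(10 * i, 10 * i + 10, 200) for i in range(5)]
    histograms = np.stack([word_histogram(ids, vocab_size) for ids in past_ids])
    query = word_histogram(past_ids[3], vocab_size)   # a revisit of region 3
    matches, scores = find_crossovers(histograms, query)
    print("crossover candidates:", matches.tolist())  # expected: [3]
```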

  17. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
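
    As a rough illustration of the decomposition described above (not the authors' code), the observed image motion field can be projected onto basis motion fields spanning the tangent space with an ordinary least-squares fit; the basis fields, flow, and dimensions below are random stand-ins.

```python
import numpy as np

# Hypothetical sketch: express an observed image motion field as a linear
# combination of basis motion fields spanning the tangent space of the pose
# manifold at the current pose. All shapes and data are illustrative.
n_pixels, n_basis = 1000, 8

# Each column is one basis motion field, with its (u, v) components stacked.
B = np.random.randn(2 * n_pixels, n_basis)

# Observed optical flow, synthesized here as a noisy combination of the basis.
true_coeffs = np.random.randn(n_basis)
flow = B @ true_coeffs + 0.01 * np.random.randn(2 * n_pixels)

# Least-squares coefficients estimate the pose change along each tangent
# direction; a pose update would be built from these coefficients.
coeffs, *_ = np.linalg.lstsq(B, flow, rcond=None)
print(np.allclose(coeffs, true_coeffs, atol=0.05))
```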

  18. 3D Data Acquisition Platform for Human Activity Understanding

    Science.gov (United States)

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross validate... multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and... applications of human activity analysis, and computational optimization of large-scale 3D data. The support for the acquisition of such research

  19. Structured Light-Based 3D Reconstruction System for Plants

    OpenAIRE

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud regi...

  20. Binocular vision in amblyopia : structure, suppression and plasticity

    OpenAIRE

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel Hart

    2014-01-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cor...

  1. Neuroimaging of amblyopia and binocular vision: a review.

    Science.gov (United States)

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  2. Neuroimaging of amblyopia and binocular vision: a review

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-08-01

    Full Text Available Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered as a monocular disorder, is now often seen as a primarily binocular disorder resulting in more and more studies examining the binocular deficits in the patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarise the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging (fMRI). The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterise the brain response changes associated with these treatments and help devise them.

  3. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  4. Combining 3D structure of real video and synthetic objects

    Science.gov (United States)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has been used in the fields previously mentioned. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in the manner that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences. The extraction of the 3D structure requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, Step (3) is easily carried out. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
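
    As a sketch of step (2) only, the height map can be triangulated with an off-the-shelf Delaunay routine and lifted to a 3D mesh; the grid size and height values below are placeholders, and texture mapping and the compositing of graphic objects are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

H, W = 64, 64
height_map = np.random.rand(H, W)      # stand-in for the estimated height map

# Triangulate the (x, y) image grid in 2D.
ys, xs = np.mgrid[0:H, 0:W]
points_2d = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
tri = Delaunay(points_2d)

# Lift each vertex to 3D using the height map; faces reuse the 2D triangulation.
vertices_3d = np.column_stack([points_2d, height_map.ravel()])
faces = tri.simplices                  # (n_triangles, 3) vertex indices
print(vertices_3d.shape, faces.shape)
```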

  5. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts at using AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibration, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations
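
    A minimal sketch of the dense-surface step, assuming the sparse SLAM map points are already available as an N x 3 array: normals are estimated and a watertight surface is fitted with Open3D's Poisson reconstruction. This is not the authors' pipeline (their framework also applies Moving Least Squares smoothing, available in libraries such as PCL), and the point data here is a random stand-in.

```python
import numpy as np
import open3d as o3d

# Stand-in for the unorganized sparse point cloud produced by the SLAM front end.
points = np.random.rand(2000, 3)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Poisson reconstruction needs oriented normals; estimate them from local neighbourhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Fit a smooth, watertight surface to the sparse, noisy points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Trim poorly supported regions before using the surface for AR measurements or overlays.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
```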

  6. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, they can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of agents are unknown. In this paper, we propose a system having multiple monocular agents, with unknown relative starting positions, which generates a semidense global map of the environment.

  7. Mobile 3D tomograph

    International Nuclear Information System (INIS)

    Illerhaus, Bernhard; Goebbels, Juergen; Onel, Yener; Sauerwein, Christoph

    2008-01-01

    Mobile tomographs often have the problem that high spatial resolution is impossible owing to the position or setup of the tomograph. While the tree tomograph developed by Messrs. Isotopenforschung Dr. Sauerwein GmbH worked well in practice, it is no longer used as the spatial resolution and measuring time are insufficient for many modern applications. The paper shows that the mechanical base of the method is sufficient for 3D CT measurements with modern detectors and X-ray tubes. CT measurements with very good statistics take less than 10 min. This means that mobile systems can be used, e.g. in examinations of non-transportable cultural objects or monuments. Enhancement of the spatial resolution of mobile tomographs capable of measuring in any position is made difficult by the fact that the tomograph has moving parts and will therefore have weight shifts. With the aid of tomographies whose spatial resolution is far higher than the mechanical accuracy, a correction method is presented for direct integration of the Feldkamp algorithm

  8. Multi-view and 3D deformable part models.

    Science.gov (United States)

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ).

  9. 3D Printing and 3D Bioprinting in Pediatrics.

    Science.gov (United States)

    Vijayavenkataraman, Sanjairaj; Fuh, Jerry Y H; Lu, Wen Feng

    2017-07-13

    Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.

  10. 3D Printing and 3D Bioprinting in Pediatrics

    OpenAIRE

    Vijayavenkataraman, Sanjairaj; Fuh, Jerry Y H; Lu, Wen Feng

    2017-01-01

    Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.

  11. 3D printing for dummies

    CERN Document Server

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  12. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    Science.gov (United States)

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. 3D game environments create professional 3D game worlds

    CERN Document Server

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  15. 3D asthenopia in horizontal deviation.

    Science.gov (United States)

    Kim, Seung-Hyun; Suh, Young-Woo; Yun, Cheol-Min; Yoo, Eun-Joo; Yeom, Ji-Hyun; Cho, Yoonae A

    2013-05-01

    This study was conducted to investigate the asthenopic symptoms in patients with exotropia and esotropia while watching stereoscopic 3D (S3D) television (TV). A total of 77 subjects more than 9 years of age were enrolled in this study. We divided them into three groups: 34 patients with exodeviation (Exo group), 11 patients with esodeviation (Eso group) and 32 volunteers with normal binocular vision (control group). The S3D images were shown to all patients on an S3D high-definition TV for a period of 20 min. Best corrected visual acuity, refractive errors, angle of strabismus, stereopsis test and history of strabismus surgery were evaluated. After watching S3D TV for 20 min, a survey of subjective symptoms was conducted with a questionnaire to evaluate the degree of S3D perception and asthenopic symptoms such as headache, dizziness and ocular fatigue while watching 3D TV. The mean amounts of deviation in the Exo group and Eso group were 11.2 PD and 7.73 PD, respectively. Mean stereoacuity was 102.7 arc sec in the Exo group and 1389.1 arc sec in the Eso group. In the control group, it was 41.9 arc sec. Twenty-nine patients in the Exo group showed excellent stereopsis (≤60 arc sec at near), but all 11 subjects of the Eso group showed 140 arc sec or worse and showed more decreased 3D perception than the Exo and the control group (p Kruskal-Wallis test). The Exo group reported more eye fatigue (p Kruskal-Wallis test) than the Eso and the control group. However, the scores of ocular fatigue in the patients who had undergone corrective surgery were lower than in the patients who had not in the Exo group (p Kruskal-Wallis test), and the amount of exodeviation was not correlated with the asthenopic symptoms (dizziness, r = 0.034, p = 0.33; headache, r = 0.320, p = 0.119; eye fatigue, r = 0.135, p = 0.519, Spearman rank correlation test, respectively). Symptoms of 3D asthenopia were related to the presence of exodeviation but not to esodeviation. This may

  16. Grey and white matter changes in children with monocular amblyopia: voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Li, Qian; Jiang, Qinying; Guo, Mingxia; Li, Qingji; Cai, Chunquan; Yin, Xiaohui

    2013-04-01

    To investigate the potential morphological alterations of grey and white matter in monocular amblyopic children using voxel-based morphometry (VBM) and diffusion tensor imaging (DTI). A total of 20 monocular amblyopic children and 20 age-matched controls were recruited. Whole-brain MRI scans were performed after a series of ophthalmologic exams. The imaging data were processed and two-sample t-tests were employed to identify group differences in grey matter volume (GMV), white matter volume (WMV) and fractional anisotropy (FA). After image screening, there were 12 amblyopic participants and 15 normal controls qualified for the VBM analyses. For DTI analysis, 14 amblyopes and 14 controls were included. Compared to the normal controls, reduced GMVs were observed in the left inferior occipital gyrus, the bilateral parahippocampal gyrus and the left supramarginal/postcentral gyrus in the monocular amblyopic group, with the lingual gyrus presenting augmented GMV. Meanwhile, WMVs reduced in the left calcarine, the bilateral inferior frontal and the right precuneus areas, and growth in the WMVs was seen in the right cuneus, right middle occipital and left orbital frontal areas. Diminished FA values in optic radiation and increased FA in the left middle occipital area and right precuneus were detected in amblyopic patients. In monocular amblyopia, cortices related to spatial vision underwent volume loss, which provided neuroanatomical evidence of stereoscopic defects. Additionally, white matter development was also hindered due to visual defects in amblyopes. Growth in the GMVs, WMVs and FA in the occipital lobe and precuneus may reflect a compensation effect by the unaffected eye in monocular amblyopia.

  17. The Future Is 3D

    Science.gov (United States)

    Carter, Luke

    2015-01-01

    3D printers are a way of producing a 3D model of an item from a digital file. The model builds up in successive layers of material placed by the printer controlled by the information in the computer file. In this article the author argues that 3D printers are one of the greatest technological advances of recent times. He discusses practical uses…

  18. The 3D additivist cookbook

    NARCIS (Netherlands)

    Allahyari, Morehshin; Rourke, Daniel; Rasch, Miriam

    The 3D Additivist Cookbook, devised and edited by Morehshin Allahyari & Daniel Rourke, is a free compendium of imaginative, provocative works from over 100 world-leading artists, activists and theorists. The 3D Additivist Cookbook contains .obj and .stl files for the 3D printer, as well as critical

  19. Geopressure and Trap Integrity Predictions from 3-D Seismic Data: Case Study of the Greater Ughelli Depobelt, Niger Delta

    Directory of Open Access Journals (Sweden)

    Opara A.I.

    2012-05-01

    Full Text Available The deep drilling campaign in the Niger Delta has demonstrated the need for a detailed geopressure and trap integrity (drilling margin) analysis as an integral and required step in prospect appraisal. Pre-drill pore pressure prediction from 3-D seismic data was carried out in the Greater Ughelli depobelt, Niger Delta basin to predict subsurface pressure regimes and further applied in the determination of hydrocarbon column height, reservoir continuity, fault seal and trap integrity. Results revealed that geopressured sedimentary formations are common within the more prolific deeper hydrocarbon reserves in the Niger Delta basin. The depth to top of mild geopressure (0.60 psi/ft) ranges from about 10 000 ftss to over 30 000 ftss. The distribution of geopressures shows a well defined trend, with depth to top of geopressures increasing towards the central part of the basin. This variation in the depth of top of geopressures in the area is believed to be related to faulting and shale diapirism, with top of geopressures becoming shallow with shale diapirism and deep with sedimentation. Post-depositional faulting is believed to have controlled the configuration of the geopressure surface and has played later roles in modifying the present day depth to top of geopressures. In general, geopressure in this area is often associated with simple rollover structures bounded by growth faults, especially at the hanging walls, while hydrostatic pressures were observed in areas with k-faults and collapsed crested structures.

  20. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  1. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    Science.gov (United States)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  2. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    OpenAIRE

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of ...

  3. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  4. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consists of more than 50 k 3D point clouds and a reliable recognition rate against pose variation.

  5. 3D Spectroscopy in Astronomy

    Science.gov (United States)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  6. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  7. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  8. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D

  9. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  10. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined...... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry

  11. Qademah Fault 3D Survey

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    Objective: Collect 3D seismic data at the Qademah Fault location for (1) 3D traveltime tomography, (2) 3D surface wave migration, (3) 3D phase velocity, and (4) possible reflection processing. Acquisition Date: 26 – 28 September 2014. Acquisition Team: Sherif, Kai, Mrinal, Bowen, Ahmed. Acquisition Layout: We used 288 receivers arranged in 12 parallel lines; each line has 24 receivers. Inline offset is 5 m and crossline offset is 10 m. One shot was fired at each receiver location. We used a 40 kg weight drop as the seismic source, with 8 to 15 stacks at each shot location.

  12. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
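
    A hedged sketch of the idea rather than the authors' exact formulation: below, each voxel's label is updated ICM-style by combining a Gaussian log-likelihood of its feature vector with a simple prior that rewards agreement with its 6 nearest 3-D neighbours (the simultaneous Gaussian specification in the paper is richer). All parameters and data layouts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def icm_update_3d(features, labels, class_means, class_covs, beta=1.0):
    """One contextual relabelling pass over a 3-D volume.

    features: (D, H, W, F) feature vectors; labels: (D, H, W) current class indices.
    """
    D, H, W, F = features.shape
    K = len(class_means)

    # Per-voxel Gaussian log-likelihood for each class.
    flat = features.reshape(-1, F)
    loglik = np.stack(
        [multivariate_normal.logpdf(flat, mean=class_means[k], cov=class_covs[k]).reshape(D, H, W)
         for k in range(K)], axis=-1)

    # Count agreeing labels among the 6 face-neighbours (periodic boundary for brevity).
    agree = np.zeros((D, H, W, K))
    for axis in range(3):
        for shift in (1, -1):
            neigh = np.roll(labels, shift, axis=axis)
            agree += (neigh[..., None] == np.arange(K))

    # Posterior-like score: likelihood plus neighbour-agreement prior.
    return np.argmax(loglik + beta * agree, axis=-1)
```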

  13. 3-D printers for libraries

    CERN Document Server

    Griffey, Jason

    2014-01-01

    As the maker movement continues to grow and 3-D printers become more affordable, an expanding group of hobbyists is keen to explore this new technology. In the time-honored tradition of introducing new technologies, many libraries are considering purchasing a 3-D printer. Jason Griffey, an early enthusiast of 3-D printing, has researched the marketplace and seen several systems first hand at the Consumer Electronics Show. In this report he introduces readers to the 3-D printing marketplace, covering such topics as how fused deposition modeling (FDM) printing works and basic terminology such as build

  14. Global Value Chains from a 3D Printing Perspective

    DEFF Research Database (Denmark)

    Laplume, André O; Petersen, Bent; Pearce, Joshua M.

    2016-01-01

    This article outlines the evolution of additive manufacturing technology, culminating in 3D printing, and presents a vision of how this evolution is affecting existing global value chains (GVCs) in production. In particular, we bring up questions about how this new technology can affect...... of whether in some industries diffusion of 3D printing technologies may change the role of multinational enterprises as coordinators of GVCs by inducing the engagement of a wider variety of firms, even households.

  15. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation (LabVIEW). The main idea is based on the marching cubes algorithm and image processing implemented with the Vision Assistant module. The two-dimensional images shot by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented which can be used for 3D reconstruction of magnetic resonance images in biomedical applications.
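
    The marching cubes step can be sketched in a few lines outside LabVIEW; here scikit-image stands in for the Vision Assistant module, the volume is a random placeholder for the stacked MR slices, and the iso-level is an assumed threshold.

```python
import numpy as np
from skimage import measure

# Stand-in for a 3D intensity volume built by stacking the 2D MR slices.
volume = np.random.rand(64, 64, 64)

# Extract the iso-surface at an assumed intensity threshold; the result is a
# triangle mesh (vertices, faces) suitable for 3D rendering.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)
```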

  16. Applications of 2D to 3D conversion for educational purposes

    Science.gov (United States)

    Koido, Yoshihisa; Morikawa, Hiroyuki; Shiraishi, Saki; Takeuchi, Soya; Maruyama, Wataru; Nakagori, Toshio; Hirakata, Masataka; Shinkai, Hirohisa; Kawai, Takashi

    2013-03-01

    There are three main approaches to creating stereoscopic 3D (S3D) content: stereo filming using two cameras, stereo rendering of 3D computer graphics, and 2D to S3D conversion by adding binocular information to 2D material images. Although manual "off-line" conversion can control the amount of parallax flexibly, 2D material images are converted according to monocular information in most cases, and the flexibility of 2D to S3D conversion has not been exploited. If the depth is expressed flexibly, the comprehension and interest derived from converted S3D content are anticipated to differ from those derived from 2D. Therefore, in this study we created new S3D content for education by applying 2D to S3D conversion. For surgical education, we created S3D surgical operation content under the supervision of a surgeon using a partial 2D to S3D conversion technique, which was expected to concentrate viewers' attention on significant areas. And for art education, we converted Ukiyo-e prints, traditional Japanese artworks made from woodcuts. The conversion of this content, which has little depth information, into S3D is expected to produce different cognitive processes from those evoked by 2D content, e.g., the excitation of interest and the understanding of spatial information. In addition, the effects of the representation of these contents were investigated.

  17. Binocular contrast discrimination needs monocular multiplicative noise

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M.

    2016-01-01

    The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination both at low contrasts (discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370

  18. Abusir 3D survey 2015

    Directory of Open Access Journals (Sweden)

    Yukinori Kawae

    2016-12-01

    Full Text Available In 2015, in collaboration with the Czech Institute of Egyptology, we, a Japanese consortium, initiated the Abusir 3D Survey (A-3DS) for the 3D documentation of the site's pyramids, which have not been updated since the time of the architectural investigations of Vito Maragioglio and Celeste Rinaldi in the 1960s to the 1970s. The first season of our project focused on the exterior of Neferirkare's pyramid, the largest pyramid at Abusir. By developing a strategic mathematical 3D survey plan, step-by-step 3D documentation to suit specific archaeological needs, and producing a new display method for the 3D data, we successfully measured the dimensions of the pyramid in a cost-effective way.

  19. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... a channel limited 2-D transducer array and the conventional 3-D beamforming technique, Parallel Beamforming. The first part of the scientific contributions demonstrates that 3-D synthetic aperture imaging achieves a better image quality than the Parallel Beamforming technique. Data were obtained using both...

  20. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... are (vx, vy, vz) = (-0.03, 95, 1.0) ± (9, 6, 1) cm/s compared with the expected (0, 96, 0) cm/s. Afterwards, 3D vector flow images from a cross-sectional plane of the vessel are presented. The out of plane velocities exhibit the expected 2D circular-symmetric parabolic shape. The experimental results...... verify that the 3D TO method estimates the complete 3D velocity vectors, and that the method is suitable for 3D vector flow imaging....

  1. 3D printing in dentistry.

    Science.gov (United States)

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  2. E3D, 3-D Elastic Seismic Wave Propagation Code

    International Nuclear Information System (INIS)

    Larsen, S.; Harris, D.; Schultz, C.; Maddix, D.; Bakowsky, T.; Bent, L.

    2004-01-01

    1 - Description of program or function: E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output. 2 - Methods: The software simulates wave propagation by solving the elasto-dynamic formulation of the full wave equation on a staggered grid. The solution scheme is 4th-order accurate in space and 2nd-order accurate in time
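
    As a toy analogue of the staggered-grid scheme (1-D acoustic rather than 3-D elastic, and 2nd-order rather than 4th-order in space), the velocity-stress update loop looks roughly as follows; the grid, material parameters, and source are illustrative, and absorbing boundaries are omitted.

```python
import numpy as np

nx, nt = 400, 1000
dx, dt = 5.0, 5e-4                 # grid spacing (m), time step (s)
rho, vp = 2000.0, 3000.0           # density (kg/m^3), P-wave speed (m/s)
kappa = rho * vp ** 2              # bulk modulus

v = np.zeros(nx)                   # particle velocity, staggered half a cell from p
p = np.zeros(nx)                   # pressure (stress) at integer grid nodes

for it in range(nt):
    p[nx // 2] += np.exp(-(((it * dt) - 0.05) / 0.01) ** 2)   # Gaussian source term
    v[:-1] += dt / (rho * dx) * (p[1:] - p[:-1])              # velocity from stress gradient
    p[1:] += dt * kappa / dx * (v[1:] - v[:-1])               # stress from velocity gradient
```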

  3. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
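
    A minimal sketch of an optical-flow-based visual odometry step of the kind described, not the authors' implementation: corners are tracked with pyramidal Lucas-Kanade flow and the relative camera pose is recovered from the essential matrix. The function name is hypothetical, the intrinsics matrix K is assumed known, and the translation is recovered only up to scale (in a visual-inertial system the IMU would supply metric scale).

```python
import cv2
import numpy as np

def relative_pose_from_flow(prev_gray, curr_gray, K):
    """Estimate relative rotation R and unit-scale translation t between two frames."""
    # Detect and track sparse corners with pyramidal Lucas-Kanade optical flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]

    # Robustly estimate the essential matrix, then decompose it into R and t.
    E, mask = cv2.findEssentialMat(good1, good0, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=mask)
    return R, t
```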

  4. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  5. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  6. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  7. 3D Printing: Print the future of ophthalmology.

    Science.gov (United States)

    Huang, Wenbin; Zhang, Xiulan

    2014-08-26

    The three-dimensional (3D) printer is a new technology that creates physical objects from digital files. Recent technological advances in 3D printing have resulted in increased use of this technology in the medical field, where it is beginning to revolutionize medical and surgical possibilities. It is already providing medicine with powerful tools that facilitate education, surgical planning, and organ transplantation research. A good understanding of this technology will be beneficial to ophthalmologists. The potential applications of 3D printing in ophthalmology, both current and future, are explored in this article. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  8. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    Full Text Available 3D recovery from motion has received major attention in computer vision systems in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main feature is the maximum reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture as well as those from processor synthesis are presented.

  9. 3D laser imaging for ODOT interstate network at true 1-mm resolution.

    Science.gov (United States)

    2014-12-01

    With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 mph. This project provides rapid survey ...

  10. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of k-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  11. Handbook of 3D integration

    CERN Document Server

    Garrou , Philip; Ramm , Peter

    2014-01-01

    Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology. As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective. The last part of the book is concerned with assessing and enhancing the reliability of the 3D integrated devices, which is a prerequisite for the large-scale implementation of this emerging technology. Invaluable reading fo

  12. Binocular vision in amblyopia: structure, suppression and plasticity.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  13. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
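
    The linear combination model mentioned above can be made concrete with the short sketch below, which computes a predicted perceived modulation depth of the center as a function of the relative center-surround phase; the weight parameter and the sinusoidal form are illustrative assumptions rather than the exact model fitted in the study.

        # Sketch: perceived center modulation as a linear combination of center and
        # surround chromatic modulations at the same temporal frequency.
        # m_center, m_surround: physical modulation depths; w: surround weight (fit per
        # condition, e.g. monocular vs. dichoptic); phase: relative phase in radians.
        import numpy as np

        def perceived_modulation_depth(m_center, m_surround, w, phase):
            # Sum of two same-frequency sinusoids: amplitude follows the cosine rule.
            return np.sqrt(m_center**2 + (w * m_surround)**2
                           + 2.0 * w * m_center * m_surround * np.cos(phase))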

  14. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    In this paper, detailed studies performed while developing a real-time system for surveillance of traffic flow, which uses monocular video cameras to estimate vehicle speeds for safe travel, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points from the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. Computed velocity vectors are defined in the video image coordinate system and displacement vectors are measured in pixel units. Then the magnitudes of the computed vectors in the image space are transformed to the object space to find the absolute values of these magnitudes. The accuracy of the estimated speed is approximately ±1 – 2 km/h. In order to solve the real time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
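
    A minimal sketch of the core computation (rectifying tracked pixel positions to a metric ground plane and converting displacement per frame into speed) is given below using OpenCV and NumPy; the homography, frame rate and function names are illustrative assumptions, not values or code from the paper.

        # Sketch: vehicle speed from tracked points mapped to a metric ground plane.
        # H: 3x3 homography from image pixels to ground-plane metres (assumed known),
        # fps: camera frame rate. Both are illustrative placeholders.
        import cv2
        import numpy as np

        def speed_kmh(pts_prev, pts_curr, H, fps):
            """pts_prev, pts_curr: Nx2 pixel coordinates of the same tracked points
            in two successive frames; returns the mean speed in km/h."""
            a = cv2.perspectiveTransform(pts_prev.reshape(-1, 1, 2).astype(np.float32), H)
            b = cv2.perspectiveTransform(pts_curr.reshape(-1, 1, 2).astype(np.float32), H)
            disp_m = np.linalg.norm((b - a).reshape(-1, 2), axis=1)   # metres per frame
            return float(np.mean(disp_m) * fps * 3.6)                 # m/s -> km/h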

  15. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    ... development. Should parents be concerned? If a healthy child consistently develops headaches or tired eyes or cannot clearly see the images when using 3-D digital products, this may indicate a vision or eye ... that the child be given a comprehensive exam by an ophthalmologist. ...

  16. A multimodal 3D framework for fire characteristics estimation

    Science.gov (United States)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in fighting.
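
    The registration step described above (placing each stereo pair's reconstruction into a common world frame using GPS and IMU data) amounts to a rigid transform per camera; the sketch below illustrates that idea, with the ENU convention, variable names and data layout being assumptions for illustration rather than the authors' implementation.

        # Sketch: transform a stereo-derived point cloud from a camera frame into a
        # shared world (ENU) frame using an IMU orientation and a GPS-derived position.
        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def camera_to_world(points_cam, imu_quat_wxyz, cam_position_enu):
            """points_cam: Nx3 points in the camera frame; imu_quat_wxyz: orientation of
            the camera in the world frame; cam_position_enu: 3-vector camera position."""
            w, x, y, z = imu_quat_wxyz
            R_wc = R.from_quat([x, y, z, w]).as_matrix()   # scipy expects x, y, z, w order
            return points_cam @ R_wc.T + np.asarray(cam_position_enu)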

  17. 3D Models of Immunotherapy

    Science.gov (United States)

    This collaborative grant is developing 3D models of both mouse and human biology to investigate aspects of therapeutic vaccination in order to answer key questions relevant to human cancer immunotherapy.

  18. AI 3D Cybug Gaming

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this short paper I briefly discuss a 3D war game based on artificial intelligence concepts, called AI WAR. Going into the details, I present the importance of the CAICL language and how this language is used in AI WAR. Moreover, I also present a designed and implemented 3D War Cybug for AI WAR using CAICL and discuss the strategy implemented to defeat its enemies during the game.

  19. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Astrom, K

    2006-01-01

    We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations.

  20. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focuses on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries coloured similarly to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) can then be processed by fuzzy logic or neural networks to control the robot’s next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and wooden and concrete floors, but had difficulty separating colours in multicoloured floor types such as patterned carpets.
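
    A minimal OpenCV sketch of the two steps named above (floor-colour segmentation combined with Canny edge detection to form a binary traversability map) is shown below; the HSV thresholds, Canny parameters and kernel sizes are illustrative assumptions, not the values used by the author.

        # Sketch: binary traversability map from floor-colour segmentation + Canny edges.
        # White = traversable floor, black = obstacle/boundary. Thresholds are placeholders.
        import cv2
        import numpy as np

        def traversability_map(bgr, floor_lo=(0, 0, 80), floor_hi=(180, 60, 255)):
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            floor = cv2.inRange(hsv, np.array(floor_lo), np.array(floor_hi))   # floor-like pixels
            edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 50, 150)  # colour boundaries
            edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))               # thicken boundaries
            free = cv2.bitwise_and(floor, cv2.bitwise_not(edges))              # floor minus edges
            return cv2.morphologyEx(free, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))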

  1. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections and metrology analysis of manufactured parts. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits the simultaneous visible surface and subsurface inspections to be conducted in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-Metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  2. Monocular pedestrian detection: Survey and experiments

    NARCIS (Netherlands)

    Enzweiler, M.; Gavrila, D.M.

    2009-01-01

    Pedestrian detection is a rapidly evolving area in computer vision with key applications in intelligent vehicles, surveillance, and advanced robotics. The objective of this paper is to provide an overview of the current state of the art from both methodological and experimental perspectives. The

  3. 3D accelerator magnet calculations using MAGNUS-3D

    International Nuclear Information System (INIS)

    Pissanetzky, S.; Miao, Y.

    1989-01-01

    The steady trend towards increased magnetic and geometric complexity in the design of accelerator magnets has caused a need for reliable 3D computer models and a better understanding of the behavior of magnetic systems in three dimensions. The capabilities of the MAGNUS-3D family of programs are ideally suited to solve this class of problems and provide insight into 3D effects. MAGNUS-3D can solve any problem of magnetostatics involving permanent magnets, nonlinear ferromagnetic materials and electric conductors. MAGNUS-3D uses the finite element method and the two-scalar-potential formulation of Maxwell's equations to obtain the solution, which can then be used interactively to obtain tables of field components at specific points or lines, plots of field lines, function graphs representing a field component plotted against a coordinate along any line in space (such as the beam line), and views of the conductors, the mesh and the magnetic bodies. The magnetic quantities that can be calculated include the force or torque on conductors or magnetic parts, the energy, the flux through a specified surface, line integrals of any field component along any line in space, and the average field or potential harmonic coefficients. We describe the programs with emphasis placed on their use for accelerator magnet design, and present an advanced example of actual calculations. (orig.)

  4. Monocular channels have a functional role in endogenous orienting.

    Science.gov (United States)

    Saban, William; Sekely, Liora; Klein, Raymond M; Gabay, Shai

    2018-03-01

    The literature has long emphasized the role of higher cortical structures in endogenous orienting. Based on an evolutionary explanation and previous data, we explored the possibility that lower monocular channels may also have a functional role in endogenous orienting of attention. A sensitive behavioral manipulation was used to probe the contribution of monocularly segregated regions in a simple cue-target detection task. A central spatially informative cue, and its ensuing target, were presented to the same or different eyes at varying cue-target intervals. Results indicated that the onset of endogenous orienting was apparent earlier when the cue and target were presented to the same eye. The data provide converging evidence for the notion that endogenous facilitation is modulated by monocular portions of the visual stream. This, in turn, suggests that higher cortical mechanisms are not exclusively responsible for endogenous orienting, and that a dynamic interaction between higher and lower neural levels might be involved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    Science.gov (United States)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There is very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representation and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms makes it possible to reconstruct the 3D geometry of objects by using image sequences from digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, the Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with the application of 3D laser scanning, which is known for its accuracy, efficiency and high cost. The final aim of this study is to compare the visual accuracy of 3D models generated by the CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The final objective is to explore cost-effective methods that could provide fundamental guidelines on the best practice approach for digital heritage in Malaysia.

  6. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  7. From 3D view to 3D print

    Science.gov (United States)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, ranging from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing which allows a solid object to be obtained from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, it does not need any particular workflow: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common material used by these printers is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  8. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  9. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    Science.gov (United States)

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators are also presented.

  10. Stereo Vision and 3D Reconstruction on a Processor Network

    NARCIS (Netherlands)

    Paar, G.; Kuijpers, N.H.L.; Gasser, C.

    1996-01-01

    Surface measurements during outdoor construction processes are very costly whenever the measurement process interferes with the construction activities, since machine and manpower resources are idle during the data acquisition procedure. Using frame cameras as sensors to provide measurement data

  11. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be directly calibrated on the device using standard calibration algorithms from photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
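
    For readers unfamiliar with the Structure-from-Motion step mentioned above, a minimal two-view sketch using OpenCV is given below (matched keypoints to relative pose to triangulated points); it illustrates the general technique only and is not the AndroidSfM pipeline, and the variable names are assumptions.

        # Sketch: two-view Structure-from-Motion with a calibrated camera.
        # pts1, pts2: Nx2 matched pixel coordinates in two photos; K: 3x3 intrinsic matrix.
        import cv2
        import numpy as np

        def two_view_reconstruction(pts1, pts2, K):
            E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)         # relative pose (up to scale)
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # first camera at the origin
            P2 = K @ np.hstack([R, t])                             # second camera
            X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
            return R, t, (X_h[:3] / X_h[3]).T                      # Nx3 points, arbitrary scale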

  12. 77 FR 75494 - Qualification of Drivers; Exemption Applications; Vision

    Science.gov (United States)

    2012-12-20

    ... Multiple Regression Analysis of a Poisson Process,'' Journal of American Statistical Association, June 1971... 14 applicants' case histories. The 14 individuals applied for exemptions from the vision requirement... apply the principle to monocular drivers, because data from the Federal Highway Administration's (FHWA...

  13. Materialedreven 3d digital formgivning

    DEFF Research Database (Denmark)

    Hansen, Flemming Tvede

    2010-01-01

    The purpose of the research project is, firstly, to support the ceramicist in working experimentally with digital form-giving and, secondly, to contribute to an interdisciplinary discourse on the use of digital form-giving. The research project focuses on 3D form-giving and thereby on 3D digital form-giving and Rapid Prototyping (RP). RP is a collective term for a range of techniques that make it possible to transfer a digital form into a 3D physical form. The research project concentrates on two overarching research questions. The first concerns how knowledge and experience within the ceramic field can be exploited in relation to 3D digital form-giving. The second concerns what such an approach can contribute, and how it can be exploited in a dynamic interplay with the ceramic material in the shaping of 3D ceramic artefacts. Material-driven form-giving is characterized by a...

  14. 3D future internet media

    CERN Document Server

    Dagiuklas, Tasos

    2014-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The main contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the Future Internet (www.ict-romeo.eu). The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of constant video quality to both fixed and mobile users. ROMEO will design and develop hybrid-networking solutions that co...

  15. Novel 3D media technologies

    CERN Document Server

    Dagiuklas, Tasos

    2015-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcas...

  16. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  17. A hand-held 3D laser scanning with global positioning system of subvoxel precision

    International Nuclear Information System (INIS)

    Arias, Nestor; Meneses, Nestor; Meneses, Jaime; Gharbi, Tijani

    2011-01-01

    In this paper we propose a hand-held 3D laser scanner composed of an optical head device to extract 3D local surface information and a stereo vision system with subvoxel precision to measure the position and orientation of the 3D optical head. The optical head is manually scanned over the object surface by the operator. The orientation and position of the 3D optical head are determined by a phase-sensitive method using a 2D regular intensity pattern. This phase reference pattern is rigidly fixed to the optical head and allows its 3D localization with subvoxel precision in the observation field of the stereo vision system. The 3D resolution achieved by the stereo vision system is about 33 microns at 1.8 m with an observation field of 60 cm x 60 cm.

  18. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  19. Modification of 3D milling machine to 3D printer

    OpenAIRE

    Taska, Abraham

    2014-01-01

    This thesis deals with the conversion of an engraving milling machine into a 3D printer. The first part of the thesis covers the available 3D printing technologies and the possibilities of using them in the conversion. Suitable components for the conversion are then described and selected. In the next part, the control of the bed heating, the nozzle and the filament feed is implemented using TwinCat software from Beckhoff on an industrial PC. The result of the work should be a working 3D printer.

  20. Aspects of defects in 3d-3d correspondence

    International Nuclear Information System (INIS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-01-01

    In this paper we study supersymmetric co-dimension 2 and 4 defects in the compactification of the 6d (2,0) theory of type A_{N-1} on a 3-manifold M. The so-called 3d-3d correspondence is a relation between complexified Chern-Simons theory (with gauge group SL(N,ℂ)) on M and a 3d N=2 theory T_N[M]. We study this correspondence in the presence of supersymmetric defects, which are knots/links inside the 3-manifold. Our study employs a number of different methods: state-integral models for complex Chern-Simons theory, cluster algebra techniques, domain wall theory T[SU(N)], 5d N=2 SYM, and also supergravity analysis through holography. These methods are complementary and we find agreement between them. In some cases the results lead to highly non-trivial predictions on the partition function. Our discussion includes a general expression for the cluster partition function, which can be used to compute in the presence of maximal and a certain class of non-maximal punctures when N>2. We also highlight the non-Abelian description of the 3d N=2 T_N[M] theory with defects included, when such a description is available. This paper is a companion to our shorter paper http://dx.doi.org/10.1088/1751-8113/49/30/30LT02, which summarizes our main results.

  1. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    Science.gov (United States)

    Lauinger, Norbert

    1997-09-01

    The interpretation of the 'inverted' retina of primates as an 'optoretina' (a light cones transforming diffractive cellular 3D-phase grating) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as the adaptive levels of human vision. It is shown that the functional performances, the trichromatism in photopic vision, the monocular spatiotemporal 3D- and 4D-motion detection, as well as the Fourier optical image transformation with extraction of invariances all become possible. To transform light cones into reciprocal gratings especially the spectral phase conditions in the eikonal of the geometrical optical imaging before the retinal 3D-grating become relevant first, then in the von Laue resp. reciprocal von Laue equation for 3D-grating optics inside the grating and finally in the periodicity of Talbot-2/Fresnel-planes in the near-field behind the grating. It is becoming possible to technically realize -- at least in some specific aspects -- such a cortical optoretina sensor element with its typical hexagonal-concentric structure which leads to these visual functions.

  2. Stereoscopic 3D graphics generation

    Science.gov (United States)

    Li, Zhi; Liu, Jianping; Zan, Y.

    1997-05-01

    Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment, virtual reality, and so on. Moreover, stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize some methods for generating stereoscopic 3D graphics. Secondly, to overcome the problems arising from methods based on user-defined models (such as inconvenience, long modification cycles and so on), we put forward a method based on vector graphics file definitions. Thus we can design more directly, modify the model simply and easily, and generate graphics more conveniently; furthermore, we can make full use of the graphics accelerator card. Finally, we discuss the problem of how to speed up the generation.
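
    As a generic illustration of stereoscopic graphics generation (not the vector-graphics method proposed in the paper), the sketch below renders a left/right image pair by offsetting the camera by half the interocular distance and using an off-axis projection that places zero disparity at a chosen convergence distance; all parameter values are assumptions.

        # Sketch: project 3D points into a left/right stereo pair using two horizontally
        # offset pinhole cameras converging on a plane at distance 'convergence'.
        import numpy as np

        def stereo_pair(points, f=800.0, eye_sep=0.065, convergence=2.0):
            """points: Nx3 in camera coordinates (z > 0). Returns (left_xy, right_xy) in pixels."""
            shift = f * (eye_sep / 2.0) / convergence          # horizontal sensor shift per eye
            images = []
            for sign in (+1.0, -1.0):                          # +1 = left eye, -1 = right eye
                cam = points.copy()
                cam[:, 0] += sign * eye_sep / 2.0              # translate eye along x
                x = f * cam[:, 0] / cam[:, 2] - sign * shift   # off-axis (asymmetric) projection
                y = f * cam[:, 1] / cam[:, 2]
                images.append(np.stack([x, y], axis=1))
            return images[0], images[1]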

  3. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    , if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom...... hampers the task of real-time processing. In a second study, some of the issues with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated both through simulations and via experimental setups in various flow conditions...

  4. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
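
    The self-supervised scheme described above (trusted stereo depth from the past used as ground truth to train a monocular estimator) can be sketched as follows; the hand-crafted features and ridge regressor are illustrative stand-ins and not the method flown on the SPHERES satellite.

        # Sketch: self-supervised monocular average-depth estimation.
        # While stereo works, log the average stereo depth as a label for the mono image;
        # later, predict average depth from the mono image alone. Features/regressor are placeholders.
        import numpy as np
        from sklearn.linear_model import Ridge

        def mono_features(gray, grid=4):
            """Crude texture-energy features on a grid (stand-in for learned features)."""
            gy, gx = np.gradient(gray.astype(float))
            mag = np.hypot(gx, gy)
            h, w = mag.shape
            hs, ws = h // grid, w // grid
            return np.array([mag[i*hs:(i+1)*hs, j*ws:(j+1)*ws].mean()
                             for i in range(grid) for j in range(grid)])

        def train(mono_images, stereo_avg_depths):
            """Fit a regressor on (mono features, trusted average stereo depth) pairs."""
            X = np.stack([mono_features(img) for img in mono_images])
            return Ridge(alpha=1.0).fit(X, np.asarray(stereo_avg_depths))

        def predict_avg_depth(model, mono_image):
            return float(model.predict(mono_features(mono_image)[None, :])[0])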

  5. 3D Printed Bionic Nanodevices.

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  6. 3D Printed Bionic Nanodevices

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K.; Johnson, Blake N.; McAlpine, Michael C.

    2016-01-01

    Summary The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and ‘living’ platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with

  7. Ideal 3D asymmetric concentrator

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Botella, Angel [Departamento Fisica Aplicada a los Recursos Naturales, Universidad Politecnica de Madrid, E.T.S.I. de Montes, Ciudad Universitaria s/n, 28040 Madrid (Spain); Fernandez-Balbuena, Antonio Alvarez; Vazquez, Daniel; Bernabeu, Eusebio [Departamento de Optica, Universidad Complutense de Madrid, Fac. CC. Fisicas, Ciudad Universitaria s/n, 28040 Madrid (Spain)

    2009-01-15

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used for producing reflective and refractive optical devices, including reverse engineering techniques. In this paper we apply photometric field theory and the elliptic ray bundles method to study 3D asymmetric concentrators - without rotational or translational symmetry - which can be useful components for nontracking solar applications. We study the one-sheet hyperbolic concentrator and we demonstrate its behaviour as an ideal 3D asymmetric concentrator. (author)

  8. Markerless 3D Face Tracking

    DEFF Research Database (Denmark)

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently...... the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  9. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  10. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down

  11. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark and task requirements (minimizing body and gaze movements, slow pupil oscillations, “hippus,” spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry. This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.
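
    Hippus amplitude can be quantified in several ways; the sketch below estimates it as band-limited spectral power of a pupil-diameter trace. The frequency band and the use of Welch's method are assumptions made for illustration and are not necessarily the analysis used in the study.

        # Sketch: hippus amplitude as RMS power of slow pupil oscillations (~0.05-0.5 Hz).
        # 'pupil' is a 1-D pupil-diameter trace sampled at 'fs' Hz; band edges are placeholders.
        import numpy as np
        from scipy.signal import welch

        def hippus_amplitude(pupil, fs, band=(0.05, 0.5)):
            pupil = np.asarray(pupil, dtype=float)
            pupil = pupil - np.mean(pupil)                 # remove baseline pupil size
            f, psd = welch(pupil, fs=fs, nperseg=min(len(pupil), int(fs * 60)))
            sel = (f >= band[0]) & (f <= band[1])
            power = np.sum(psd[sel]) * (f[1] - f[0])       # integrate PSD over the hippus band
            return np.sqrt(power)                          # RMS amplitude in the band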

  12. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce depth estimations through parallax. These depth estimations are used to solve a related problem of DI-D monocular SLAM, namely the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion is provided in terms of how a real-time implementation could take advantage of this approach.
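
    In inverse-depth SLAM formulations such as DI-D, a newly observed feature is commonly parameterized by the camera position at first observation, two bearing angles and the inverse depth. The sketch below shows how a parallax-based depth estimate could seed such a parameterization; the exact state layout used in the paper may differ, so treat the function and its arguments as illustrative assumptions.

        # Sketch: seed an inverse-depth feature from a parallax-based depth estimate.
        # cam_pos: camera position at first observation (world frame); R_wc: camera-to-world
        # rotation; pixel: (u, v); K: intrinsics; depth: metric depth from pseudo-stereo parallax.
        import numpy as np

        def init_inverse_depth_feature(cam_pos, R_wc, pixel, K, depth):
            u, v = pixel
            fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
            ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])    # bearing in camera frame
            ray_w = R_wc @ ray_cam
            ray_w /= np.linalg.norm(ray_w)
            theta = np.arctan2(ray_w[0], ray_w[2])                     # azimuth
            phi = np.arctan2(-ray_w[1], np.hypot(ray_w[0], ray_w[2]))  # elevation
            rho = 1.0 / depth                                          # inverse depth
            return np.array([cam_pos[0], cam_pos[1], cam_pos[2], theta, phi, rho])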

  13. 3D Terahertz Beam Profiling

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Strikwerda, Andrew; Jepsen, Peter Uhd

    2013-01-01

    We present a characterization of THz beams generated in both a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam...

  14. 3D Printing: Exploring Capabilities

    Science.gov (United States)

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  15. 3D Pit Stop Printing

    Science.gov (United States)

    Wright, Lael; Shaw, Daniel; Gaidds, Kimberly; Lyman, Gregory; Sorey, Timothy

    2018-01-01

    Although solving an engineering design project problem with limited resources or structural capabilities of materials can be part of the challenge, students making their own parts can support creativity. The authors of this article found an exciting solution: 3D printers are not only one of several tools for making but also facilitate a creative…

  16. 3D histomorphometric quantification from 3D computed tomography

    International Nuclear Information System (INIS)

    Oliveira, L.F. de; Lopes, R.T.

    2004-01-01

    The histomorphometric analysis is based on stereologic concepts and was originally applied to biological samples. This technique has been used to evaluate different complex structures such as ceramic filters, net structures and cancellous objects, that is, objects with inner connected structures. The measured histomorphometric parameters of the structure are: sample volume to total reconstructed volume (BV/TV), sample surface to sample volume (BS/BV), connection thickness (Tb.Th), connection number (Tb.N) and connection separation (Tb.Sp). The anisotropy was evaluated as well. These parameters constitute the base of histomorphometric analysis. The quantification is performed over cross-sections recovered by cone beam reconstruction, where a real-time microfocus radiographic system is used as the tomographic system. The three-dimensional (3D) histomorphometry obtained from tomography corresponds to an evolution of the conventional method, which is based on 2D analysis, and is more coherent with the morphologic and topologic context of the sample. This work shows results from 3D histomorphometric quantification used to characterize objects examined by 3D computed tomography. The results, which characterize the internal structures of ceramic foams with different porous densities, are compared to results from conventional methods.
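
    As a simple illustration of the first two parameters on a binary (segmented) 3D volume, BV/TV can be computed as the fraction of solid voxels and BS/BV as surface area over solid volume; the marching-cubes surface estimate below is one common choice and an assumption of this sketch, not necessarily the authors' procedure.

        # Sketch: BV/TV and BS/BV from a binary 3D volume (True = solid voxel).
        # voxel_size: edge length of a voxel in mm; surface area via marching cubes.
        import numpy as np
        from skimage import measure

        def bv_tv_bs_bv(volume, voxel_size=1.0):
            bv = volume.sum() * voxel_size**3                   # solid (strut) volume
            tv = volume.size * voxel_size**3                    # total reconstructed volume
            verts, faces, _, _ = measure.marching_cubes(volume.astype(np.uint8), level=0.5,
                                                        spacing=(voxel_size,) * 3)
            bs = measure.mesh_surface_area(verts, faces)        # solid surface area
            return bv / tv, bs / bv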

  17. DYNA3D2000*, Explicit 3-D Hydrodynamic FEM Program

    International Nuclear Information System (INIS)

    Lin, J.

    2002-01-01

    1 - Description of program or function: DYNA3D2000 is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation. 2 - Method of solution: Discretization of a continuous model transforms partial differential equations into algebraic equations. A numerical solution is then obtained by solving these algebraic equations through a direct time marching scheme. 3 - Restrictions on the complexity of the problem: Recent software improvements have eliminated most of the user identified limitations with dynamic memory allocation and a very large format description that has pushed potential problem sizes beyond the reach of most users. The dominant restrictions remain in code execution speed and robustness, which the developers constantly strive to improve

  18. 3-D Discrete Analytical Ridgelet Transform

    OpenAIRE

    Helbert , David; Carré , Philippe; Andrès , Éric

    2006-01-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines:...

  19. Designing stereoscopic information visualization for 3D-TV: What can we can learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  20. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  1. 3D DIGITAL CADASTRE JOURNEY IN VICTORIA, AUSTRALIA

    Directory of Open Access Journals (Sweden)

    D. Shojaei

    2017-10-01

    Land development processes today have an increasing demand for access to three-dimensional (3D) spatial information. Complex land development may need a 3D model and require some functions which are only possible using 3D data. Accordingly, the Intergovernmental Committee on Surveying and Mapping (ICSM), a national body in Australia that provides leadership, coordination and standards for surveying, mapping and national datasets, developed the Cadastre 2034 strategy in 2014. This strategy has a vision to develop a cadastral system that enables people to readily and confidently identify the location and extent of all rights, restrictions and responsibilities related to land and real property. In 2014, the land authority in the state of Victoria, Australia, namely Land Use Victoria (LUV), entered the challenging area of designing and implementing a 3D digital cadastre focused on providing more efficient and effective services to the land and property industry. LUV has been following the ICSM 2034 strategy, which requires developing various policies, standards, infrastructures, and tools. Over the past three years, LUV has mainly focused on investigating the technical aspects of a 3D digital cadastre. This paper provides an overview of the 3D digital cadastre investigation progress in Victoria and discusses the challenges that the team faced during this journey. It also addresses the future path towards developing an integrated 3D digital cadastre in Victoria.

  2. 3D integrated superconducting qubits

    Science.gov (United States)

    Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.

    2017-10-01

    As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.

  3. 3D Printed Robotic Hand

    Science.gov (United States)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuation drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while requiring no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials alone (excluding the actuators), the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands, which have more complex assembly processes.

  4. Mortars for 3D printing

    Directory of Open Access Journals (Sweden)

    Demyanenko Olga

    2018-01-01

    Full Text Available The paper is aimed at developing scientifically proven compositions of mortars for 3D printing, modified by a peat-based admixture, with improved operational characteristics. The paper outlines the results of experimental research on hardened cement paste and concrete mixtures with the use of the modifying admixture MT-600 (thermally modified peat). It is found that the strength of hardened cement paste increases at an early age when finely dispersed admixtures are used, which is a key factor in meeting the construction and technical specifications of concrete for 3D printing technologies. The compositions of the new formations in hardened cement paste modified by the MT-600 admixture were obtained, which suggests the possibility of their physico-chemical interaction during hardening.

  5. Automated 3-D Radiation Mapping

    International Nuclear Information System (INIS)

    Tarpinian, J. E.

    1991-01-01

    This work describes an automated radiation detection and imaging system which combines several state-of-the-art technologies to produce a portable but very powerful visualization tool for planning work in radiation environments. The system combines a radiation detection system, a computerized radiation imaging program, and computerized 3-D modeling to automatically locate and measure radiation fields. Measurements are automatically collected, and imaging techniques are used to produce colored 'isodose' images of the measured radiation fields. The isodose lines from the images are then superimposed over the 3-D model of the area. The final display shows the various components in a room and their associated radiation fields. The use of an automated radiation detection system increases the quality of the radiation survey measurements obtained. The additional use of a three-dimensional display allows easier visualization of the area and the associated radiological conditions than two-dimensional sketches.

  6. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents to quickly document and accurately record a crime scene.

  7. 3D neutron transport modelization

    International Nuclear Information System (INIS)

    Warin, X.

    1996-12-01

    Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OECD congress are described: the first is a low-degree method called RTN0; the second is a high-degree method called BDM1. Both methods can be made faster with a totally consistent DSA. Results on parallelization show that 98% of the time is spent in sweeps and that transport sweeps are easily parallelized. (K.A.)

  8. 3D Printing A Survey

    Directory of Open Access Journals (Sweden)

    Muhammad Zulkifl Hasan

    2017-08-01

    Full Text Available Solid freeform fabrication (SFF) systems have been developed to enhance printing instruments using different strategies, such as piezo-nozzle-controlled multi-nozzle inkjet printers or the STL format with slicing data. These approaches are used to reduce the cost and increase the speed of printing. Some techniques nevertheless take a long time overall because of additional process steps, such as drying the print. This study concentrates on SFF systems using UV resin for 3D printing.

  9. 3D neutron transport modelization

    Energy Technology Data Exchange (ETDEWEB)

    Warin, X.

    1996-12-01

    Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OECD congress are described: the first is a low-degree method called RTN0; the second is a high-degree method called BDM1. Both methods can be made faster with a totally consistent DSA. Results on parallelization show that 98% of the time is spent in sweeps and that transport sweeps are easily parallelized. (K.A.). 10 refs.

  10. Conducting polymer 3D microelectrodes

    DEFF Research Database (Denmark)

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry, and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared with electrodes coated with only metal.

  11. Pediatric interventional radiology with 3D rotational angiography

    International Nuclear Information System (INIS)

    Racadio, J.M.

    2004-01-01

    Rotational angiography with three-dimensional reconstruction vastly improves spatial orientation, eliminating guesswork during interventions. The 3D images help to define the anatomy more accurately, particularly in the case of overlapping tortuous anatomy such as that encountered in genitourinary abnormalities. The procedures are performed on a Philips Integris Allura biplane system with two 12'' image intensifiers. Although radiologists are trained to assemble multiple oblique views in their minds, that vision is often hard to convey to a waiting surgeon. The 3D images give a much better impression of the spatial relationships, saving valuable time and giving added security. (orig.)

  12. 3D display considerations for rugged airborne environments

    Science.gov (United States)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  13. [Real time 3D echocardiography]

    Science.gov (United States)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  14. 3D treatment planning systems.

    Science.gov (United States)

    Saw, Cheng B; Li, Sicong

    2018-01-01

    Three-dimensional (3D) treatment planning systems have evolved and become crucial components of modern radiation therapy. The systems are computer-aided design or planning software packages that speed up the treatment planning process to arrive at the best dose plans for patients undergoing radiation therapy. Furthermore, the systems provide new technology to solve problems that would not have been considered without the use of computers, such as conformal radiation therapy (CRT), intensity-modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). The 3D treatment planning systems vary amongst the vendors and also the dose delivery systems they are designed to support. As such, these systems have different planning tools to generate the treatment plans and convert the treatment plans into executable instructions that can be implemented by the dose delivery systems. The rapid advancements in computer technology and accelerators have facilitated constant upgrades and the introduction of different and unique dose delivery systems compared with the traditional C-arm type medical linear accelerators. The focus of this special issue is to gather relevant 3D treatment planning systems for the radiation oncology community to keep abreast of technology advancement by assessing the planning tools available as well as the unique "tricks or tips" used to support the different dose delivery systems. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  15. Compact 3D quantum memory

    Science.gov (United States)

    Xie, Edwar; Deppe, Frank; Renger, Michael; Repp, Daniel; Eder, Peter; Fischer, Michael; Goetz, Jan; Pogorzalek, Stefan; Fedorov, Kirill G.; Marx, Achim; Gross, Rudolf

    2018-05-01

    Superconducting 3D microwave cavities offer state-of-the-art coherence times and a well-controlled environment for superconducting qubits. In order to realize at the same time fast readout and long-lived quantum information storage, one can couple the qubit to both a low-quality readout and a high-quality storage cavity. However, such systems are bulky compared to their less coherent 2D counterparts. A more compact and scalable approach is achieved by making use of the multimode structure of a 3D cavity. In our work, we investigate such a device where a transmon qubit is capacitively coupled to two modes of a single 3D cavity. External coupling is engineered so that the memory mode has a quality factor about 100 times larger than that of the readout mode. Using an all-microwave second-order protocol, we realize a lifetime enhancement of the stored state over the qubit lifetime by a factor of 6 with a fidelity of approximately 80% determined via quantum process tomography. We also find that this enhancement is not limited by fundamental constraints.

  16. Improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery

    OpenAIRE

    Elliott, D.; Patla, A.; Bullimore, M.

    1997-01-01

    AIMS—To determine the improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery.
METHODS—Clinical vision (monocular and binocular high and low contrast visual acuity, contrast sensitivity, and disability glare), functional vision (face identity and expression recognition, reading speed, word acuity, and mobility orientation), and perceived visual disability (Activities of Daily Vision Scale) were measured in 25 subjects before a...

  17. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a technique in which a system obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flights in insects, employing a novel technique based on SLAM, and then combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach positioning accuracy at the centimeter level.
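
    As a much-simplified illustration of combining inertial and visual measurements (a loosely-coupled toy example, not the tightly-coupled system described above), a complementary filter can blend a gyro-integrated heading, which is smooth but drifts, with a camera-derived heading, which is noisy but drift-free. All values below are assumed for illustration.

        def fuse_heading(prev_heading, gyro_rate, dt, visual_heading, alpha=0.98):
            """Blend a gyro-integrated heading (smooth but drifting) with a
            camera-derived heading (noisy but drift-free)."""
            predicted = prev_heading + gyro_rate * dt          # inertial propagation
            return alpha * predicted + (1.0 - alpha) * visual_heading

        heading = 0.0
        # (gyro rate in rad/s, heading estimated from the image in rad) -- made-up data
        for gyro_rate, visual_heading in [(0.10, 0.012), (0.11, 0.025), (0.09, 0.036)]:
            heading = fuse_heading(heading, gyro_rate, dt=0.1, visual_heading=visual_heading)
            print(round(heading, 4))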

  18. 3D composite image, 3D MRI, 3D SPECT, hydrocephalus

    International Nuclear Information System (INIS)

    Mito, T.; Shibata, I.; Sugo, N.; Takano, M.; Takahashi, H.

    2002-01-01

    The three-dimensional (3D) SPECT imaging technique we have studied and published for the past several years is an analytical tool that permits visual expression of the cerebral circulation profile in various cerebral diseases. The greatest drawback of SPECT is that the limited precision of its spatial resolution makes intracranial localization impossible. In 3D SPECT imaging, intracranial volume and morphology may vary with the threshold established. To solve this problem, we have produced complementarily combined SPECT and helical-CT 3D images by means of general-purpose visualization software for intracranial localization. In hydrocephalus, however, the key subject to be studied is the profile of cerebral circulation around the ventricles of the brain. This suggests that, for displaying the cerebral ventricles in three dimensions, CT is a difficult technique whereas MRI is more useful. For this reason, we attempted to establish the profile of cerebral circulation around the cerebral ventricles by producing combined 3D images of SPECT and MRI. In patients who had shunt surgery for hydrocephalus, the difference between pre- and postoperative cerebral circulation profiles was assessed by a voxel distribution curve, 3D SPECT images, and combined 3D SPECT and MRI images. As the shunt system in this study, an Orbis-Sigma valve of the automatic cerebrospinal fluid volume adjustment type was used in place of the variable-pressure-type Medos valve currently in use, because the latter device requires frequent changes in pressure and a change in pressure may be detected after an MRI procedure. The SPECT apparatus used was a PRISM3000 of the three-detector type, and 123I-IMP was used as the radionuclide at a dose of 222 MBq. MRI data were collected with a MAGNEXa+2 with a magnetic flux density of 0.5 tesla under the following conditions: field echo; TR, 50 msec; TE, 10 msec; flip angle, 30 degrees; 1 NEX; FOV, 23 cm; 1-mm slices; and gapless. 3D images were produced on the TITAN workstation.

  19. 3D silicon strip detectors

    International Nuclear Information System (INIS)

    Parzefall, Ulrich; Bates, Richard; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Eckert, Simon; Eklund, Lars; Fleta, Celeste; Jakobs, Karl; Kuehn, Susanne; Lozano, Manuel; Pahn, Gregor; Parkes, Chris; Pellegrini, Giulio; Pennicard, David; Piemonte, Claudio; Ronchin, Sabina; Szumlak, Tomasz; Zoboli, Andrea; Zorzi, Nicola

    2009-01-01

    While the Large Hadron Collider (LHC) at CERN has started operation in autumn 2008, plans for a luminosity upgrade to the Super-LHC (sLHC) have already been developed for several years. This projected luminosity increase by an order of magnitude gives rise to a challenging radiation environment for tracking detectors at the LHC experiments. Significant improvements in radiation hardness are required with respect to the LHC. Using a strawman layout for the new tracker of the ATLAS experiment as an example, silicon strip detectors (SSDs) with short strips of 2-3 cm length are foreseen to cover the region from 28 to 60 cm distance to the beam. These SSD will be exposed to radiation levels up to 10^15 N_eq/cm^2, which makes radiation resistance a major concern for the upgraded ATLAS tracker. Several approaches to increasing the radiation hardness of silicon detectors exist. In this article, it is proposed to combine the radiation hard 3D-design originally conceived for pixel-style applications with the benefits of the established planar technology for strip detectors by using SSDs that have regularly spaced doped columns extending into the silicon bulk under the detector strips. The first 3D SSDs to become available for testing were made in the Single Type Column (STC) design, a technological simplification of the original 3D design. With such 3D SSDs, a small number of prototype sLHC detector modules with LHC-speed front-end electronics as used in the semiconductor tracking systems of present LHC experiments were built. Modules were tested before and after irradiation to fluences of 10^15 N_eq/cm^2. The tests were performed with three systems: a highly focused IR-laser with 5 μm spot size to make position-resolved scans of the charge collection efficiency, an Sr-90 β-source set-up to measure the signal levels for a minimum ionizing particle (MIP), and a beam test with 180 GeV pions at CERN. This article gives a brief overview of the results obtained with 3D-STC-modules.

  20. 3D silicon strip detectors

    Energy Technology Data Exchange (ETDEWEB)

    Parzefall, Ulrich [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany)], E-mail: ulrich.parzefall@physik.uni-freiburg.de; Bates, Richard [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Boscardin, Maurizio [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Dalla Betta, Gian-Franco [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Eckert, Simon [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Eklund, Lars; Fleta, Celeste [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Jakobs, Karl; Kuehn, Susanne [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Lozano, Manuel [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pahn, Gregor [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Parkes, Chris [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Pellegrini, Giulio [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pennicard, David [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Piemonte, Claudio; Ronchin, Sabina [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Szumlak, Tomasz [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Zoboli, Andrea [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Zorzi, Nicola [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy)

    2009-06-01

    While the Large Hadron Collider (LHC) at CERN has started operation in autumn 2008, plans for a luminosity upgrade to the Super-LHC (sLHC) have already been developed for several years. This projected luminosity increase by an order of magnitude gives rise to a challenging radiation environment for tracking detectors at the LHC experiments. Significant improvements in radiation hardness are required with respect to the LHC. Using a strawman layout for the new tracker of the ATLAS experiment as an example, silicon strip detectors (SSDs) with short strips of 2-3 cm length are foreseen to cover the region from 28 to 60 cm distance to the beam. These SSD will be exposed to radiation levels up to 10^15 N_eq/cm^2, which makes radiation resistance a major concern for the upgraded ATLAS tracker. Several approaches to increasing the radiation hardness of silicon detectors exist. In this article, it is proposed to combine the radiation hard 3D-design originally conceived for pixel-style applications with the benefits of the established planar technology for strip detectors by using SSDs that have regularly spaced doped columns extending into the silicon bulk under the detector strips. The first 3D SSDs to become available for testing were made in the Single Type Column (STC) design, a technological simplification of the original 3D design. With such 3D SSDs, a small number of prototype sLHC detector modules with LHC-speed front-end electronics as used in the semiconductor tracking systems of present LHC experiments were built. Modules were tested before and after irradiation to fluences of 10^15 N_eq/cm^2. The tests were performed with three systems: a highly focused IR-laser with 5 μm spot size to make position-resolved scans of the charge collection efficiency, an Sr-90 β-source set-up to measure the signal levels for a minimum ionizing particle (MIP), and a beam test with 180 GeV pions at CERN. This article gives a brief overview of the results obtained with 3D-STC-modules.

  1. Magmatic Systems in 3-D

    Science.gov (United States)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to exclude certain voxels from plotting. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, overlapping objects tend to mask one another; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated.
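
    The opacity filtering described above can be sketched in a few lines of Python/NumPy: voxels whose value falls below a chosen threshold are assigned zero opacity, so the viewer can peer into the volume. The array size and threshold here are illustrative assumptions, not values from the cited work.

        import numpy as np

        rng = np.random.default_rng(42)
        volume = rng.random((32, 32, 32))      # stand-in for a seismic reflectivity cube

        def opacity_filter(vol, threshold=0.8):
            """Per-voxel alpha: fully transparent below the threshold, linearly scaled above."""
            return np.where(vol < threshold, 0.0, (vol - threshold) / (1.0 - threshold))

        alpha = opacity_filter(volume)
        print(f"{(alpha > 0).mean():.1%} of voxels remain visible")   # roughly 20% here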

  2. 3D laparoscopic surgery: a prospective clinical trial.

    Science.gov (United States)

    Agrusa, Antonino; Di Buono, Giuseppe; Buscemi, Salvatore; Cucinella, Gaspare; Romano, Giorgio; Gulotta, Gaspare

    2018-04-03

    Since its introduction, laparoscopic surgery has represented a real revolution in clinical practice. The use of a new-generation three-dimensional (3D) HD laparoscopic system can be considered a favorable "hybrid" made by combining two different elements: the feasibility and diffusion of laparoscopy, and improved quality of vision. In this study we report our clinical experience with the use of a three-dimensional (3D) HD vision system for laparoscopic surgery. Between 2013 and 2017 a prospective cohort study was conducted at the University Hospital of Palermo. We considered 163 patients who underwent laparoscopic three-dimensional (3D) HD surgery for various indications. This 3D group was compared to a retrospective-prospective control group of patients who underwent the same surgical procedures. Considering specific surgical procedures, there is no significant difference in terms of age and gender. The analysis of all the groups of diseases shows that laparoscopic procedures performed with 3D technology have a shorter mean operative time than comparable 2D procedures when we consider surgery that requires complex tasks. The use of 3D laparoscopic technology is an extraordinary innovation in clinical practice, but the instrumentation is still not widespread. Precisely for this reason, the studies in the literature are few and mainly limited to the evaluation of surgical skills on a simulator. This study aims to evaluate the actual benefits of the 3D laparoscopic system by integrating it into clinical practice. The three-dimensional view allows advanced performance in particular conditions, such as small and deep spaces, and promotes the performance of complex laparoscopic surgical procedures.

  3. 3-D Mapping Technologies For High Level Waste Tanks

    International Nuclear Information System (INIS)

    Marzolf, A.; Folsom, M.

    2010-01-01

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame
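
    The depth estimate that stereo vision relies on follows the standard triangulation relation Z = f * B / d, with focal length f in pixels, baseline B in metres and disparity d in pixels. The short Python sketch below is a generic illustration with assumed camera values, not part of the cited study; it also hints at why the low-texture waste surfaces discussed above, which yield unreliable disparities, make stereo vision the least suitable option.

        def depth_from_disparity(focal_px, baseline_m, disparity_px):
            """Standard stereo triangulation: Z = f * B / d."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return focal_px * baseline_m / disparity_px

        # Assumed camera: 700 px focal length, 12 cm baseline, 21 px measured disparity.
        print(depth_from_disparity(700.0, 0.12, 21.0))   # 4.0 metres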

  4. Wireless 3D Chocolate Printer

    Directory of Open Access Journals (Sweden)

    FROILAN G. DESTREZA

    2014-02-01

    Full Text Available This study is for the BSHRM students of Batangas State University (BatStateU ARASOF), as the researchers believe that the Wireless 3D Chocolate Printer would be helpful in their degree program, especially for making creative, artistic, personalized and decorative chocolate designs. The researchers used the prototyping model as the procedural method for the successful development and implementation of the hardware and software. This method has five phases: quick plan, quick design, prototype construction, delivery and feedback, and communication. The study was evaluated by the BSHRM students, and the respondents' assessments of the software and hardware application are all excellent in terms of accuracy, effectiveness, efficiency, maintainability, reliability and user-friendliness. The overall level of acceptability of the design project as evaluated by the respondents is also excellent. With regard to the observations about the best raw material to use in 3D printing: chocolate is good to use, as the printed material is only slightly distorted, durable and very easy to prepare; icing is also good to use, as the printed material is not distorted and is very durable but takes time to prepare; flour is not good, as the printed material is distorted and not durable, although it is easy to prepare. The computed economic viability of the 3D printer with reference to ROI is 37.14%. The recommendations of the researchers for the design project are as follows: adding a cooling system so that the raw material will be more durable, developing a more simplified version, and improving the extrusion process so that the user does not need to stop the printing process just to replace an empty syringe with a new one.

  5. Interactive 3D Mars Visualization

    Science.gov (United States)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  6. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  7. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. The first approach is sketch-based modeling, the second is procedural-grammar-based modeling, the third is close-range-photogrammetry-based modeling, and the fourth is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this kind is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparison is mainly based on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. It gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each software package. Finally, the study concludes that each piece of software has its own advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation record, Photomodeler gives good

  8. Analysis of 3-D images

    Science.gov (United States)

    Wani, M. Arif; Batchelor, Bruce G.

    1992-03-01

    Deriving a generalized representation of 3-D objects for analysis and recognition is a very difficult task. Three types of representation, chosen according to the type of object, are used in this paper. Objects which have well-defined geometrical shapes are segmented by using a fast edge-region-based segmentation technique. The segmented image is represented by the plan and elevation of each part of the object if the object parts are symmetrical about their central axis. The plan and elevation concept enables representing and analyzing such objects quickly and efficiently. The second type of representation is used for objects having parts which are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects which do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed from their features, which are derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing. A new merging technique is used which yields the current best estimate of break points at each scale. This new technique eliminates the loss of localization accuracy at coarser scales without using a scale-space tracking approach.

  9. 3D Printed Bionic Ears

    Science.gov (United States)

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  10. 3D DNA Origami Crystals.

    Science.gov (United States)

    Zhang, Tao; Hartl, Caroline; Frank, Kilian; Heuer-Jungemann, Amelie; Fischer, Stefan; Nickels, Philipp C; Nickel, Bert; Liedl, Tim

    2018-05-18

    3D crystals assembled entirely from DNA provide a route to design materials on a molecular level and to arrange guest particles in predefined lattices. This requires design schemes that provide high rigidity and sufficiently large open guest space. A DNA-origami-based "tensegrity triangle" structure that assembles into a 3D rhombohedral crystalline lattice with an open structure in which 90% of the volume is empty space is presented here. Site-specific placement of gold nanoparticles within the lattice demonstrates that these crystals are spacious enough to efficiently host 20 nm particles in a cavity size of 1.83 × 10^5 nm^3, which would also suffice to accommodate ribosome-sized macromolecules. The accurate assembly of the DNA origami lattice itself, as well as the precise incorporation of gold particles, is validated by electron microscopy and small-angle X-ray scattering experiments. The results show that it is possible to create DNA building blocks that assemble into lattices with customized geometry. Site-specific hosting of nano objects in the optically transparent DNA lattice sets the stage for metamaterial and structural biology applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
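
    A quick back-of-the-envelope check (not from the paper) of how much of the reported cavity a 20 nm spherical guest particle would occupy:

        import math

        cavity_nm3 = 1.83e5                 # cavity volume reported above
        particle_d_nm = 20.0                # guest gold particle diameter
        particle_vol = (math.pi / 6.0) * particle_d_nm ** 3   # sphere volume from its diameter
        print(round(particle_vol), round(100 * particle_vol / cavity_nm3, 1))  # ~4189 nm^3, ~2.3%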

  11. 3D printed bionic ears.

    Science.gov (United States)

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  12. Overview of fast algorithm in 3D dynamic holographic display

    Science.gov (United States)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, and this is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce the memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) algorithms based on the point-based method, and fully analytical and one-step methods based on the polygon-based method. In this presentation, we give an overview of various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithm with low memory usage, the C-LUT, and the one-step polygon-based method based on the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
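
    The point-based look-up table idea mentioned above can be sketched as follows: the fringe pattern written by a point source at a given depth is precomputed once, and each object point then only shifts and weights that stored pattern instead of recomputing it. The Python/NumPy toy below is illustrative only (it is not the C-LUT algorithm of the cited work); resolution, pixel pitch, wavelength and object points are all assumed values.

        import numpy as np

        N = 256                      # hologram resolution (pixels)
        pitch = 8e-6                 # pixel pitch (m)
        wavelength = 532e-9          # wavelength (m)

        def zone_plate(depth):
            """Fringe pattern a single point at distance `depth` writes onto the
            hologram plane; one such pattern per depth plane is a LUT entry."""
            coords = (np.arange(N) - N // 2) * pitch
            x, y = np.meshgrid(coords, coords)
            r = np.sqrt(x**2 + y**2 + depth**2)
            return np.exp(1j * 2.0 * np.pi * r / wavelength)

        depths = [0.10, 0.12, 0.15]                  # discrete depth planes (m)
        lut = {d: zone_plate(d) for d in depths}     # precompute once

        # Object points: (x shift px, y shift px, depth m, amplitude).
        points = [(-30, 10, 0.10, 1.0), (25, -20, 0.15, 0.7)]

        hologram = np.zeros((N, N), dtype=complex)
        for dx, dy, depth, amp in points:
            # Re-use the stored pattern by shifting it instead of recomputing it
            # (wrap-around at the borders is ignored in this toy example).
            hologram += amp * np.roll(lut[depth], shift=(dy, dx), axis=(0, 1))

        phase_hologram = np.angle(hologram)          # phase-only hologram for an SLM
        print(phase_hologram.shape)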

  13. RELAP5-3D User Problems

    International Nuclear Information System (INIS)

    Riemke, Richard Allan

    2001-01-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U. S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  14. LOTT RANCH 3D PROJECT

    International Nuclear Information System (INIS)

    Larry Lawrence; Bruce Miller

    2004-01-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would not have been practical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution tools, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  15. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.
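
    The statistical compatibility notion underlying batch validation techniques such as the HOHCT can be illustrated with the standard chi-square gating test on the innovation, i.e. the difference between an observed and a predicted measurement. The Python sketch below shows only that underlying test, not the HOHCT algorithm itself; the covariance and the example innovations are assumed values.

        import numpy as np
        from scipy.stats import chi2

        def is_compatible(innovation, innovation_cov, confidence=0.95):
            """Accept a feature-measurement pairing if its squared Mahalanobis
            distance lies inside the chi-square gate for the given confidence."""
            d2 = innovation @ np.linalg.inv(innovation_cov) @ innovation
            return d2 <= chi2.ppf(confidence, df=innovation.size)

        S = np.array([[4.0, 0.0], [0.0, 4.0]])            # innovation covariance (px^2)
        print(is_compatible(np.array([1.5, -2.0]), S))    # True: inside the 95% gate
        print(is_compatible(np.array([9.0, 8.0]), S))     # False: rejected as incompatible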

  16. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    Science.gov (United States)

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on the densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, and their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of deprived eyes were reduced with increasing duration of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In the non-deprived eyes, in contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in the cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye.

  17. Functional vision loss: a diagnosis of exclusion.

    Science.gov (United States)

    Villegas, Rex B; Ilsen, Pauline F

    2007-10-01

    Most cases of visual acuity or visual field loss can be attributed to ocular pathology or ocular manifestations of systemic pathology. They can also occasionally be attributed to nonpathologic processes or malingering. Functional vision loss is any decrease in vision the origin of which cannot be attributed to a pathologic or structural abnormality. Two cases of functional vision loss are described. In the first, a 58-year-old man presented for a baseline eye examination for enrollment in a vision rehabilitation program. He reported bilateral blindness since a motor vehicle accident with head trauma 4 years prior. Entering visual acuity was "no light perception" in each eye. Ocular health examination was normal and the patient made frequent eye contact with the examiners. He was referred for neuroimaging and electrophysiologic testing. The second case was a 49-year-old man who presented with a long history of intermittent monocular diplopia. His medical history was significant for psycho-medical evaluations and a diagnosis of factitious disorder. Entering uncorrected visual acuities were 20/20 in each eye, but visual field testing found constriction. No abnormalities were found that could account for the monocular diplopia or visual field deficit. A diagnosis of functional vision loss secondary to factitious disorder was made. Functional vision loss is a diagnosis of exclusion. In the event of reduced vision in the context of a normal ocular health examination, all other pathology must be ruled out before making the diagnosis of functional vision loss. Evaluation must include auxiliary ophthalmologic testing, neuroimaging of the visual pathway, review of the medical history and lifestyle, and psychiatric evaluation. Comanagement with a psychiatrist is essential for patients with functional vision loss.

  18. A laminar cortical model of stereopsis and 3D surface perception: closure and da Vinci stereopsis.

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen

    2005-01-01

    A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model includes two main new developments: (1) It clarifies how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain data about stereopsis. This feedback has previously been used to explain data about 3D figure-ground perception. (2) It proposes that the binocular false match problem is subsumed under the Gestalt grouping problem. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The enhanced model explains all the psychophysical data previously simulated by Grossberg and Howe (2003), such as contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, and da Vinci stereopsis. It also explains psychophysical data about perceptual closure and variations of da Vinci stereopsis that previous models cannot yet explain.
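
    For readers unfamiliar with the binocular correspondence problem the model addresses, the toy block-matching sketch below searches along a scanline for the disparity whose right-image patch best matches a left-image patch; with repetitive texture such a search can return false matches, which is exactly what the model's disparity filter is meant to suppress. The example is purely illustrative and unrelated to the laminar cortical circuitry itself.

        import numpy as np

        def best_disparity(left_row, right_row, x, patch=3, max_disp=8):
            """Return the disparity whose right-image patch best matches the left patch at x."""
            ref = left_row[x:x + patch]
            errors = []
            for d in range(max_disp + 1):
                if x - d < 0:
                    break
                cand = right_row[x - d:x - d + patch]
                errors.append((np.sum((ref - cand) ** 2), d))
            return min(errors)[1]

        rng = np.random.default_rng(0)
        right_row = rng.random(64)
        left_row = np.roll(right_row, 4)                  # simulate a true disparity of 4 px
        print(best_disparity(left_row, right_row, x=20))  # 4 on richly textured input;
                                                          # periodic texture would allow false matches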

  19. 3D biometrics systems and applications

    CERN Document Server

    Zhang, David

    2013-01-01

    Includes discussions on popular 3D imaging technologies, combines them with biometric applications, and then presents real 3D biometric systems. Introduces many efficient 3D feature extraction, matching, and fusion algorithms. Techniques presented have been supported by experimental results using various 3D biometric classifications.

  20. Telerobotics and 3-d TV

    International Nuclear Information System (INIS)

    Able, E.

    1990-01-01

    This paper reports on the development of telerobotic techniques that can be used in the nuclear industry. The approach has been to apply available equipment, modify available equipment, or design and build anew. The authors have successfully built an input controller which can be used with standard industrial robots, converting them into telerobots. A clean room industrial robot has been re-engineered into an advanced telerobot engineered for the nuclear industry, using a knowledge of radiation tolerance design principles and collaboration with the manufacturer. A powerful hydraulic manipulator has been built to respond to a need for more heavy duty devices for in-cell handling. A variety of easy to use 3-D TV systems has been developed

  1. Conducting Polymer 3D Microelectrodes

    Directory of Open Access Journals (Sweden)

    Jenny Emnéus

    2010-12-01

    Full Text Available Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements.

  2. Embedding complex objects with 3d printing

    KAUST Repository

    Hussain, Muhammad Mustafa

    2017-10-12

    A CMOS technology-compatible fabrication process for flexible CMOS electronics embedded during additive manufacturing (i.e. 3D printing). A method for such a process may include printing a first portion of a 3D structure; pausing the step of printing the 3D structure to embed the flexible silicon substrate; placing the flexible silicon substrate in a cavity of the first portion of the 3D structure to embed the flexible silicon substrate in the 3D structure; and resuming the step of printing the 3D structure to form the second portion of the 3D structure.

  3. Supernova Remnant in 3-D

    Science.gov (United States)

    2009-01-01

    of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through. The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave. This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component, which astronomers were unable to map into 3-D prior to these Spitzer observations, consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron. High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these structures, but their orientation and
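
    The Doppler relation underlying this kind of mapping is the standard non-relativistic one: a line emitted at rest wavelength λ_rest and observed at λ_obs corresponds to a line-of-sight velocity v ≈ c (λ_obs − λ_rest) / λ_rest. A minimal illustration follows; the wavelength values in the example call are made up for the example, not Cas A measurements.

    ```python
    # Line-of-sight velocity from Doppler shift: v ≈ c * (λ_obs − λ_rest) / λ_rest.
    C_KM_S = 299_792.458  # speed of light in km/s

    def los_velocity(lambda_obs_um, lambda_rest_um):
        """Positive result = redshifted (moving away); negative = blueshifted."""
        return C_KM_S * (lambda_obs_um - lambda_rest_um) / lambda_rest_um

    # Illustrative values only: an infrared line observed 0.3 um redward of rest.
    print(los_velocity(lambda_obs_um=26.3, lambda_rest_um=26.0))  # ~3460 km/s
    ```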

  4. Natural fibre composites for 3D Printing

    OpenAIRE

    Pandey, Kapil

    2015-01-01

    3D printing has been a common option for prototyping. Not all materials are suitable for 3D printing, however. Various studies have been done, and many are still ongoing, regarding the suitability of materials for 3D printing. This thesis work explores the possibility of 3D printing certain polymer composite materials. The main objective of this thesis work was to study the possibility of 3D printing a polymer composite material composed of natural fibre composite and various different ...

  5. 3-D discrete analytical ridgelet transform.

    Science.gov (United States)

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.
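
    The Fourier strategy described here can be sketched as: take the 3-D FFT of the volume, read its values along radial lines through the origin, inverse-transform each line to get a discrete Radon projection (projection-slice theorem), and finally apply a 1-D wavelet transform along each projection to obtain ridgelet coefficients. The code below is a rough nearest-neighbour illustration of that pipeline under stated assumptions (cubic volume, unit direction vectors, NumPy and PyWavelets available); it ignores the arithmetical thickness and the exact discrete analytical line definition of the paper.

    ```python
    import numpy as np
    import pywt

    def ridgelet_3d_sketch(volume, directions, wavelet="db4"):
        """Rough 3-D ridgelet sketch via the Fourier strategy.  'volume' is an
        n x n x n array; 'directions' is an (m, 3) array of unit vectors."""
        n = volume.shape[0]
        F = np.fft.fftshift(np.fft.fftn(volume))
        center = n // 2
        radii = np.arange(-center, n - center)              # signed sample positions
        coeffs = []
        for d in directions:
            # Nearest-neighbour samples of the 3-D spectrum along one radial line.
            pts = np.rint(center + np.outer(radii, d)).astype(int)
            pts = np.clip(pts, 0, n - 1)
            line = F[pts[:, 0], pts[:, 1], pts[:, 2]]
            # 1-D inverse FFT of the central slice approximates a Radon projection.
            projection = np.fft.ifft(np.fft.ifftshift(line)).real
            # 1-D wavelet transform of the projection gives ridgelet coefficients.
            coeffs.append(pywt.wavedec(projection, wavelet))
        return coeffs
    ```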

  6. ORMGEN3D, 3-D Crack Geometry FEM Mesh Generator

    International Nuclear Information System (INIS)

    Bass, B.R.; Bryson, J.W.

    1994-01-01

    1 - Description of program or function: ORMGEN3D is a finite element mesh generator for computational fracture mechanics analysis. The program automatically generates a three-dimensional finite element model for six different crack geometries. These geometries include flat plates with straight or curved surface cracks and cylinders with part-through cracks on the outer or inner surface. Mathematical or user-defined crack shapes may be considered. The curved cracks may be semicircular, semi-elliptical, or user-defined. A cladding option is available that allows for either an embedded or penetrating crack in the clad material. 2 - Method of solution: In general, one eighth or one-quarter of the structure is modelled depending on the configuration or option selected. The program generates a core of special wedge or collapsed prism elements at the crack front to introduce the appropriate stress singularity at the crack tip. The remainder of the structure is modelled with conventional 20-node iso-parametric brick elements. Element group I of the finite element model consists of an inner core of special crack tip elements surrounding the crack front enclosed by a single layer of conventional brick elements. Eight element divisions are used in a plane orthogonal to the crack front, while the number of element divisions along the arc length of the crack front is user-specified. The remaining conventional brick elements of the model constitute element group II. 3 - Restrictions on the complexity of the problem: Maxima of 5,500 nodes, 4 layers of clad elements

  7. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  8. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  9. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased by continued VWT training. Thus, optomotry and VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  10. Types and techniques of 3D modeling (Vrste i tehnike 3D modeliranja)

    OpenAIRE

    Bernik, Andrija

    2010-01-01

    The process of creating real or imaginary 3D objects is called 3D modeling. Advances in computer technology allow the user to choose among various methods and techniques in order to achieve optimal efficiency. The choice lies between classical 3D modeling and 3D scanning using specialized software and hardware solutions. With 3D modeling techniques, the user can build a 3D model in several ways: using polygons, curves, or a hybrid of the two techniques known as subdivision modeling...

  11. An overview of 3D printing in dental technology (Kuvaus 3D-tulostamisesta hammastekniikassa)

    OpenAIRE

    Munne, Mauri; Mustonen, Tuomas; Vähäjylkkä, Jaakko

    2013-01-01

    3D printing is developing rapidly and becoming more widespread all the time. As printer accuracy improves, 3D printing is also gaining a foothold in the field of dental technology. The purpose of this thesis is to describe the state of 3D printing in dental technology. 3D printing is still fairly rare in Finland, so the aim of the thesis is to bring together all available information related to 3D printing in dental technology. A further aim is to test a 3D printer in practice, from scanning of the mouth...

  12. NIF Ignition Target 3D Point Design

    Energy Technology Data Exchange (ETDEWEB)

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  13. Magma emplacement in 3D

    Science.gov (United States)

    Gorczyk, W.; Vogt, K.

    2017-12-01

    Magma intrusion is a major material transfer process in Earth's continental crust. Yet, the mechanical behavior of the intruding magma and its host are a matter of debate. In this study, we present a series of numerical thermo-mechanical experiments on mafic magma emplacement in 3D. In our model, we place the magmatic source region (40 km diameter) at the base of the mantle lithosphere and connect it to the crust by a 3 km wide channel, which may have evolved at early stages of magmatism during rapid ascent of hot magmatic fluids/melts. Our results demonstrate the response of the continental crust to magma intrusion. We observe changes in intrusion geometry, between dikes, cone-sheets, sills, plutons, ponds, funnels, and finger-shaped and stock-like intrusions, as well as in injection time. The rheology and temperature of the host rock are the main controlling factors in the transition between these different modes of intrusion. Viscous deformation in the warm and deep crust favours host-rock displacement, and magma pools along the crust-mantle boundary, forming deep-seated plutons or magma ponds in the lower to middle crust. Brittle deformation in the cool and shallow crust induces cone-shaped fractures in the host rock and enables emplacement of finger- or stock-like intrusions at shallow or intermediate depth. A combination of viscous and brittle deformation forms funnel-shaped intrusions in the middle crust. Low-density source magma results in intrusions that are T-shaped in cross-section, with magma sheets at the surface.

  14. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    Science.gov (United States)

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. © 2015 American Academy of Forensic Sciences.

  15. 3D panorama stereo visual perception centering on the observers

    International Nuclear Information System (INIS)

    Tang, YiPing; Zhou, Jingkai; Xu, Haitao; Xiang, Yun

    2015-01-01

    For existing three-dimensional (3D) laser scanners, acquiring geometry and color information of the objects simultaneously is difficult. Moreover, the current techniques cannot store, modify, and model the point clouds efficiently. In this work, we have developed a novel sensor system, which is called active stereo omni-directional vision sensor (ASODVS), to address those problems. ASODVS is an integrated system composed of a single-view omni-directional vision sensor and a mobile planar green laser generator platform. Driven by a stepper motor, the laser platform can move vertically along the axis of the ASODVS. During the scanning of the laser generators, the panoramic images of the environment are captured and the characteristics and space location information of the laser points are calculated accordingly. Based on the image information of the laser points, the 3D space can be reconstructed. Experimental results demonstrate that the proposed ASODVS system can measure and reconstruct the 3D space in real-time and with high quality. (paper)

  16. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity; Table for 2010 ...

  17. Will 3D printers manufacture your meals?

    NARCIS (Netherlands)

    Bommel, K.J.C. van

    2013-01-01

    These days, 3D printers are laying down plastics, metals, resins, and other materials in whatever configurations creative people can dream up. But when the next 3D printing revolution comes, you'll be able to eat it.

  18. Estonia is too small for 3D (Eesti 3D jaoks kitsas) / Virge Haavasalu

    Index Scriptorium Estoniae

    Haavasalu, Virge

    2009-01-01

    Production company Digitaalne Sputnik: Kaur and Kaspar Kallas are engaged in film production and in the product development of 3D digital cameras (Silicon Imaging LLC). On the Kallas brothers' 3D camera. Commented on by Marge Liiske, director of the Estonian Film Foundation.

  19. 3D-Printed Millimeter Wave Structures

    Science.gov (United States)

    2016-03-14

    3D printing of styrene-butadiene-styrene (SBS) and styrene ethylene/butylene-styrene (SEBS) is used to demonstrate the feasibility of 3D-printed ... The work demonstrates the resolution of the printer with a 10 micron nozzle. [Figure 2: Measured loss tangent of SEBS and SBS samples.] Additionally, a dielectric lens is printed which improves the antenna gain of an open-ended WR-28 waveguide from 7 to 8.5 dBi. Keywords: 3D printing

  20. Digital Dentistry — 3D Printing Applications

    OpenAIRE

    Zaharia Cristian; Gabor Alin-Gabriel; Gavrilovici Andrei; Stan Adrian Tudor; Idorasi Laura; Sinescu Cosmin; Negruțiu Meda-Lavinia

    2017-01-01

    Three-dimensional (3D) printing is an additive manufacturing method in which a 3D item is formed by laying down successive layers of material. 3D printers are machines that produce representations of objects either planned with a CAD program or scanned with a 3D scanner. Printing is a method for replicating text and pictures, typically with ink on paper. We can print different dental pieces using different methods such as selective laser sintering (SLS), stereolithography, fused deposition mo...

  1. Detectors in 3D available for assessment

    CERN Document Server

    Re, Valerio

    2014-01-01

    This deliverable reports on 3D devices resulting from the vertical integration of pixel sensors and readout electronics. After 3D integration steps such as etching of through-silicon vias and backside metallization of readout integrated circuits, ASICs and sensors are interconnected to form a 3D pixel detector. Various 3D detectors have been devised in AIDA WP3 and their status and performance is assessed here.

  2. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  3. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    Science.gov (United States)

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  4. 3D modelling for multipurpose cadastre

    NARCIS (Netherlands)

    Abduhl Rahman, A.; Van Oosterom, P.J.M.; Hua, T.C.; Sharkawi, K.H.; Duncan, E.E.; Azri, N.; Hassan, M.I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D

  5. Expanding Geometry Understanding with 3D Printing

    Science.gov (United States)

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  6. 3D Characterization of Recrystallization Boundaries

    DEFF Research Database (Denmark)

    Zhang, Yubin; Godfrey, Andrew William; MacDonald, A. Nicole

    2016-01-01

    A three-dimensional (3D) volume containing a recrystallizing grain and a deformed matrix in a partially recrystallized pure aluminum was characterized using the 3D electron backscattering diffraction technique. The 3D shape of a recrystallizing boundary, separating the recrystallizing grain...... on the formation of protrusions/retrusions....

  7. 3D-Printable Antimicrobial Composite Resins

    NARCIS (Netherlands)

    Yue, Jun; Zhao, Pei; Gerasimov, Jennifer Y.; van de Lagemaat, Marieke; Grotenhuis, Arjen; Rustema-Abbing, Minie; van der Mei, Henny C.; Busscher, Henk J.; Herrmann, Andreas; Ren, Yijin

    2015-01-01

    3D printing is seen as a game-changing manufacturing process in many domains, including general medicine and dentistry, but the integration of more complex functions into 3D-printed materials remains lacking. Here, it is expanded on the repertoire of 3D-printable materials to include antimicrobial

  8. Monocular Depth Perception and Robotic Grasping of Novel Objects

    Science.gov (United States)

    2009-06-01

    obtain its full 3D shape, and applies even to textureless, translucent or reflective objects on which standard stereo 3D reconstruction fares poorly. We ... purple) in image A. [Section 3.3.4, Phantom planes:] This cue enforces occlusion constraints across multiple cameras. Concretely, each small plane (superpixel) ...

  9. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  10. Wafer level 3-D ICs process technology

    CERN Document Server

    Tan, Chuan Seng; Reif, L Rafael

    2009-01-01

    This book focuses on foundry-based process technology that enables the fabrication of 3-D ICs. The core of the book discusses the technology platform for pre-packaging wafer lever 3-D ICs. However, this book does not include a detailed discussion of 3-D ICs design and 3-D packaging. This is an edited book based on chapters contributed by various experts in the field of wafer-level 3-D ICs process technology. They are from academia, research labs and industry.

  11. 3D Printing of Fluid Flow Structures

    OpenAIRE

    Taira, Kunihiko; Sun, Yiyang; Canuto, Daniel

    2017-01-01

    We discuss the use of 3D printing to physically visualize (materialize) fluid flow structures. Such 3D models can serve as a refreshing hands-on means to gain deeper physical insights into the formation of complex coherent structures in fluid flows. In this short paper, we present a general procedure for taking 3D flow field data and producing a file format that can be supplied to a 3D printer, with two examples of 3D printed flow structures. A sample code to perform this process is also prov...
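
    The procedure outlined above, turning 3D flow-field data into a printable file, can be prototyped by extracting an isosurface of a scalar flow quantity (for example vorticity magnitude) with marching cubes and writing the triangles to an ASCII STL file. The sketch below assumes scikit-image is available and is not the authors' sample code; the field, iso-level, and grid spacing are placeholders.

    ```python
    import numpy as np
    from skimage import measure

    def flow_isosurface_to_stl(field, level, path, spacing=(1.0, 1.0, 1.0)):
        """Extract an isosurface of a 3-D scalar flow quantity and write ASCII STL."""
        verts, faces, normals, _ = measure.marching_cubes(field, level=level,
                                                          spacing=spacing)
        with open(path, "w") as f:
            f.write("solid flow\n")
            for tri in faces:
                v0, v1, v2 = verts[tri]
                n = np.cross(v1 - v0, v2 - v0)
                n = n / (np.linalg.norm(n) + 1e-12)      # unit facet normal
                f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                for v in (v0, v1, v2):
                    f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                f.write("    endloop\n  endfacet\n")
            f.write("endsolid flow\n")
    ```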

  12. The Esri 3D city information model

    International Nuclear Information System (INIS)

    Reitz, T; Schubiger-Banz, S

    2014-01-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing the urban environments in 3D is an increasingly important and complex undertaking. To help solving this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases

  13. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide an initial depth map and scale-correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera's visual coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlative semidense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with depth-map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. At the end of this paper, several indoor experiments are conducted. The results suggest that the proposed approach is able to achieve higher reconstruction accuracy when compared with the traditional LSD-SLAM approach, and the approach can also run in real time on a commonly used computer.
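
    The role of the chessboard in such an approach, anchoring an otherwise scale-free monocular reconstruction to metric units, can be sketched with standard OpenCV calls. The board dimensions, square size, and function below are illustrative assumptions, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def metric_camera_pose(gray, K, dist, square_size_m=0.03, pattern=(9, 6)):
        """Estimate a metric camera pose from a chessboard of known square size.
        Because the board's 3-D corner coordinates are given in metres, the
        recovered translation (and hence the map scale) is metric as well."""
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            return None
        # 3-D corner coordinates on the board plane (Z = 0), in metres.
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size_m
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        return (rvec, tvec) if ok else None
    ```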

  14. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    Science.gov (United States)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  15. RELAP5-3D User Problems

    Energy Technology Data Exchange (ETDEWEB)

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) [1] is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics [2] and 3D neutron kinetics [3,4]. Assessment, verification, and validation of the 3D capability in RELAP5-3D is discussed in the literature [5-10]. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  16. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  17. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely used in many fields such as structural measurement, topographic surveying, and architectural and archeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera poses (motion); it is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Without additional information on the camera or the scene, parallel lines are not preserved as parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine the reconstruction and achieve precise 3D points, a more general approach, namely bundle adjustment, is used. Finally, two real cases (an excavation and a tower) are reconstructed.
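
    As a concrete, much-simplified illustration of the matching-and-reconstruction steps described above, a two-view version of the pipeline can be written with OpenCV. The sketch assumes calibrated grayscale images with a known intrinsic matrix K, uses SIFT matching, essential-matrix estimation, and triangulation, and omits the projective-to-metric upgrade and bundle adjustment discussed in the abstract; it is not the authors' code.

    ```python
    import cv2
    import numpy as np

    def two_view_points(img1, img2, K):
        """Two-view structure from motion: SIFT matches -> essential matrix ->
        relative pose -> triangulated 3-D points (up to global scale)."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        # Ratio-test matching of SIFT descriptors.
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        # Relative pose from the essential matrix (RANSAC rejects outliers).
        E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)   # homogeneous 4xN
        return (X[:3] / X[3]).T                         # Nx3 points
    ```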

  18. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on aerial oblique images for building three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained in a prior processing step, the method computes the geometric projection between object space and image space through the collinearity equations to obtain the three-dimensional structure and texture information of the buildings; an optimization algorithm then selects the best texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Reconstruction results with real image textures show that the method offers a high degree of automation, vivid visual effect and low cost, and provides an effective means for rapid and widespread reconstruction of real textures for 3D city models.
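
    The collinearity equations referred to here relate a ground point (X, Y, Z) to its image coordinates (x, y) through the camera station (X_S, Y_S, Z_S), principal point (x_0, y_0), focal length f, and the elements r_ij of the rotation matrix. In one common convention (sign and index conventions vary between textbooks) they read:

    ```latex
    x = x_0 - f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                       {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}, \qquad
    y = y_0 - f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                       {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
    ```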

  19. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    Full Text Available This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
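
    The ICP-based co-registration mentioned above, aligning the thermal-image point cloud with the RGB point cloud, could for example be prototyped with Open3D's point-to-point ICP. The file names, correspondence threshold, and identity initialization below are placeholders for illustration, not the authors' procedure.

    ```python
    import numpy as np
    import open3d as o3d

    # Align the (typically sparser) thermal point cloud to the RGB point cloud.
    source = o3d.io.read_point_cloud("tir_cloud.ply")   # placeholder file names
    target = o3d.io.read_point_cloud("rgb_cloud.ply")
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        0.05,                 # max correspondence distance (metres, assumed)
        np.eye(4),            # initial transform: identity
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(result.transformation)   # 4x4 rigid transform mapping TIR onto RGB
    ```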

  20. 3D exploitation of large urban photo archives

    Science.gov (United States)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
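
    The core operation described above, projecting 3D world-space or GIS features into a georegistered 2D image plane, reduces to a pinhole projection once the camera pose is known. The sketch below is a generic illustration; the camera model and frame conventions are assumptions, not the authors' system.

    ```python
    import numpy as np

    def project_world_points(X_world, K, R, t):
        """Project Nx3 world points into pixel coordinates for a camera with
        intrinsics K and pose (R, t) mapping world -> camera coordinates."""
        Xc = R @ X_world.T + t.reshape(3, 1)      # 3xN points in the camera frame
        in_front = Xc[2] > 0                      # keep points ahead of the camera
        uvw = K @ Xc
        uv = (uvw[:2] / uvw[2]).T                 # Nx2 pixel coordinates
        return uv, in_front
    ```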

  1. Structured Light-Based 3D Reconstruction System for Plants.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
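
    For the stereo-pair step of a system like this, a minimal dense reconstruction can be computed with OpenCV's semi-global block matcher on a rectified image pair. The parameters below are illustrative, the inputs are assumed to be 8-bit rectified grayscale images, and the structured-light texture enhancement described in the abstract is assumed to have happened beforehand; this is not the authors' pipeline.

    ```python
    import cv2
    import numpy as np

    def stereo_point_cloud(rect_left, rect_right, Q, num_disp=128, block=5):
        """Dense 3-D points from a rectified stereo pair.  Q is the 4x4
        disparity-to-depth matrix produced by stereo rectification."""
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                     blockSize=block)
        disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0
        points = cv2.reprojectImageTo3D(disp, Q)   # HxWx3 coordinates
        return points[disp > 0]                    # drop invalid disparities
    ```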

  2. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  3. 3D optical measuring technologies for dimensional inspection

    International Nuclear Information System (INIS)

    Chugui, Yu V

    2005-01-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to safety problems in the atomic and railway industries, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method, and the development of a hole-inspection method based on diffractive optical elements. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires noncontact inspection of the geometrical parameters of their components. For these tasks we have developed methods and produced the technical vision measuring systems LMM, CONTROL and PROFILE, and technologies for non-contact 3D dimensional inspection of grid spacers and fuel elements for the VVER-1000 and VVER-440 nuclear reactors, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometrical parameters of running freight-car wheel pairs. The performance of these systems and the results of industrial testing at atomic and railway companies are presented

  4. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), as one of the most commonly used instruments in biology and material sciences, employs electrons instead of light to determine the surface properties of specimens. However, the SEM micrographs still remain 2D images. To effectively measure and visualize the surface attributes, we need to restore the 3D shape model from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we attempt to explain the expanding body of work in this area, including a discussion of recent techniques and algorithms. With the present work, we also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of our proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Structured Light-Based 3D Reconstruction System for Plants

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  6. Global Value Chains from a 3D Printing Perspective

    DEFF Research Database (Denmark)

    Laplume, André O; Petersen, Bent; Pearce, Joshua M.

    2016-01-01

    This article outlines the evolution of additive manufacturing technology, culminating in 3D printing and presents a vision of how this evolution is affecting existing global value chains (GVCs) in production. In particular, we bring up questions about how this new technology can affect...... the geographic span and density of GVCs. Potentially, wider adoption of this technology has the potential to partially reverse the trend towards global specialization of production systems into elements that may be geographically dispersed and closer to the end users (localization). This leaves the question...

  7. Virtual reality 3D headset based on DMD light modulators

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  8. Comparative evaluation of HD 2D/3D laparoscopic monitors and benchmarking to a theoretically ideal 3D pseudodisplay: even well-experienced laparoscopists perform better with 3D.

    Science.gov (United States)

    Wilhelm, D; Reiser, S; Kohn, N; Witte, M; Leiner, U; Mühlbach, L; Ruschin, D; Reiner, W; Feussner, H

    2014-08-01

    Though theoretically superior to standard 2D visualization, 3D video systems have not yet achieved a breakthrough in laparoscopy. The latest 3D monitors, including autostereoscopic displays and high-definition (HD) resolution, are designed to overcome the existing limitations. We performed a randomized study on 48 individuals with different experience levels in laparoscopy. Three different 3D displays (glasses-based 3D monitor, autostereoscopic display, and a mirror-based theoretically ideal 3D display) were compared to a 2D HD display by assessing multiple performance and mental workload parameters and rating the subjects during a laparoscopic suturing task. Electromagnetic tracking provided information on the instruments' pathlength, movement velocity, and economy. The usability, the perception of visual discomfort, and the quality of image transmission of each monitor were subjectively rated. Almost all performance parameters were superior with the conventional glasses-based 3D display compared to the 2D display and the autostereoscopic display, but were often significantly exceeded by the mirror-based 3D display. Subjects performed a task faster and with greater precision when visualization was achieved with the 3D and the mirror-based display. Instrument pathlength was shortened by improved depth perception. Workload parameters (NASA TLX) did not show significant differences. Test persons complained of impaired vision while using the autostereoscopic monitor. The 3D and 2D displays were rated user-friendly and applicable in daily work. Experienced and inexperienced laparoscopists profited equally from using a 3D display, with an improvement in task performance about 20%. Novel 3D displays improve laparoscopic interventions as a result of faster performance and higher precision without causing a higher mental workload. Therefore, they have the potential to significantly impact the further development of minimally invasive surgery. However, as shown by the
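
    The motion-economy metrics reported in studies like this one (instrument path length and movement velocity) are straightforward to derive from sampled tracker positions. The generic computation below assumes a fixed sampling rate and positions in millimetres; it is an illustration, not the authors' analysis code.

    ```python
    import numpy as np

    def motion_metrics(positions_mm, sample_rate_hz):
        """Path length (mm) and mean velocity (mm/s) from an Nx3 array of
        electromagnetically tracked instrument tip positions."""
        steps = np.diff(positions_mm, axis=0)           # displacement per sample
        step_lengths = np.linalg.norm(steps, axis=1)
        path_length = step_lengths.sum()
        duration_s = (len(positions_mm) - 1) / sample_rate_hz
        return path_length, path_length / duration_s
    ```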

  9. Quantum vision in three dimensions

    Science.gov (United States)

    Roth, Yehuda

    We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, where the desire is to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  10. Identification of the transition arrays 3d^7 4s-3d^7 4p in Br X and 3d^6 4s-3d^6 4p in Br XI

    International Nuclear Information System (INIS)

    Zeng, X.T.; Jupen, C.; Bengtsson, P.; Engstroem, L.; Westerlind, M.; Martinson, I.

    1991-01-01

    We report a beam-foil study of multiply ionized bromine in the region 400-1300 Å, performed with 6 and 8 MeV Br ions from a tandem accelerator. At these energies transitions belonging to Fe-like Br X and Mn-like Br XI are expected to be prominent. We have identified 31 lines as 3d^7 4s-3d^7 4p transitions in Br X, from which 16 levels of the previously unknown 3d^7 4s configuration could be established. We have also added 6 new 3d^7 4p levels to the 99 previously known. For Br XI we have classified 9 lines as 3d^6 4s-3d^6 4p combinations. The line identifications have been corroborated by isoelectronic comparisons and theoretical calculations using the superposition-of-configurations technique. (orig.)

  11. 3D PHOTOGRAPHS IN CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    W. Schuhr

    2013-07-01

    Full Text Available This paper on providing "oo-information" (= objective object-information) on cultural monuments and sites, based on 3D photographs, is also a contribution of CIPA task group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate interest in 3D photography among scientists as well as amateurs, 3D masterpieces are presented. It is shown that, owing to their high documentary value ("near reality"), 3D photographs support, e.g., the recording, visualization, interpretation, preservation and restoration of architectural and archaeological objects. This includes examples of excavation documentation and 3D coordinate calculation, as well as 3D photographs applied for virtual museum purposes, as educational tools, and for spatial structure enhancement, which holds in particular for inscriptions and rock art. This paper is also an invitation to participate in a systematic survey of existing international archives of 3D photographs; in this respect, first results toward defining an optimum digitization rate for analog stereo views are reported. It is more than overdue that, in addition to access to international archives of 3D photography, the available 3D photography data should appear in a global GIS (cloud) system such as, e.g., Google Earth. This contribution also deals with exposing new 3D photographs to document monuments of importance for cultural heritage, including the use of 3D and single-lens cameras on a 10 m telescope staff for extremely low earth-based airborne 3D photography as well as for "underwater staff photography". In addition, the use of captive balloon and drone platforms for 3D photography in cultural heritage is reported. It should be emphasized that the still underestimated 3D effect on real objects even allows, e.g., the spatial perception of extremely small scratches as well as of nuances in

  12. 3D Systems” ‘Stuck in the Middle’ of the 3D Printer Boom?

    NARCIS (Netherlands)

    A. Hoffmann (Alan)

    2014-01-01

    3D Systems, the pioneer of 3D printing, predicted a future where "kids from 8 to 80" could design and print their ideas at home. By 2013, 9 years after the creation of the first working 3D printer, there were more than 30 major 3D printing companies competing for market share. 3DS and

  13. Prevalence of color vision deficiency among arc welders.

    Science.gov (United States)

    Heydarian, Samira; Mahjoob, Monireh; Gholami, Ahmad; Veysi, Sajjad; Mohammadi, Morteza

    This study was performed to investigate whether occupationally related color vision deficiency can occur from welding. A total of 50 male welders, who had been working as welders for at least 4 years, were randomly selected as the case group, and 50 age-matched non-welder men, who lived in the same area, were regarded as the control group. Color vision was assessed using the Lanthony desaturated D-15 panel test. The test was performed under a daylight fluorescent lamp with a spectral energy distribution corresponding to a color temperature of 6500 K and a color rendering index of 94, providing 1000 lx on the work plane. The test was carried out monocularly and no time limit was imposed. All data analyses were performed using SPSS, version 22. The prevalence of dyschromatopsia among welders was 15%, which was statistically higher than that of the non-welder group (2%) (p=0.001). Among welders with dyschromatopsia, the color vision deficiency was monocular in 72.7% of cases. There was a positive relationship between employment length and color vision loss (p=0.04). Similarly, a significant correlation was found between the prevalence of color vision deficiency and the average hours of welding per day (p=0.025). Chronic exposure to welding light may cause color vision deficiency. The damage depends on the exposure duration and the length of employment as a welder. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  14. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  15. Remote Collaborative 3D Printing - Process Investigation

    Science.gov (United States)

    2016-04-01

    Remote Collaborative 3D Printing - Process Investigation; Cody M. Reese, PE. [Report form fields and figure labels (CAD model, print model, print preview, printed part, aerial, virtual) omitted.] Approved for public release; distribution is unlimited. Abstract: The Remote Collaborative 3D Printing project is a collaboration between

  16. Microfabricating 3D Structures by Laser Origami

    Science.gov (United States)

    2011-11-09

    DOI: 10.1117/2.1201111.003952. Microfabricating 3D structures by laser origami; Alberto Piqué, Scott Mathews, Andrew Birnbaum, and Nicholas Charipar. A new ... folding known as origami allows the transformation of flat patterns into 3D shapes. A similar approach can be used to generate 3D structures com... geometries. The overarching challenge is to move away from traditional planar semiconductor photolithographic techniques, which severely limit the type of

  17. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  18. 3D images and expert system

    International Nuclear Information System (INIS)

    Hasegawa, Jun-ichi

    1998-01-01

    This paper presents an expert system called 3D-IMPRESS for supporting applications of three dimensional (3D) image processing. This system can automatically construct a 3D image processing procedure based on a pictorial example of the goal given by a user. In the paper, to evaluate the performance of the system, it was applied to construction of procedures for extracting specific component figures from practical chest X-ray CT images. (author)

  19. ERP system for 3D printing industry

    Directory of Open Access Journals (Sweden)

    Deaky Bogdan

    2017-01-01

    Full Text Available GOCREATE is an original cloud-based production management and optimization service which helps 3D printing service providers to use their resources better. The proposed Enterprise Resource Planning system can significantly increase income through improved productivity. With GOCREATE, the 3D printing service providers get a much higher production efficiency at a much lower licensing cost, to increase their competitiveness in the fast growing 3D printing market.

  20. Getting started in 3D with Maya

    CERN Document Server

    Watkins, Adam

    2012-01-01

    Deliver professional-level 3D content in no time with this comprehensive guide to 3D animation with Maya. With over 12 years of training experience, plus several award winning students under his belt, author Adam Watkins is the ideal mentor to get you up to speed with 3D in Maya. Using a structured and pragmatic approach Getting Started in 3D with Maya begins with basic theory of fundamental techniques, then builds on this knowledge using practical examples and projects to put your new skills to the test. Prepared so that you can learn in an organic fashion, each chapter builds on the know

  1. Illustrating Mathematics using 3D Printers

    OpenAIRE

    Knill, Oliver; Slavkovsky, Elizabeth

    2013-01-01

    3D printing technology can help to visualize proofs in mathematics. In this document we aim to illustrate how 3D printing can help to visualize concepts and mathematical proofs. As already known to educators in ancient Greece, models allow to bring mathematics closer to the public. The new 3D printing technology makes the realization of such tools more accessible than ever. This is an updated version of a paper included in book Low-Cost 3D Printing for science, education and Sustainable Devel...

  2. A 3d game in python

    OpenAIRE

    Xu, Minghui

    2014-01-01

    3D game has widely been accepted and loved by many game players. More and more different kinds of 3D games were developed to feed people’s needs. The most common programming language for development of 3D game is C++ nowadays. Python is a high-level scripting language. It is simple and clear. The concise syntax could speed up the development cycle. This project was to develop a 3D game using only Python. The game is about how a cat lives in the street. In order to live, the player need...

  3. Dimensional accuracy of 3D printed vertebra

    Science.gov (United States)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive print process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.
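
    To make "substantially the same" concrete, one common way to summarize agreement between paired measurements (e.g., the 15 features measured on the printed part versus directly on the vertebra) is the mean difference with Bland-Altman limits of agreement. The sketch below is only illustrative; the input arrays are placeholders, not the study's data.

```python
# Simple agreement summary between two sets of paired measurements
# (e.g., printed-part vs. direct measurements of the same anatomic features).
import numpy as np

def agreement_summary(measured_a, measured_b):
    a = np.asarray(measured_a, float)
    b = np.asarray(measured_b, float)
    diff = a - b
    bias = diff.mean()                     # mean difference (systematic offset)
    loa = 1.96 * diff.std(ddof=1)          # Bland-Altman limits of agreement
    mae = np.abs(diff).mean()              # mean absolute difference
    return bias, (bias - loa, bias + loa), mae
```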

  4. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    International audience; Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  5. Fault-tolerant 3D Mapping with Application to an Orchard Robot

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens; Rusu, Radu Bogan

    2009-01-01

    In this paper we present a geometric reasoning method for dealing with noise as well as faults present in 3D depth maps. These maps are acquired using stereo-vision sensors, but our framework makes no assumption about the origin of the underlying data. The method is based on observations made on ...... of comprehensive 3D maps for an agricultural robot operating in an orchard....

  6. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex... of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  7. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu’s 2-step EKF method, the algorithm is more accurate, meeting the needs of real-time, accurate localization in cities.
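
    The fusion step described above follows the standard Extended Kalman filter predict/update recursion. The generic sketch below shows one such cycle with placeholder motion and measurement models; it is not the authors' exact state vector or trifocal-tensor observation equation.

```python
# Minimal Extended Kalman filter step (NumPy only). The motion model f and the
# measurement model h (e.g., a trifocal-tensor residual) are placeholders supplied
# by the caller, together with their Jacobians F and H at the current estimate.
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """x, P: prior state and covariance; u, z: control input and measurement;
    Q, R: process and measurement noise covariances."""
    # Predict
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the (nonlinear) measurement
    y = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```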

  8. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms. So far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well known semi-dense visual SLAM approach are also presented, when used in a monocular visual odometry mode. The experiments show that not including sensor bias and scale-factor uncertainty is very detrimental to the accuracy of the simulation results.
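
    A toy version of such a simulation makes the reported sensitivities concrete: the sketch below triangulates a point from two rectified views and shows how the depth error grows as the baseline shrinks and the image-point noise increases. The focal length, depth, baseline and noise levels are arbitrary assumptions, not the paper's settings.

```python
# Toy study of triangulation accuracy vs. baseline and image-point noise
# (rectified two-view geometry; all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
f = 700.0          # focal length [pixels]
Z_true = 10.0      # true depth of the scene point [m]
n_trials = 10000

for baseline in (0.05, 0.10, 0.50):              # camera translation [m]
    for sigma in (0.2, 0.5, 1.0):                # image-point noise [pixels]
        d_true = f * baseline / Z_true           # ideal disparity
        # both image points are perturbed independently -> disparity noise ~ sqrt(2)*sigma
        d_noisy = d_true + rng.normal(0.0, sigma, n_trials) * np.sqrt(2.0)
        d_noisy = np.clip(d_noisy, 1e-3, None)   # guard against division by ~0
        Z_est = f * baseline / d_noisy
        rmse = np.sqrt(np.mean((Z_est - Z_true) ** 2))
        print(f"baseline={baseline:4.2f} m  sigma={sigma:3.1f} px  depth RMSE={rmse:7.2f} m")
```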

  9. Monocular oral reading after treatment of dense congenital unilateral cataract

    Science.gov (United States)

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  10. Linear study and bundle adjustment data fusion; Application to vision localization

    International Nuclear Information System (INIS)

    Michot, J.

    2010-01-01

    The works presented in this manuscript are in the field of computer vision, and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new technique of line search, dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be easily integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS), accelerate the convergence of the bundle adjustment algorithm. The approximation of the re-projection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two way-ALS variant) by solving a degree 3 (G-ALS) or 5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations are accumulated throughout the trajectory and cause global drifts. In addition, a vision tracking system can be dazzled or used under conditions that temporarily prevent it from computing the system's location. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used will vary depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
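
    In the same spirit as the degree-3 polynomial mentioned above, a non-iterative line search can pick the step amplitude by minimizing a low-degree polynomial model of the cost along the update direction, so the minimizer is a root of a cubic. The sketch below illustrates that generic idea; it is not the thesis' exact algebraic-distance derivation, and the function and variable names are hypothetical.

```python
# Non-iterative line search along a bundle-adjustment step direction: the 1-D cost
# c(s) = cost(p + s*d) is modelled by a quartic polynomial, whose minimizer is a
# root of its (cubic) derivative.
import numpy as np

def polynomial_line_search(cost, p, d, sample_steps=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """cost: callable returning the scalar cost at a parameter vector;
    p: current parameters; d: proposed update direction (e.g. a Gauss-Newton step)."""
    s = np.asarray(sample_steps, dtype=float)
    c = np.array([cost(p + si * d) for si in s])
    quartic = np.polyfit(s, c, 4)                  # quartic model of the 1-D cost
    roots = np.roots(np.polyder(quartic))          # stationary points: roots of a cubic
    real_roots = roots[np.isreal(roots)].real
    candidates = np.concatenate([real_roots, s])   # keep the samples as fallbacks
    return min(candidates, key=lambda si: np.polyval(quartic, si))

# Usage (hypothetical): s_opt = polynomial_line_search(reprojection_cost, params, step)
```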

  11. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    Science.gov (United States)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    . The 3D model integrated into a GIS is now a precious means of communication for the valuation of the site. Accessible to all, including distant visitors, it allows one to discover the castle and its history in an educational and relevant way. From an archaeological point of view, the 3D model offers an overall view of, and perspective on, the constitution of the site, which a 2D document cannot easily provide. The 3D navigation and the integration of 2D data into the model make it possible to analyze the remains in another way, contributing to the faster establishment of new hypotheses. Complementary to other methods already used in archaeology, analysis through 3D visualization gives scientists a significant saving of time, which they can dedicate to the more thorough study of hypotheses previously set aside. In parallel, we created several panoramas and set up a virtual, interactive visit of the site. With a view to perpetuating this project, and to offer future users the means to continue and update this study, we tested and set up processing methodologies. We were thus able to establish clear, orderly procedures that are applicable to the case of Engelbourg as well as to other similar studies. Finally, some hypotheses make it possible to virtually reconstruct first versions of the original state of the castle.

  12. A comparison of low-cost monocular vision techniques for pothole distance estimation

    CSIR Research Space (South Africa)

    Nienaber, S

    2015-12-01

    Full Text Available measurement setup. Consequently, the camera was placed on a tripod at the exact height it would have been in the vehicle. The images used for this study were captured by a GoPro Hero 3+ camera with the resolution set to 3680 x 2760. The high resolution...
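
    In such a setup the distance estimate typically comes from flat-ground pinhole geometry: with a known camera height and intrinsics, the image row of a road point determines its range. The sketch below illustrates that relation; the height, focal length and principal point are assumed values for illustration and are not taken from this study.

```python
# Flat-ground, pinhole-camera distance estimate for a point on the road surface.
# Assumes the optical axis is parallel to the ground and the camera height is known;
# the intrinsics below are illustrative placeholders, not the study's calibration.
def ground_distance_m(v_pixel, cam_height_m=1.2, focal_px=1800.0, cy_px=1380.0):
    """v_pixel: image row of the pothole's contact point with the road (v > cy)."""
    dv = v_pixel - cy_px
    if dv <= 0:
        raise ValueError("point lies at or above the horizon row; distance undefined")
    return focal_px * cam_height_m / dv

# Example: a pothole edge detected 300 rows below the principal point
print(ground_distance_m(1680.0))   # ~7.2 m under the assumed intrinsics
```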

  13. Towards sustainable and clean 3D Geoinformation

    NARCIS (Netherlands)

    Stoter, J.E.; Ledoux, H.; Zlatanova, S.; Biljecki, F.; Kolbe, T.H.; Bill, R.; Donaubauer, A.

    2016-01-01

    This paper summarises the ongoing research activities of the 3D Geoinformation Group at the Delft University of Technology. The main challenge underpinning the research of this group is providing clean and appropriate 3D data about our environment in order to serve a wide variety of applications.

  14. Pattern recognition: invariants in 3D

    International Nuclear Information System (INIS)

    Proriol, J.

    1992-01-01

    In e⁺e⁻ events, the jets have a spherical 3D symmetry. A set of invariants is defined for 3D objects with a spherical symmetry. These new invariants are used to tag the number of jets in e⁺e⁻ events. (K.A.) 3 refs
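
    The abstract does not list the invariants themselves. As an illustration of rotation-invariant 3D event-shape quantities of the kind used for spherically symmetric jet systems, the sketch below computes the classic sphericity and aplanarity from the eigenvalues of the normalized momentum tensor; this is a generic example, not necessarily the paper's invariant set.

```python
# Classic rotation-invariant 3D event-shape variables (sphericity, aplanarity)
# computed from the normalized momentum tensor of an event's particle momenta.
import numpy as np

def event_shape_invariants(momenta):
    """momenta: (N, 3) array of particle 3-momenta in an e+e- event."""
    p = np.asarray(momenta, dtype=float)
    S = p.T @ p / np.sum(p ** 2)                 # normalized momentum tensor, trace 1
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    sphericity = 1.5 * (lam[1] + lam[2])         # 1 for spherical, 0 for pencil-like events
    aplanarity = 1.5 * lam[2]                    # 0 for planar events
    return sphericity, aplanarity
```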

  15. 3D Printing: What Are the Hazards?

    Science.gov (United States)

    Randolph, Susan A

    2018-03-01

    As the popularity of three-dimensional (3D) printers increases, more research will be conducted to evaluate the benefits and risks of this technology. Occupational health professionals should stay abreast of new recommendations to protect workers from exposure to 3D printer emissions.

  16. Illustrating the disassembly of 3D models

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Li, Er; Dong, Weiming; Wonka, Peter; Zhang, Xiaopeng

    2013-01-01

    We present a framework for the automatic disassembly of 3D man-made models and the illustration of the disassembly process. Given an assembled 3D model, we first analyze the individual parts using sharp edge loops and extract the contact faces

  17. 3D, or Not to Be?

    Science.gov (United States)

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  18. Embedding complex objects with 3d printing

    KAUST Repository

    Hussain, Muhammad Mustafa; Diaz, Cordero Marlon Steven

    2017-01-01

    A CMOS technology-compatible fabrication process for flexible CMOS electronics embedded during additive manufacturing (i.e. 3D printing). A method for such a process may include printing a first portion of a 3D structure; pausing the step

  19. 3D Printing of Molecular Models

    Science.gov (United States)

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  20. 3D printing of functional structures

    NARCIS (Netherlands)

    Krijnen, Gijsbertus J.M.

    The technology colloquially known as ‘3D printing’ has developed such diversity in printing technologies and application fields that it now seems anything is possible. However, clearly the ideal 3D printer, with high resolution, multi-material capability, fast printing, etc., is yet to be

  1. 3D Printing. What's the Harm?

    Science.gov (United States)

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  2. 3D Printed Block Copolymer Nanostructures

    Science.gov (United States)

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  3. 3D-printed cereal foods

    NARCIS (Netherlands)

    Noort, M.; Bommel, K. van; Renzetti, S.

    2017-01-01

    Additive manufacturing, also known as 3D printing, is an up-and-coming production technology based on layer-by-layer deposition of material to reproduce a computer-generated 3D design. Additive manufacturing is a collective term used for a variety of technologies, such as fused deposition modeling

  4. A Framework for 3d Printing

    DEFF Research Database (Denmark)

    Pilkington, Alan; Frandsen, Thomas; Kapetaniou, Chrystalla

    3D printing technologies and processes offer such a radical range of options for firms that we currently lack a structured way of recording possible impact and recommending actions for managers. The changes arising from 3D printing include more than just new options for product design, but also...

  5. The 3D-city model

    DEFF Research Database (Denmark)

    Holmgren, Steen; Rüdiger, Bjarne; Tournay, Bruno

    2001-01-01

    We have worked with the construction and use of 3D city models for about ten years. This work has given us valuable experience concerning model methodology. In addition to this collection of knowledge, our perception of the concept of city models has changed radically. In order to explain...... of 3D city models....

  6. 3D Programmable Micro Self Assembly

    National Research Council Canada - National Science Library

    Bohringer, Karl F; Parviz, Babak A; Klavins, Eric

    2005-01-01

    .... We have developed a "self assembly tool box" consisting of a range of methods for micro-scale self-assembly in 2D and 3D. We have shown physical demonstrations of simple 3D self-assemblies which lead...

  7. Wow! 3D Content Awakens the Classroom

    Science.gov (United States)

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  8. Digital Dentistry — 3D Printing Applications

    Directory of Open Access Journals (Sweden)

    Zaharia Cristian

    2017-03-01

    Full Text Available Three-dimensional (3D) printing is an additive manufacturing method in which a 3D item is formed by laying down successive layers of material. 3D printers are machines that produce representations of objects either planned with a CAD program or scanned with a 3D scanner. Printing is a method for replicating text and pictures, typically with ink on paper. We can print different dental pieces using different methods such as selective laser sintering (SLS), stereolithography, fused deposition modeling, and laminated object manufacturing. The materials are certified for printing individual impression trays, orthodontic models, gingiva masks, and different prosthetic objects. The material can reach a flexural strength of more than 80 MPa. 3D printing takes the effectiveness of digital projects to the production phase. Dental laboratories are able to produce crowns, bridges, stone models, and various orthodontic appliances by methods that combine oral scanning, 3D printing, and CAD/CAM design. Modern 3D printing has been used for the development of prototypes for several years, and it has begun to find its use in the world of manufacturing. Digital technology and 3D printing have significantly elevated the rate of success in dental implantology using custom surgical guides and improving the quality and accuracy of dental work.

  9. Case study of 3D fingerprints applications.

    Directory of Open Access Journals (Sweden)

    Feng Liu

    Full Text Available Human fingers are 3D objects. More information will be provided if three dimensional (3D) fingerprints are available compared with two dimensional (2D) fingerprints. Thus, this paper firstly collected 3D finger point cloud data by the Structured-light Illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments are conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. Also, it is helpful to remove false core points. Furthermore, a promising EER of ~1.3% is realized by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition.
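
    For reference, the Equal Error Rate quoted above is the operating point at which the false accept and false reject rates coincide. A minimal way to estimate it from genuine and impostor match-score arrays (placeholders here) is sketched below.

```python
# Estimate the Equal Error Rate (EER) from genuine and impostor match scores
# (higher score = better match). The score arrays are placeholders.
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    gen = np.asarray(genuine_scores, float)
    imp = np.asarray(impostor_scores, float)
    thresholds = np.sort(np.concatenate([gen, imp]))
    frr = np.array([(gen < t).mean() for t in thresholds])   # genuine pairs rejected
    far = np.array([(imp >= t).mean() for t in thresholds])  # impostor pairs accepted
    i = np.argmin(np.abs(far - frr))                         # closest crossing point
    return 0.5 * (far[i] + frr[i])
```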

  10. Immersive 3D Geovisualization in Higher Education

    Science.gov (United States)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  11. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D allows an existing, geo-referenced landscape to be modelled in 3D in only a few hours, offering powerful tools for landscape analysis and understanding. 3D projects can then be inserted into the existing landscape with ease and precision, and the project alternatives and their impact can be visualized and studied in their immediate environment. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and shared more easily with colleagues. For these reasons, LandSIM3D is different from traditional 3D imagery solutions, which are normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  12. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  13. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  14. Inclined nanoimprinting lithography for 3D nanopatterning

    International Nuclear Information System (INIS)

    Liu Zhan; Bucknall, David G; Allen, Mark G

    2011-01-01

    We report a non-conventional shear-force-driven nanofabrication approach, inclined nanoimprint lithography (INIL), for producing 3D nanostructures of varying heights on planar substrates in a single imprinting step. Such 3D nanostructures are fabricated by exploiting polymer anisotropic dewetting where the degree of anisotropy can be controlled by the magnitude of the inclination angle. The feature size is reduced from micron scale of the template to a resultant nanoscale pattern. The underlying INIL mechanism is investigated both experimentally and theoretically. The results indicate that the shear force generated at a non-zero inclination angle induced by the INIL apparatus essentially leads to asymmetry in the polymer flow direction ultimately resulting in 3D nanopatterns with different heights. INIL removes the requirements in conventional nanolithography of either utilizing 3D templates or using multiple lithographic steps. This technique enables various 3D nanoscale devices including angle-resolved photonic and plasmonic crystals to be fabricated.

  15. Density-Based 3D Shape Descriptors

    Directory of Open Access Journals (Sweden)

    Schmitt Francis

    2007-01-01

    Full Text Available We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdf of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.
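
    As a rough illustration of the density-based idea, the sketch below samples points on a triangle mesh (area-weighted), computes one simple local feature (normalized radial distance from the centroid), and turns its kernel density estimate into a fixed-length descriptor. The actual framework uses richer per-triangle features and analytic moment approximations; this is only a toy version with assumed parameter values.

```python
# Toy density-based shape descriptor: KDE of a local surface feature sampled
# from a triangle mesh, evaluated on a fixed grid to give a descriptor vector.
import numpy as np
from scipy.stats import gaussian_kde

def radial_density_descriptor(vertices, faces, n_samples=2000, n_bins=64, seed=None):
    """vertices: (V,3) float array; faces: (F,3) int array of triangle indices."""
    rng = np.random.default_rng(seed)
    v = np.asarray(vertices, float)
    tri = v[np.asarray(faces)]                              # (F, 3, 3) triangle corners
    # area-weighted triangle selection
    areas = 0.5 * np.linalg.norm(np.cross(tri[:, 1] - tri[:, 0],
                                          tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(tri), size=n_samples, p=areas / areas.sum())
    # uniform barycentric sampling inside each chosen triangle
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    s1 = np.sqrt(r1)
    w = np.stack([1.0 - s1, s1 * (1.0 - r2), s1 * r2], axis=1)
    pts = np.einsum('nk,nkd->nd', w, tri[idx])
    # radial-distance feature, normalized for scale invariance
    r = np.linalg.norm(pts - v.mean(axis=0), axis=1)
    r /= r.mean()
    grid = np.linspace(0.0, 3.0, n_bins)
    return gaussian_kde(r)(grid)                            # descriptor vector
```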

  16. 3D graphics and game engines

    OpenAIRE

    Sillanpää, Otto

    2014-01-01

    This thesis investigates how 3D models can be brought into a form in which they can be used in different game engines. The aim of the study is to find out how 3D models are created for game engines, and how 3D modelling programs and game engines differ from each other when handling 3D models. In this work, the game engines used were Valve's Source and Epic Games' Unreal Engine 3. The 3D modelling programs used were Autodesk's 3ds Max 2014 and Blender Foundation's Blender 2.7...

  17. BEAMS3D Neutral Beam Injection Model

    Energy Technology Data Exchange (ETDEWEB)

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  18. Fabrication of 3D Silicon Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; /SINTEF, Oslo; Kenney, C.; Hasi, J.; /SLAC; Da Via, C.; /Manchester U.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is however rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF in collaboration with Stanford Nanofabrication Facility have successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs and the third run is now on-going. This paper reports the status of this fabrication work and the resulted yield. The work of other groups such as the development of double sided 3D detectors is also briefly reported.

  19. Maintaining and troubleshooting your 3D printer

    CERN Document Server

    Bell, Charles

    2014-01-01

    Maintaining and Troubleshooting Your 3D Printer by Charles Bell is your guide to keeping your 3D printer running through preventive maintenance, repair, and diagnosing and solving problems in 3D printing. If you've bought or built a 3D printer such as a MakerBot only to be confounded by jagged edges, corner lift, top layers that aren't solid, or any of a myriad of other problems that plague 3D printer enthusiasts, then here is the book to help you get past all that and recapture the joy of creative fabrication. The book also includes valuable tips for builders and those who want to modify the

  20. The psychology of the 3D experience

    Science.gov (United States)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  1. 3D Visualization Development of SIUE Campus

    Science.gov (United States)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of 3D campus for the Southern Illinois University Edwardsville is demonstrated.

  2. Pathways for Learning from 3D Technology

    Science.gov (United States)

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  3. VITOM 3D: Preliminary Experience in Cranial Surgery.

    Science.gov (United States)

    Rossini, Zefferino; Cardia, Andrea; Milani, Davide; Lasio, Giovanni Battista; Fornari, Maurizio; D'Angelo, Vincenzo

    2017-11-01

    Optimal vision and ergonomics are important factors contributing to achievement of good results during neurosurgical interventions. The operating microscope and the endoscope have partially filled the gap between the need for good surgical vision and maintenance of a comfortable posture during surgery. Recently, a new technology called video-assisted telescope operating monitor or exoscope has been used in cranial surgery. The main drawback with previous prototypes was lack of stereopsis. We present the first case report of cranial surgery performed using the VITOM 3D, an exoscope conjugating 4K resolution view and three-dimensional technology, and discuss advantages and disadvantages compared with the operating microscope. A 50-year-old patient with vertigo and headache linked to a petrous ridge meningioma underwent surgery using the VITOM 3D. Complete removal of the tumor and resolution of symptoms were achieved. The telescope was maintained over the surgical field for the duration of the procedure; a video monitor was placed at 2 m from the surgeons; and a control unit allowed focusing, magnification, and repositioning of the camera. VITOM 3D is a video system that has overcome the lack of stereopsis, a major drawback of previous exoscope models. It has many advantages regarding ergonomics, versatility, and depth of field compared with the operating microscope, but the holder arm and the mechanism of repositioning, refocusing, and magnification need to be ameliorated. Surgeons should continue to use the technology they feel confident with, unless a distinct advantage with newer technologies can be demonstrated. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  5. 3D Laser Scanner for Underwater Manipulation

    Directory of Open Access Journals (Sweden)

    Albert Palomer

    2018-04-01

    Full Text Available Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS is used to autonomously grasp an object from the bottom of a water tank.

  6. 3D Laser Scanner for Underwater Manipulation.

    Science.gov (United States)

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  7. Medical 3D Printing for the Radiologist

    Science.gov (United States)

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  8. 3D bioprinting of tissues and organs.

    Science.gov (United States)

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.

  9. Medical 3D Printing for the Radiologist.

    Science.gov (United States)

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. (©)RSNA, 2015.

  10. Network level pavement evaluation with 1 mm 3D survey system

    Directory of Open Access Journals (Sweden)

    Kelvin C.P. Wang

    2015-12-01

    Full Text Available The latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all 3 directions at highway speed up to 60 mph. This paper introduces the PaveVision3D Ultra technology for rapid network level pavement survey on approximately 1280 center miles of Oklahoma interstate highways. With a sophisticated automated distress analyzer (ADA) software interface, the collected 1 mm 3D data provide Oklahoma Department of Transportation (ODOT) with comprehensive solutions for automated evaluation of pavement surface including longitudinal profile for roughness, transverse profile for rutting, predicted hydroplaning speed for safety analysis, and cracking and various surface defects for distresses. The pruned exact linear time (PELT) method, an optimal partitioning algorithm, is implemented to identify change points and dynamically determine homogeneous segments so as to assist ODOT in effectively using the available 1 mm 3D pavement surface condition data for decision-making. The application of 1 mm 3D laser imaging technology for network survey is unprecedented. This innovative technology allows highway agencies to assess their options in using the 1 mm 3D system for design and management purposes, particularly to meet the data needs for pavement management system (PMS), pavement ME design and highway performance monitoring system (HPMS).
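
    For readers unfamiliar with PELT, the underlying idea is penalized change-point detection over a 1-D condition signal. The sketch below implements the exact optimal-partitioning dynamic program that PELT accelerates with pruning, using a within-segment sum-of-squares cost; the signal values and penalty are illustrative assumptions, not ODOT data.

```python
# Penalized change-point segmentation of a 1-D condition indicator
# (e.g., a rut-depth summary per short section). Exact dynamic program;
# PELT adds pruning for linear expected runtime.
import numpy as np

def segment_homogeneous(signal, penalty):
    x = np.asarray(signal, float)
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

    def seg_cost(i, j):            # sum of squared deviations of x[i:j] from its mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        costs = [F[s] + seg_cost(s, t) + penalty for s in range(t)]
        best = int(np.argmin(costs))
        last[t], F[t] = best, costs[best]
    # backtrack the segment end indices
    cps, t = [], n
    while t > 0:
        cps.append(t)
        t = last[t]
    return sorted(cps)

# Example: a condition shift halfway along a route of 160 sections
data = np.concatenate([np.random.normal(3.0, 0.3, 80), np.random.normal(6.0, 0.3, 80)])
print(segment_homogeneous(data, penalty=5.0))
```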

  11. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
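
    The Dice coefficients reported above compare a predicted binary mask against an expert annotation; for reference, a minimal implementation is given below (the mask arrays are placeholders).

```python
# Dice overlap between a predicted binary mask and a ground-truth annotation,
# as used to score disc and cup segmentations.
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-8):
    pred = np.asarray(pred_mask, bool)
    gt = np.asarray(gt_mask, bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```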

  12. Extra Dimensions: 3D in PDF Documentation

    International Nuclear Information System (INIS)

    Graf, Norman A

    2012-01-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, it does provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  13. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  14. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications

  15. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    Science.gov (United States)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as: object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found efficient under severe noise conditions.

  16. Advanced 3D Printers for Cellular Solids

    Science.gov (United States)

    2016-06-30

    Final Report: Advanced 3D Printers for Cellular Solids (reporting period 1-Aug-2014 to 31-Dec-2015). The views, opinions and/or findings contained in this report are... Keywords: 3D printing, cellular solids. Final Report for DURIP grant W911NF

  17. Pharmacophore definition and 3D searches.

    Science.gov (United States)

    Langer, T; Wolber, G

    2004-12-01

    The most common pharmacophore building concepts based on either the 3D structure of the target or ligand information are discussed together with the application of such models as queries for 3D database search. An overview of the key techniques available on the market is given and differences with respect to algorithms used and performance obtained are highlighted. Pharmacophore modelling and 3D database search are shown to be successful tools for enriching screening experiments aimed at the discovery of novel bio-active compounds. © 2004 Elsevier Ltd. All rights reserved.

  18. 3D radiative transfer in stellar atmospheres

    International Nuclear Information System (INIS)

    Carlsson, M

    2008-01-01

    Three-dimensional (3D) radiative transfer in stellar atmospheres is reviewed with special emphasis on the atmospheres of cool stars and applications. A short review of methods in 3D radiative transfer shows that mature methods exist, both for taking into account radiation as an energy transport mechanism in 3D (magneto-) hydrodynamical simulations of stellar atmospheres and for the diagnostic problem of calculating the emergent spectrum in more detail from such models, both assuming local thermodynamic equilibrium (LTE) and in non-LTE. Such methods have been implemented in several codes, and examples of applications are given.
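
    For reference, the quantity such codes solve for is the specific intensity I_ν; along a ray it obeys the standard transfer equation, shown below in its emission/absorption form and in the equivalent source-function form (textbook notation, not tied to any particular code).

    ```latex
    % Standard radiative transfer equation along a ray (path length s), and the
    % plane-parallel source-function form used in stellar-atmosphere work.
    \frac{\mathrm{d}I_\nu}{\mathrm{d}s} = j_\nu - \alpha_\nu I_\nu ,
    \qquad
    \mu \frac{\partial I_\nu}{\partial \tau_\nu} = I_\nu - S_\nu ,
    \qquad
    S_\nu \equiv \frac{j_\nu}{\alpha_\nu}
    ```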

  19. Nonperturbative summation over 3D discrete topologies

    International Nuclear Information System (INIS)

    Freidel, Laurent; Louapre, David

    2003-01-01

    The group field theories realizing the sum over all triangulations of all topologies of 3D discrete gravity amplitudes are known to be nonuniquely Borel summable. We modify these models to construct a new group field theory which is proved to be uniquely Borel summable, defining in an unambiguous way a nonperturbative sum over topologies in the context of 3D dynamical triangulations and spin foam models. Moreover, we give some arguments to support the fact that, despite our modification, this new model is similar to the original one, and therefore could be taken as a definition of the sum over topologies of 3D quantum gravity amplitudes

  20. 3D background aerodynamics using CFD

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, N.N.

    2002-11-01

    3D rotor computations for the Greek Geovilogiki (GEO) 44 meter rotor equipped with 19 meter blades are performed. The lift and drag polars are extracted at five spanwise locations r/R = (.37, .55, .71, .82, .93) based on identification of stagnation points between 2D and 3D computations. The innermost sections show clear evidence of 3D radial pumping, with increased lift compared to 2D values. In contrast to earlier investigated airfoils, a very limited impact on the drag values is observed. (au)
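
    As an illustration of how such polars are tabulated, the sectional lift and drag coefficients at a spanwise station follow from the force per unit span, the local relative velocity and the chord. The sketch below uses the generic definitions with made-up numbers; it is not the study's post-processing code.

    ```python
    # Generic normalisation used when extracting sectional airfoil polars from a
    # 3D rotor computation (illustrative values only, not the study's data).
    def sectional_coefficients(lift_per_span, drag_per_span, v_rel, chord, rho=1.225):
        """Return (Cl, Cd) from force per unit span [N/m], relative velocity [m/s],
        chord [m] and air density [kg/m^3]."""
        q_c = 0.5 * rho * v_rel**2 * chord      # dynamic pressure times chord
        return lift_per_span / q_c, drag_per_span / q_c

    cl, cd = sectional_coefficients(lift_per_span=1200.0, drag_per_span=35.0,
                                    v_rel=55.0, chord=1.6)
    print(f"Cl = {cl:.3f}, Cd = {cd:.4f}")
    ```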

  1. 3D Printing the ATLAS' barrel toroid

    CERN Document Server

    Goncalves, Tiago Barreiro

    2016-01-01

    The present report summarizes my work as part of the Summer Student Programme 2016 in the CERN IR-ECO-TSP department (International Relations – Education, Communication & Outreach – Teacher and Student Programmes). In particular, I worked closely with the S'Cool LAB team on a science education project. This project included the 3D designing, 3D printing, and assembling of a model of the ATLAS' barrel toroid. A detailed description of the project's development is presented and a short manual on how to use 3D printing software and hardware is attached.

  2. [3D planning in maxillofacial surgery].

    Science.gov (United States)

    Hoarau, R; Zweifel, D; Lanthemann, E; Zrounba, H; Broome, M

    2014-10-01

    The development of new technologies such as three-dimensional (3D) planning has changed everyday practice in maxillofacial surgery. Rapid prototyping associated with 3D planning has also enabled the creation of patient-specific surgical tools, such as cutting guides. As with all new technologies, the uses, practicalities, cost-effectiveness and especially the benefits for patients have to be carefully evaluated. In this paper, several examples of 3D planning that have been used in our institution are presented. The advantages, such as the accuracy of the reconstructive surgery and the decreased operating time, as well as the difficulties, are also addressed.

  3. Participation and 3D Visualization Tools

    DEFF Research Database (Denmark)

    Mullins, Michael; Jensen, Mikkel Holm; Henriksen, Sune

    2004-01-01

    With a departure point in a workshop held at the VR Media Lab at Aalborg University, this paper deals with aspects of public participation and the use of 3D visualisation tools. The workshop grew from a desire to involve a broad collaboration between the many actors in the city through using new...... perceptions of architectural representation in urban design where 3D visualisation techniques are used. It is the authors' general finding that, while 3D visualisation media have the potential to increase understanding of virtual space for the lay public, as well as for professionals, the lay public require......

  4. 3D Bio-Printing Review

    Science.gov (United States)

    Du, Xianbin

    2018-01-01

    The ultimate goal of tissue engineering is to replace pathological or necrotic tissues or organs with artificial ones, and it is a very promising research field. 3D bio-printing is an emerging technology and a branch of tissue engineering that has made significant progress in the past decade. 3D bio-printing can realize tissue and organ construction in vitro and has wide application in basic research and pharmacy. This paper analyses and reviews 3D bio-printing from the perspectives of bioinks, printing technologies and their applications.

  5. 3D printed magnetic polymer composite transformers

    Science.gov (United States)

    Bollig, Lindsey M.; Hilpisch, Peter J.; Mowry, Greg S.; Nelson-Cheeseman, Brittany B.

    2017-11-01

    The possibility of 3D printing a transformer core using fused deposition modeling methods is explored. With the use of additive manufacturing, ideal transformer core geometries can be achieved in order to produce a more efficient transformer. In this work, different 3D print settings and toroidal geometries are tested using a custom integrated magnetic circuit capable of measuring the hysteresis loop of a transformer. These different properties are then characterized, and it is determined that the most effective 3D printed transformer core requires a high fill factor along with a high concentration of magnetic particulate.
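
    The abstract does not spell out how the hysteresis loop is computed; a common way to obtain a B-H loop for a toroidal core from a sampled primary current and sense-winding voltage is sketched below. The turns counts, magnetic path length, and core area are placeholder values, not those of the printed cores.

    ```python
    # Hedged sketch: B-H loop of a toroidal core from sampled primary current and
    # secondary (sense-winding) voltage, using H = N_p * i_p / l_m and
    # B = (1 / (N_s * A)) * integral(v_s dt). Geometry and turns are placeholders.
    import numpy as np

    def bh_loop(i_primary, v_secondary, dt, n_p=50, n_s=50,
                path_length=0.12, core_area=4.0e-5):
        H = n_p * i_primary / path_length                                   # A/m
        flux = np.concatenate(
            ([0.0], np.cumsum(0.5 * (v_secondary[1:] + v_secondary[:-1]) * dt)))
        B = flux / (n_s * core_area)                                        # tesla
        B -= B.mean()                          # remove the integration offset
        return H, B

    # Synthetic 60 Hz excitation, just to exercise the function.
    t = np.arange(0, 0.05, 1e-5)
    H, B = bh_loop(0.5 * np.sin(2 * np.pi * 60 * t),
                   2.0 * np.cos(2 * np.pi * 60 * t), dt=1e-5)
    ```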

  6. An Improved Version of TOPAZ 3D

    International Nuclear Information System (INIS)

    Krasnykh, Anatoly

    2003-01-01

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  7. 3D face modeling, analysis and recognition

    CERN Document Server

    Daoudi, Mohamed; Veltkamp, Remco

    2013-01-01

    3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications.

  8. 3D background aerodynamics using CFD

    DEFF Research Database (Denmark)

    Sørensen, Niels N.

    2002-01-01

    3D rotor computations for the Greek Geovilogiki (GEO) 44 meter rotor equipped with 19 meter blades are performed. The lift and drag polars are extracted at five spanwise locations r/R = (.37, .55, .71, .82, .93) based on identification of stagnation points between 2D and 3D computations. The innermost sections show clear evidence of 3D radial pumping, with increased lift compared to 2D values. In contrast to earlier investigated airfoils, a very limited impact on the drag values is observed.

  9. FUN3D Manual: 13.3

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2018-01-01

    This manual describes the installation and execution of FUN3D version 13.3, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  10. FUN3D Manual: 12.8

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 13.1

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 13.2

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.2, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 12.9

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 13.0

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.7

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. Determination of the 3d³4d and 3d³5s configurations of Fe V

    International Nuclear Information System (INIS)

    Azarov, V.I.

    2001-01-01

    The analysis of the spectrum of four times ionized iron, Fe V, has led to the determination of the 3d³4d and 3d³5s configurations. From 975 classified lines in the region 645-1190 Å we have established 123 of 168 theoretically possible 3d³4d levels and 26 of 38 possible 3d³5s levels. The estimated accuracy of the values of the energy levels of these two configurations is about 0.7 cm⁻¹ and 1.0 cm⁻¹, respectively. The level structure of the system of the 3d⁴, 3d³4s, 3d³4d and 3d³5s configurations has been theoretically interpreted and the energy parameters have been determined by a least-squares fit to the observed levels. A comparison of parameters in the Cr III and Fe V ions is given. (orig.)

  17. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  18. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    Science.gov (United States)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

    In this paper, the authors conducted an experiment to evaluate the user experience (UX) in an actual outdoor environment, assuming casual use of a monocular HMD to view video content during short walks. Eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ and NASA-TLX were used to assess the subjective workloads and symptoms. The objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the workload component of the UX. Future experiments conducted in other locations will involve higher cognitive loads in order to study performance and situation awareness with respect to the actual and media environments.

  19. Restoration of binocular vision in amblyopia.

    Science.gov (United States)

    Hess, R F; Mansouri, B; Thompson, B

    2011-09-01

    To develop a treatment for amblyopia based on re-establishing binocular vision. A novel procedure is outlined for measuring and reducing the extent to which the fixing eye suppresses the fellow amblyopic eye in adults with amblyopia. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate that strabismic amblyopes can combine information normally between their eyes under viewing conditions where suppression is reduced by presenting stimuli of different contrast to each eye. Furthermore, we show that prolonged periods of binocular combination lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Additionally, stereoscopic function was established in the majority of patients tested. We have implemented this approach on a head-mounted device as well as on a handheld iPod. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.
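
    The key manipulation, showing the same stimulus to the two eyes at different contrasts, can be illustrated with a simple contrast-scaling step. This is a generic sketch, not the authors' stimulus code; the contrast values and image are placeholders.

    ```python
    # Hedged sketch of dichoptic contrast balancing: the same stimulus is presented
    # to both eyes, but at reduced contrast to the fixing eye so that the amblyopic
    # eye's input can compete. Contrast values below are illustrative only.
    import numpy as np

    def set_contrast(image, contrast):
        """Rescale an image with values in [0, 1] about its mean luminance."""
        mean = image.mean()
        return np.clip(mean + contrast * (image - mean), 0.0, 1.0)

    stimulus = np.random.rand(256, 256)                       # stand-in for a real stimulus
    fixing_eye_view = set_contrast(stimulus, contrast=0.2)    # low contrast
    amblyopic_eye_view = set_contrast(stimulus, contrast=1.0) # full contrast
    ```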

  20. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  1. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    Science.gov (United States)

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  2. 3D-modeling and 3D-printing explorations on Japanese tea ceremony utensils

    NARCIS (Netherlands)

    Levy, P.D.; Yamada, Shigeru

    2017-01-01

    In this paper, we inquire into aesthetic aspects of the Japanese tea ceremony, described as an aesthetics of imperfection, by means of novel fabrication technologies: 3D-modeling and 3D-printing. To do so, 3D-printed utensils (chashaku and chasen) were iteratively designed for the ceremony and were

  3. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  4. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

  5. Designing Biomaterials for 3D Printing.

    Science.gov (United States)

    Guvendiren, Murat; Molde, Joseph; Soares, Rosane M D; Kohn, Joachim

    2016-10-10

    Three-dimensional (3D) printing is becoming an increasingly common technique to fabricate scaffolds and devices for tissue engineering applications. This is due to the potential of 3D printing to provide patient-specific designs, high structural complexity, and rapid on-demand fabrication at low cost. One of the major bottlenecks that limits the widespread acceptance of 3D printing in biomanufacturing is the lack of diversity in "biomaterial inks". Printability of a biomaterial is determined by the printing technique. Although a wide range of biomaterial inks including polymers, ceramics, hydrogels and composites have been developed, the field is still struggling with the processing of these materials into self-supporting devices with tunable mechanics, degradation, and bioactivity. This review aims to highlight the past and recent advances in biomaterial ink development and design considerations moving forward. A brief overview of 3D printing technologies focusing on ink design parameters is also included.

  6. Tissue and Organ 3D Bioprinting.

    Science.gov (United States)

    Xia, Zengmin; Jin, Sha; Ye, Kaiming

    2018-02-01

    Three-dimensional (3D) bioprinting enables the creation of tissue constructs with heterogeneous compositions and complex architectures. It was initially used for preparing scaffolds for bone tissue engineering. It has recently been adopted to create living tissues, such as cartilage, skin, and heart valve. To facilitate vascularization, hollow channels have been created in the hydrogels by 3D bioprinting. This review discusses the state of the art of the technology, along with a broad range of biomaterials used for 3D bioprinting. It provides an update on recent developments in bioprinting and its applications. 3D bioprinting has profound impacts on biomedical research and industry. It offers a new way to industrialize tissue biofabrication. It has great potential for regenerating tissues and organs to overcome the shortage of organ transplantation.

  7. Mobile 3D Viewer Supporting RFID System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J J; Yang, S W; Choi, Y [Chungang Univ., Seoul (Korea, Republic of)]

    2007-07-01

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas.
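
    The core tessellation step described above, a constrained Delaunay triangulation of a boundary, can be sketched on the desktop with the open-source `triangle` package (a Python wrapper around Shewchuk's Triangle). This is an assumption-laden illustration of the concept, not the paper's mobile implementation.

    ```python
    # Hedged illustration of constrained Delaunay triangulation of a boundary
    # polygon using the third-party `triangle` package (pip install triangle).
    # Generic desktop sketch only; not the authors' mobile code.
    import numpy as np
    import triangle

    # A square outer boundary; `segments` lists the boundary edges to preserve.
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    segments = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])

    # 'p' = triangulate a planar straight-line graph (respects the segments),
    # 'q' = quality mesh (minimum-angle bound), 'a0.05' = maximum triangle area.
    mesh = triangle.triangulate({"vertices": vertices, "segments": segments}, "pqa0.05")
    print(mesh["triangles"].shape)   # (n_triangles, 3) vertex indices
    ```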

  8. 3D Maps Representation Using GNG

    Directory of Open Access Journals (Sweden)

    Vicente Morell

    2014-01-01

    Current RGB-D sensors provide a large amount of valuable information for mobile robotics tasks like 3D map reconstruction, but the storage and processing of the incremental data provided by the different sensors through time quickly become unmanageable. In this work, we focus on 3D map representation and propose the use of the Growing Neural Gas (GNG) network as a model to represent 3D input data. The GNG method is able to represent the input data with a desired number of neurons, or resolution, while preserving the topology of the input space. Experiments show that the GNG method yields a better input-space adaptation than other state-of-the-art 3D map representation methods.
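
    A compact sketch of the standard Growing Neural Gas update loop on a 3D point cloud is given below. The hyperparameters are typical textbook values and the point cloud is random; neither comes from the paper, and node deletion is omitted for brevity.

    ```python
    # Hedged sketch of a standard Growing Neural Gas (GNG) loop fitting a 3D point
    # cloud. Hyperparameters are common defaults, not the paper's settings, and
    # isolated-node removal is skipped to keep the sketch short.
    import numpy as np

    def gng(points, max_nodes=100, n_iter=20000, eps_b=0.05, eps_n=0.006,
            age_max=50, lam=100, alpha=0.5, beta=0.0005, seed=0):
        rng = np.random.default_rng(seed)
        W = [points[rng.integers(len(points))].copy() for _ in range(2)]  # node positions
        E = [0.0, 0.0]                                 # accumulated error per node
        edges = {}                                     # (i, j) with i < j -> age

        def key(a, b):
            return (a, b) if a < b else (b, a)

        for step in range(1, n_iter + 1):
            x = points[rng.integers(len(points))]
            d2 = [float(np.sum((w - x) ** 2)) for w in W]
            s1, s2 = (int(i) for i in np.argsort(d2)[:2])

            E[s1] += d2[s1]                            # accumulate winner's error
            W[s1] += eps_b * (x - W[s1])               # move winner toward the sample
            for (a, b) in list(edges):                 # age edges, move neighbours
                if s1 in (a, b):
                    n = b if a == s1 else a
                    W[n] += eps_n * (x - W[n])
                    edges[(a, b)] += 1
            edges[key(s1, s2)] = 0                     # refresh or create winner edge
            edges = {e: age for e, age in edges.items() if age <= age_max}

            if step % lam == 0 and len(W) < max_nodes:
                q = int(np.argmax(E))                  # node with largest error
                nbrs = [b if a == q else a for (a, b) in edges if q in (a, b)]
                if nbrs:
                    f = max(nbrs, key=lambda n: E[n])  # worst neighbour of q
                    W.append(0.5 * (W[q] + W[f]))      # insert new node between them
                    E[q] *= alpha
                    E[f] *= alpha
                    E.append(E[q])
                    r = len(W) - 1
                    edges.pop(key(q, f), None)
                    edges[key(q, r)] = 0
                    edges[key(f, r)] = 0
            E = [e * (1.0 - beta) for e in E]          # global error decay
        return np.array(W), list(edges)

    nodes, graph = gng(np.random.rand(5000, 3), max_nodes=80, n_iter=8000)
    ```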

  9. Advances in 3D neuronal cell culture

    NARCIS (Netherlands)

    Frimat, Jean Philippe; Xie, Sijia; Bastiaens, Alex; Schurink, Bart; Wolbers, Floor; Den Toonder, Jaap; Luttge, Regina

    2015-01-01

    In this contribution, the authors present our advances in three-dimensional (3D) neuronal cell culture platform technology contributing to controlled environments for microtissue engineering and analysis of cellular physiological and pathological responses. First, a micromachined silicon sieving

  10. 3D VISUALIZATION FOR VIRTUAL MUSEUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    M. Skamantzari

    2016-06-01

    Interest in the development of virtual museums is rising rapidly nowadays. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. A realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve attempts made on virtual museums and the mass production of 3D models.

  11. Intrinsic defects in 3D printed materials

    OpenAIRE

    Bolton, Christopher; Dagastine, Raymond

    2015-01-01

    We discuss the impact of bulk structural defects on the coherence, phase and polarisation of light passing through transparent 3D printed materials fabricated using a variety of commercial print technologies.

  12. Mobile 3D Viewer Supporting RFID System

    International Nuclear Information System (INIS)

    Kim, J. J.; Yang, S. W.; Choi, Y.

    2007-01-01

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas

  13. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring the visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
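
    A sketch of the underlying idea, sampling one model and combining the Euclidean distance to the nearest sample of the other model with the angular difference of the matched normals in a weighted score, is given below. The weighting scheme and weights are illustrative; the paper's exact metric is not reproduced here.

    ```python
    # Hedged sketch of a sampled, normal-aware closeness measure between two models
    # given as point samples with unit normals. The weighted combination below is
    # illustrative only, not the paper's metric.
    import numpy as np
    from scipy.spatial import cKDTree

    def weighted_closeness(pts_a, nrm_a, pts_b, nrm_b, w_pos=1.0, w_normal=0.5):
        """Return (max, mean) of a score mixing Euclidean distance to the nearest
        sample of B with the angular difference of the matched normals."""
        dist, idx = cKDTree(pts_b).query(pts_a)              # nearest-neighbour search
        cos_sim = np.clip(np.sum(nrm_a * nrm_b[idx], axis=1), -1.0, 1.0)
        normal_diff = np.arccos(cos_sim) / np.pi             # 0 (same) .. 1 (opposite)
        d = w_pos * dist + w_normal * normal_diff
        return d.max(), d.mean()

    # Toy example: two noisy samplings of the unit sphere (points double as normals).
    a = np.random.randn(2000, 3)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b = np.random.randn(2000, 3)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    print(weighted_closeness(a, a, b, b))
    ```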

  14. Radiosity diffusion model in 3D

    Science.gov (United States)

    Riley, Jason D.; Arridge, Simon R.; Chrysanthou, Yiorgos; Dehghani, Hamid; Hillman, Elizabeth M. C.; Schweiger, Martin

    2001-11-01

    We present the Radiosity-Diffusion model in three dimensions (3D), as an extension to previous work in 2D. It is a method for handling non-scattering spaces in optically participating media. We present the extension of the model to 3D, including modifications to cope with the increased complexity of the 3D domain. We show that in 3D more careful consideration must be given to the issues of meshing and visibility to model the transport of light within reasonable computational bounds. We demonstrate the model to be comparable to Monte-Carlo simulations for selected geometries, and show preliminary results of comparisons to measured time-resolved data acquired on resin phantoms.
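
    For context, the diffusion model referred to here is the standard diffusion approximation to the radiative transport equation used in optical tomography; in the steady state it takes the form below (textbook notation, not the paper's specific discretisation).

    ```latex
    % Steady-state diffusion approximation: photon density \Phi, absorption \mu_a,
    % reduced scattering \mu_s', isotropic source q_0, diffusion coefficient \kappa.
    -\nabla \cdot \bigl(\kappa(\mathbf{r}) \, \nabla \Phi(\mathbf{r})\bigr)
      + \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = q_0(\mathbf{r}),
    \qquad
    \kappa(\mathbf{r}) = \frac{1}{3\bigl(\mu_a(\mathbf{r}) + \mu_s'(\mathbf{r})\bigr)}
    ```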

  15. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh; Hadwiger, Markus; Ben Romdhane, Mohamed; Behzad, Ali Reza; Madhavan, Poornima; Nunes, Suzana Pereira

    2016-01-01

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore

  16. 3D-printed Bioanalytical Devices

    Science.gov (United States)

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-01-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  17. Eyes on the Earth 3D

    Science.gov (United States)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists and the general public a real-time, interactive 3D means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where it will be up to a year in the future. The software also displays several Earth science data sets that have been collected on a daily basis. This application uses a third-party real-time interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  18. Expedient Gap Definition Using 3D LADAR

    National Research Council Canada - National Science Library

    Edwards, Lulu; Jersey, Sarah R

    2006-01-01

    .... Army Engineer Research and Development Center (ERDC), ASI has developed an algorithm to reduce the 3D point cloud acquired with the LADAR system into sets of 2D profiles that describe the terrain...

  19. 3D modeling of the marine relief

    OpenAIRE

    Mànuel-González, Bernat; Garcia Benadí, Albert; Río Fernandez, Joaquín del; Cadena Muñoz, Francisco Javier; Manuel Lázaro, Antonio

    2012-01-01

    The article details the systematic process for transforming a 2D representation into a 3D representation, as well as the systematic process for gathering the data, and the considerations and instrumentation necessary for this task. Peer Reviewed

  20. 3D Visualization for Planetary Missions

    Science.gov (United States)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.