WorldWideScience

Sample records for 3d object depth

  1. Combining depth and color data for 3D object recognition

    Science.gov (United States)

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data are pre-processed in several ways, and the resulting representation is used to train a RAM neural network (a memory-based reasoning approach) to detect different components within TV sets. The shape recognition architecture has been implemented and tested in a demonstration setup.

  2. 3D Object Recognition and Facial Identification Using Time-averaged Single-views from Time-of-flight 3D Depth-Camera

    OpenAIRE

    Ding, Hui; Moutarde, Fabien; Shaiek, Ayet

    2010-01-01

    We report here on feasibility evaluation experiments for 3D object recognition and person facial identification from single views of real depth images acquired with an “off-the-shelf” 3D time-of-flight depth camera. Our methodology is the following: for each person or object, we perform 2 independent recordings, one used for learning and the other one for test purposes. For each recorded frame, a 3D mesh is computed by simple triangulation from the filtered depth imag...

  3. Real-time depth map manipulation for 3D visualization

    Science.gov (United States)

    Ideses, Ianir; Fishbain, Barak; Yaroslavsky, Leonid

    2009-02-01

    One of the key aspects of 3D visualization is the computation of depth maps. Depth maps enable the synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct its artificial stereo pair. There are many known methods for computing the optical flow between adjacent video frames. The drawback of these methods is that they require extensive computation power and are not well suited to high-quality real-time 3D rendering. One efficient method for computing depth maps is the extraction of motion vector information from standard video encoders. In this paper we present methods to improve the quality of 3D visualization obtained from compression codecs by spatial/temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatio-temporal local order statistics, such as the median, and of local adaptive filtering in the 3D-DCT domain can substantially improve the quality of depth maps, and consequently of the 3D video, while retaining real-time rendering. Real-time performance is achieved by utilizing multi-core technology using standard parallelization algorithms and libraries (OpenMP, IPP).
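
    The kind of spatio-temporal order-statistics filtering mentioned above can be illustrated with a short sketch. The snippet below is only a rough stand-in, not the authors' pipeline: it applies a 3x3x3 median filter over a stack of depth maps with SciPy, and the array shape and noise model are placeholder assumptions.

    ```python
    # Illustrative sketch only: smooth a stack of per-frame depth maps with a
    # spatio-temporal (t, y, x) median filter, one of the order-statistics
    # operations mentioned in the abstract; not the authors' implementation.
    import numpy as np
    from scipy.ndimage import median_filter

    # Placeholder data: 30 frames of 240x320 depth maps with simulated noise.
    rng = np.random.default_rng(0)
    depth_stack = rng.normal(loc=2.0, scale=0.05, size=(30, 240, 320)).astype(np.float32)

    # A 3x3x3 median over (time, row, column) suppresses impulsive depth errors
    # while preserving depth discontinuities better than linear smoothing.
    filtered = median_filter(depth_stack, size=(3, 3, 3))

    print(depth_stack.std(), filtered.std())  # spread should drop after filtering
    ```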

  4. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  5. Advanced 3D Object Identification System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optra will build an Advanced 3D Object Identification System utilizing three or more high resolution imagers spaced around a launch platform. Data from each imager...

  6. Lifting Object Detection Datasets into 3D.

    Science.gov (United States)

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  7. Algorithms for 3D shape scanning with a depth camera.

    Science.gov (United States)

    Cui, Yan; Schuon, Sebastian; Thrun, Sebastian; Stricker, Didier; Theobalt, Christian

    2013-05-01

    We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a Time-of-Flight (ToF) camera. These ToF cameras can measure depth scans at video rate. Due to their comparably simple technology, they bear potential for economical production in large volumes. Our easy-to-use, cost-effective scanning solution, which is based on such a sensor, could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can also be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.

  8. 3D-PRINTING OF BUILD OBJECTS

    Directory of Open Access Journals (Sweden)

    SAVYTSKYI M. V.

    2016-03-01

    Raising of the problem. Today, in all spheres of our life we can observe a permanent search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand, more effective in terms of conserving the exhaustible resources of our planet and have minimal impact on the environment, and on the other hand, ensure a higher quality of the final product. Construction is no exception. One new, promising technology is the technology of 3D printing of individual structures and of buildings in general. 3D printing is the process of recreating a real object from a 3D model. Unlike a conventional printer, which prints information on a sheet of paper, a 3D printer allows three-dimensional information to be reproduced, i.e. it creates physical objects. Currently, 3D printers find application in many areas of production: machine-building elements, a variety of mock-ups, interior elements, and various other items. But because this technology is fairly new, it requires the creation of detailed and accurate technologies, efficient equipment and materials, and the development of a common vocabulary and regulatory framework in this field. Research aim. The analysis of existing methods of creating physical objects using 3D printing and the improvement of technology and equipment for the printing of buildings and structures. Conclusion. Building 3D printers are a new generation of equipment for the construction of buildings, structures, and structural elements. The variety of building printing techniques opens up a wide range of opportunities in the construction industry. At this stage, printer designs allow the creation of low-rise buildings of different configurations with different mortars. The scientific novelty of this work is the development of proposals to improve the thermal insulation properties of 3D-printed objects and the related technological equipment. The list of key terms and notions of construction

  9. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    Science.gov (United States)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We will report our recent developments in the DFD (depth-fused 3D) display and the arc 3D display, both of which have smooth movement parallax. Firstly, the fatigueless DFD display, composed of only two layered displays with a gap, has continuous perceived depth obtained by changing the luminance ratio between the two images. Two new methods, called "edge-based DFD display" and "deep DFD display", have been proposed in order to solve two severe problems of viewing-angle and perceived-depth limitations. The edge-based DFD display, layered from an original 2D image and its edge part with a gap, can expand the DFD viewing angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Secondly, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.

  10. Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection

    International Nuclear Information System (INIS)

    This work proposes a novel approach to segmenting randomly stacked objects in unstructured 3D point clouds, which are acquired by a random-speckle 3D imaging system for the purpose of automated object detection and reconstruction. An innovative algorithm is proposed; it is based on a novel concept of 3D watershed segmentation and the strategies for resolving over-segmentation and under-segmentation problems. Acquired 3D point clouds are first transformed into a corresponding orthogonally projected depth map along the optical imaging axis of the 3D sensor. A 3D watershed algorithm based on the process of distance transformation is then performed to detect the boundary, called the edge dam, between stacked objects and thereby to segment point clouds individually belonging to two stacked objects. Most importantly, an object-matching algorithm is developed to solve the over- and under-segmentation problems that may arise during the watershed segmentation. The feasibility and effectiveness of the method are confirmed experimentally. The results reveal that the proposed method is a fast and effective scheme for the detection and reconstruction of a 3D object in a random stack of such objects. In the experiments, the precision of the segmentation exceeds 95% and the recall exceeds 80%. (paper)
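
    The core distance-transform-plus-watershed step described above can be sketched compactly. The snippet below is a simplified stand-in that omits the paper's edge-dam detection and its object-matching correction of over- and under-segmentation; the threshold and peak-spacing parameters are assumptions.

    ```python
    # Simplified sketch of depth-map watershed segmentation; the paper's edge-dam
    # detection and over-/under-segmentation matching steps are omitted.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_depth_map(depth, foreground_thresh):
        """depth: 2D array of orthogonally projected depth values."""
        # Foreground of stacked objects (flip the comparison for the opposite depth convention).
        mask = depth > foreground_thresh
        distance = ndi.distance_transform_edt(mask)   # distance to the background
        # Local maxima of the distance map act as one seed per object candidate.
        peaks = peak_local_max(distance, min_distance=10, labels=mask)
        markers = np.zeros(depth.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flooding the inverted distance map separates touching objects at "dams".
        return watershed(-distance, markers, mask=mask)
    ```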

  11. The 3D Object Mediator : Handling 3D Models on Internet

    NARCIS (Netherlands)

    Kok, A.J.F.; Lawick van Pabst, J. van; Afsarmanesh, H.

    1997-01-01

    The 3D Object MEdiator (3DOME) offers two services for handling 3D models: a modelshop and a renderfarm. These services can be consulted through the Internet. The modelshop meets the demand for brokerage of geometric descriptions of 3D models. People who create geometric models of objects can sup

  12. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected into the mouse body. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  13. Combining different modalities for 3D imaging of biological objects

    International Nuclear Information System (INIS)

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into the mouse body. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if X-ray tomography is used, as presented in the paper

  14. Holoscopic 3D image depth estimation and segmentation techniques

    OpenAIRE

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  15. Advanced 3D Object Identification System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — During the Phase I effort, OPTRA developed object detection, tracking, and identification algorithms and successfully tested these algorithms on computer-generated...

  16. Bimanual Volume Perception of 3-D Objects

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kahrimanovic, M.; Kappers, A.M.L.

    2011-01-01

    In the present study, blindfolded subjects had to explore differently shaped objects with two hands and to judge their volume. The results showed a significant effect of the shape of objects on their perceived volume. Additional analysis showed that this effect could not be explained by the subjects

  17. 3D hand tracking using Kalman filter in depth space

    Science.gov (United States)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
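
    The tracking stage described above amounts to a standard Kalman filter over 3D hand positions. The sketch below assumes a constant-velocity state model and hand-picked noise covariances; the detection stage (motion clusters, wave gesture) is not shown, and this is not the authors' exact formulation.

    ```python
    # Minimal constant-velocity Kalman filter over a 3D hand position measured in
    # depth space; the hand-detection stage (motion clusters, wave gesture) is omitted.
    import numpy as np

    dt = 1.0 / 30.0                               # assumed depth-camera frame rate
    F = np.eye(6)                                 # state: [x, y, z, vx, vy, vz]
    F[:3, 3:] = dt * np.eye(3)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
    Q = 1e-3 * np.eye(6)                          # process noise (tuning assumption)
    R = 1e-2 * np.eye(3)                          # measurement noise (tuning assumption)

    def kalman_step(x, P, z):
        """One predict/update cycle for a new 3D measurement z = [x, y, z]."""
        x_pred = F @ x                            # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                  # update
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(6) - K @ H) @ P_pred
        return x_new, P_new

    # Feed noisy 3D detections frame by frame (synthetic example data).
    x, P = np.zeros(6), np.eye(6)
    for z in np.random.default_rng(1).normal([0.1, 0.2, 1.5], 0.01, size=(10, 3)):
        x, P = kalman_step(x, P, z)
    print(x[:3])                                  # smoothed 3D hand position estimate
    ```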

  18. 3D Image Synthesis for B-Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; 等

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-rep objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space to a 3D image in 3D discrete space.

  19. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...... interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract...

  20. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    Science.gov (United States)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD ®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), promoted by Adobe, has found a wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create movable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  1. Acquiring 3-D Spatial Data Of A Real Object

    Science.gov (United States)

    Wu, C. K.; Wang, D. Q.; Bajcsy, R. K...

    1983-10-01

    A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate the method implemented is capable of measuring the spatial data of a real object with satisfactory accuracy.

  2. Several Strategies on 3D Modeling of Manmade Objects

    Institute of Scientific and Technical Information of China (English)

    SHAO Zhenfeng; LI Deren; CHENG Qimin

    2004-01-01

    Several different strategies of 3D modeling are adopted for different kinds of manmade objects. Firstly, for manmade objects with regular structure, if 2D information is available and elevation information can be obtained conveniently, their 3D modeling can be executed directly. Secondly, for manmade objects with comparatively complicated structure for which a related stereo image pair can be acquired, we complete their 3D modeling on the basis of a topology-based 3D model by integrating automatic and semi-automatic object extraction. Thirdly, for the most complicated objects, whose geometrical information cannot be obtained completely from stereo image pairs, we turn to a topological 3D model based on CAD.

  3. 2D but not 3D: pictorial-depth deficits in a case of visual agnosia.

    Science.gov (United States)

    Turnbull, Oliver H; Driver, Jon; McCarthy, Rosaleen A

    2004-01-01

    Patients with visual agnosia exhibit acquired impairments in visual object recognition that may or may not involve deficits in low-level perceptual abilities. Here we report a case (patient DM) who after head injury presented with object-recognition deficits. He still appears able to extract 2D information from the visual world in a relatively intact manner, but his ability to extract pictorial information about 3D object structure is greatly compromised. His copying of line drawings is relatively good, and he is accurate and shows apparently normal mental rotation when matching or judging objects tilted in the picture plane. But he performs poorly on a variety of tasks requiring 3D representations to be derived from 2D stimuli, including: performing mental rotation in depth, rather than in the picture plane; judging the relative depth of two regions depicted in line drawings of objects; and deciding whether a line drawing represents an object that is 'impossible' in 3D. Interestingly, DM failed to show several visual illusions experienced by normal observers (Müller-Lyer and Ponzo) that some authors have attributed to pictorial depth cues. Taken together, these findings indicate a deficit in achieving 3D interpretations of objects from 2D pictorial cues, which may contribute to object-recognition problems in agnosia.

  4. DESIGN OF 3D TOPOLOGICAL DATA STRUCTURE FOR 3D CADASTRE OBJECTS

    Directory of Open Access Journals (Sweden)

    N. A. Zulkifli

    2016-09-01

    This paper describes the design of 3D modelling and a topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.

  5. Design of 3d Topological Data Structure for 3d Cadastre Objects

    Science.gov (United States)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

    This paper describes the design of 3D modelling and a topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.

  6. Automation of 3D micro object handling process

    OpenAIRE

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due to the small absolute scale. In this article, the results of the pick-and-place operations of three different 3D micro objects were investigated. This study shows that depending on the correct gripping t...

  7. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
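
    The plane-fitting ingredient of the approach above can be illustrated with a basic RANSAC loop over reliable 3D points. The sketch below is a generic stand-in: it does not reproduce the paper's noise-adaptive fitting or the MRF/graph-cut depth completion, and the tolerance and iteration count are assumptions.

    ```python
    # Sketch: fit one dominant plane (e.g. the floor) to reliable back-projected
    # depth points with a basic RANSAC loop.
    import numpy as np

    def ransac_plane(points, n_iters=500, inlier_tol=0.02, rng=None):
        """points: (N, 3) array of 3D points in metres; returns (normal, d) and an inlier mask."""
        if rng is None:
            rng = np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                      # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ sample[0]
            inliers = np.abs(points @ normal + d) < inlier_tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        return best_model, best_inliers
    ```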

  8. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases, that combines active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  9. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    Science.gov (United States)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. This approach of 3D plant object visualisation is primarily evident from the visualisation of plants using photographed billboarded images, to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  10. Efficient and high speed depth-based 2D to 3D video conversion

    Science.gov (United States)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D Television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic to stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not comparable to real-time, is faster than the other available 2D to 3D video conversion techniques in the average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, and hence directly gives the 3D video output without human intervention; with the above-mentioned features it becomes an ideal choice for efficient monoscopic to stereoscopic video conversion.
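
    The bottom-pixel heuristic above can be illustrated with a toy sketch: each foreground object gets a depth from the image row of its lowest pixel, and a second view is synthesized by a simple disparity shift. The label map, the depth mapping and the hole handling are assumptions, not the paper's exact procedure.

    ```python
    # Toy sketch of the bottom-pixel depth heuristic plus a naive disparity shift
    # for synthesizing the second view.
    import numpy as np

    def depth_from_bottom_pixels(labels):
        """labels: (H, W) int array, 0 = background, k > 0 = k-th foreground object.
        Objects whose lowest pixel sits lower in the frame are treated as nearer."""
        h = labels.shape[0]
        depth = np.ones(labels.shape, dtype=np.float32)           # background = far (1.0)
        for k in range(1, labels.max() + 1):
            ys, _ = np.nonzero(labels == k)
            if len(ys):
                depth[labels == k] = 1.0 - ys.max() / float(h)     # lower bottom -> nearer
        return depth

    def synthesize_second_view(image, depth, max_disparity=16):
        """Shift pixels horizontally in proportion to nearness (1 - depth); holes stay empty."""
        h, w = depth.shape
        out = np.zeros_like(image)
        disparity = (max_disparity * (1.0 - depth)).astype(int)
        for y in range(h):
            for x in range(w):
                xs = x - disparity[y, x]
                if 0 <= xs < w:
                    out[y, xs] = image[y, x]
        return out
    ```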

  11. Encryption of 3D Point Cloud Object with Deformed Fringe

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2016-01-01

    A 3D point cloud object encryption method is proposed in this study. With this method, a mapping relationship between 3D coordinates is formulated and the Z coordinate is transformed into a deformed fringe by a phase coding method. The deformed fringe and a gray image are used for encryption and decryption with a simulated off-axis digital Fresnel hologram. Results indicate that the proposed method is able to accurately decrypt the coordinates and gray image of the 3D object. The method is also robust against occlusion attacks.
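
    The "deformed fringe" step can be illustrated with a standard phase-coding sketch in which the Z channel modulates the phase of a sinusoidal carrier; the carrier period and gain below are assumptions, and the subsequent off-axis digital Fresnel hologram encryption is not reproduced.

    ```python
    # Illustration of phase-coding a normalized depth (Z) channel into a deformed
    # fringe: a sinusoidal carrier whose phase is shifted by Z.
    import numpy as np

    def depth_to_deformed_fringe(z, carrier_period=16.0, phase_gain=2 * np.pi):
        """z: (H, W) depth map normalized to [0, 1]; returns a fringe image in [0, 1]."""
        x = np.arange(z.shape[1])[None, :]            # column index as the carrier axis
        phase = 2 * np.pi * x / carrier_period + phase_gain * z
        return 0.5 + 0.5 * np.cos(phase)

    # Decoding would recover the phase (e.g. by Fourier fringe analysis) and
    # subtract the carrier term to get back phase_gain * z.
    ```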

  12. Embedding objects during 3D printing to add new functionalities.

    Science.gov (United States)

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  14. Monocular model-based 3D tracking of rigid objects

    CERN Document Server

    Lepetit, Vincent

    2014-01-01

    Many applications require tracking complex 3D objects. These include visual servoing of robotic arms on specific target objects, Augmented Reality systems that require real-time registration of the object to be augmented, and head tracking systems that sophisticated interfaces can use. Computer vision offers solutions that are cheap, practical and non-invasive. "Monocular Model-Based 3D Tracking of Rigid Objects" reviews the different techniques and approaches that have been developed by industry and research. First, important mathematical tools are introduced: camera representation, robust e

  15. Object Recognition Using a 3D RFID System

    OpenAIRE

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done by vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object as well as other tags exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  16. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    Science.gov (United States)

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  17. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    Directory of Open Access Journals (Sweden)

    L. Gao

    2015-08-01

    With the development of the economy, the fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, spatial database updating, etc. 3D GIS has attracted more and more attention from academia, industry and government with the increase of requirements for the interoperability and integration of different sources of data. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects created by integrating 2D Road Polygon data with a LiDAR point cloud and other height information, such as Spot Height data, for Hong Kong Island. The quality of the created 3D road polygon data set is evaluated in terms of vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error and completeness error, and the final results are presented.

  18. Semantic 3D object maps for everyday robot manipulation

    CERN Document Server

    Rusu, Radu Bogdan

    2013-01-01

    The book written by Dr. Radu B. Rusu presents a detailed description of 3D Semantic Mapping in the context of mobile robot manipulation. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models that include the objects present in the world, together with their position, form, and other semantic aspects, as well as interpretations of these objects with respect to the robot tasks.   The book proposes novel 3D feature representations called Point Feature Histograms (PFH), as well as frameworks for the acquisition and processing of Semantic 3D Object Maps with contributions to robust registration, fast segmentation into regions, and reliable object detection, categorization, and reconstruction. These contributions have been fully implemented and empirically evaluated on different robotic systems, and have been the original kernel to the widely successful open-source project the Point Cloud Library (PCL) -- see http://poi...

  19. Automation of 3D micro object handling process

    DEFF Research Database (Denmark)

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due...... to the small absolute scale. In this article, the results of the pick-and-place operations of three different 3D micro objects were investigated. This study shows that depending on the correct gripping tool design as well as handling and assembly scenarios, a high success rate of up to 99% repeatability can...

  20. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    Science.gov (United States)

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that images are presented on a unique surface and specifying the relative depth of that surface. The center of interest of this study is a second problem, which arises when a 3D object's distance differs from the display distance. As binocular disparity must be scaled using an estimate of viewing distance, object depth can thus be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated for farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was bigger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open-loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by accommodation and vergence distances and that the accommodation distance effect was reduced with a larger depth-of-focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.

  1. Modeling real conditions of 'Ukrytie' object in 3D measurement

    International Nuclear Information System (INIS)

    The article covers a technology for creating, on the basis of design software (AutoCAD) and the computer graphics and animation packages 3D Studio and 3DS MAX, a 3D model of the geometrical parameters of the current condition of the building structures, technological equipment, fuel-containing materials, concrete and water of the ruined Unit 4 ('Ukryttia' object) of the Chernobyl NPP. The model built using the above technology will be applied in the future as a basis for automating the design and computer modeling of processes at the 'Ukryttia' object

  2. Recognition of 3-D Scene with Partially Occluded Objects

    Science.gov (United States)

    Lu, Siwei; Wong, Andrew K. C...

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.

  3. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    Science.gov (United States)

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images. PMID:27410090
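
    The global stage of such depth remapping can be sketched as a linear rescaling of the source depth range into a target range chosen as comfortable around the zero-disparity plane; the target values below are placeholders, and the paper's two-stage global-plus-local optimization is not reproduced.

    ```python
    # Sketch of global depth-range remapping only; the local optimization stage
    # described in the paper is not reproduced.
    import numpy as np

    def remap_depth(depth, target_near, target_far):
        """Linearly rescale a depth map into [target_near, target_far]."""
        d_min, d_max = float(depth.min()), float(depth.max())
        t = (depth - d_min) / max(d_max - d_min, 1e-9)
        return target_near + t * (target_far - target_near)

    # Example: squeeze an 8-bit depth map into a narrower range centred on the
    # display plane (the target values are illustrative, not from the paper).
    depth8 = np.random.default_rng(2).integers(0, 256, size=(480, 640)).astype(np.float32)
    comfortable = remap_depth(depth8, target_near=96.0, target_far=160.0)
    ```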

  4. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    to depth blocks featuring arbitrarily-shaped edges. Edge information is encoded exploiting previously coded edge blocks. Integrated in H.264/AVC, the proposed mode allows significant bit rate savings compared with a number of state-of-the-art depth codecs. View synthesis performances are also improved...

  5. 3-D Object Recognition from Point Cloud Data

    Science.gov (United States)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
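
    One common way to derive the DEM from the DSM, offered here only as a generic illustration and not necessarily the software's method, is a grey-scale morphological opening whose window exceeds the largest building footprint; the normalized DSM (DSM minus DEM) then isolates above-ground objects such as buildings and trees.

    ```python
    # Generic DSM-to-DEM sketch via grey-scale morphological opening; the window
    # size is an assumption and must exceed the largest building footprint.
    import numpy as np
    from scipy.ndimage import grey_opening

    def dsm_to_dem(dsm, window_pixels=51):
        """dsm: (H, W) elevation raster; returns an approximate bare-earth DEM."""
        return grey_opening(dsm, size=(window_pixels, window_pixels))

    def normalized_dsm(dsm, window_pixels=51):
        """Heights above ground; thresholding this isolates buildings and trees."""
        return dsm - dsm_to_dem(dsm, window_pixels)
    ```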

  6. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    Science.gov (United States)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains texture data with considerable noise. We used the texture maps for extracting tiepoints, and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
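
    The 3D similarity transformation between subsequent point clouds has a well-known closed-form solution from matched 3D tiepoints (the Umeyama method); the sketch below is a generic solver under that assumption, not the authors' exact implementation.

    ```python
    # Closed-form (Umeyama) estimate of the 3D similarity transform mapping one
    # set of 3D tiepoints onto another.
    import numpy as np

    def similarity_transform(src, dst):
        """src, dst: (N, 3) matched 3D tiepoints. Returns scale s, rotation R (3x3)
        and translation t (3,) such that dst_i ~ s * R @ src_i + t."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)
        U, S, Vt = np.linalg.svd(cov)
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:             # avoid reflections
            D[2, 2] = -1.0
        R = U @ D @ Vt
        var_src = (src_c ** 2).sum() / len(src)
        s = np.trace(np.diag(S) @ D) / var_src
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```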

  7. Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge

    OpenAIRE

    Ben Hmida, Helmi; Cruz, Christophe; Boochs, Frank; Nicolle, Christophe

    2013-01-01

    This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins, aiming at combining geometrical analysis of 3D point clouds and specialists' knowledge. Here, we share our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. Thi...

  8. Manipulating 3D Objects with Gaze and Hand Gestures

    OpenAIRE

    Koskenranta, Olli

    2012-01-01

    Gesture-based interaction in consumer electronics is becoming more popular these days, for example, when playing games with Microsoft Kinect, PlayStation 3 Move and Nintendo Wii. The objective of this thesis was to find out how to use gaze and hand gestures for manipulating objects in a 3D space for the best user experience possible. This thesis was made at the University of Oulu, Center for Internet Excellence and was a part of the research project “Chiru”. The goal was to research and p...

  9. EFFICIENT IMPLEMENTATION OF 3D FILTER FOR MOVING OBJECT EXTRACTION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper the design and implementation of a Multi-Dimensional (MD) filter, particularly a 3-Dimensional (3D) filter, are presented. Digital (discrete-domain) filters applied to image and video signal processing using novel 3D multirate algorithms for efficient implementation of moving object extraction are engineered with an example. The multirate (decimation and/or interpolation) signal processing algorithms can achieve significant savings in computation and memory usage. The proposed algorithm uses the mapping relations of z-transfer functions between non-multirate and multirate mathematical expressions in terms of time-varying coefficients instead of traditional polyphase decomposition counterparts. The mapping properties can readily be used to efficiently analyze and synthesize MD multirate filters.

  10. Weighted Unsupervised Learning for 3D Object Detection

    Directory of Open Access Journals (Sweden)

    Kamran Kowsari

    2016-01-01

    This paper introduces a novel weighted unsupervised learning method for object detection using an RGB-D camera. This technique is feasible for detecting moving objects in noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object, using weighted clustering, as a separate cluster. In a preprocessing step, the algorithm calculates the 3D position (X, Y, Z) and RGB color of each data point, and then calculates each data point's normal vector using the point's neighbors. After preprocessing, our algorithm calculates k weights for each data point; each weight indicates cluster membership. This results in the clustered objects of the scene.
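
    The weighted clustering idea can be illustrated with a small weighted k-means over stacked per-point features (position, color, normal); the feature weights, the choice of k and the plain-NumPy implementation below are assumptions standing in for the paper's weighting scheme.

    ```python
    # Small weighted k-means over stacked per-point features (XYZ, RGB, normal).
    import numpy as np

    def weighted_kmeans(features, weights, k, n_iters=50, seed=0):
        """features: (N, D) array; weights: (D,) per-feature importance."""
        rng = np.random.default_rng(seed)
        f = features * weights                    # scale features once up front
        centers = f[rng.choice(len(f), size=k, replace=False)]
        for _ in range(n_iters):
            d2 = ((f[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            assign = d2.argmin(axis=1)
            for j in range(k):
                if np.any(assign == j):
                    centers[j] = f[assign == j].mean(axis=0)
        return assign

    # Example layout: 3 position + 3 color + 3 normal components, position weighted
    # most heavily (weights are illustrative, not taken from the paper):
    # assign = weighted_kmeans(np.hstack([xyz, rgb, normals]),
    #                          np.array([1, 1, 1, .3, .3, .3, .5, .5, .5]), k=5)
    ```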

  11. Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map

    Directory of Open Access Journals (Sweden)

    Sen Han

    2016-06-01

    Full Text Available Multiple description (MD coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with the existing multiview image, it can be efficient to synthesize images of any virtual viewpoint position, which can display more realistic 3D scenes. Differently from the conventional 2D texture image, the depth map contains a lot of spatial redundancy information, which is not necessary for view synthesis, but may result in the waste of compressed bits, especially when using MD coding for robust transmission. In this paper, we focus on the redundancy removal of MD coding based on the DCT (discrete cosine transform domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the amounts of high frequency coefficients in the DCT domain to be removed. It is noted considering the low computing complexity that the entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results have shown that compared to the corresponding scheme, the proposed method demonstrates better rate central and side distortion performance.

  12. Divided attention limits perception of 3-D object shapes.

    Science.gov (United States)

    Scharff, Alec; Palmer, John; Moore, Cathleen M

    2013-01-01

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.

  13. Objective and subjective quality assessment of geometry compression of reconstructed 3D Humans in a 3D virtual room

    NARCIS (Netherlands)

    Mekuria, R.N.; Cesar Garcia, P.S.; Frisiello, A.; Doumanis, I.

    2015-01-01

    Compression of 3D object based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs

  14. Objective 3D face recognition: Evolution, approaches and challenges.

    Science.gov (United States)

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. However, these identifications are based on human perception, which is often flawed and can lead to situations where identity is disputed. There is therefore a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background, such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification for establishing archival reference databases to compare and evaluate different strategies.

  15. A new method to create depth information based on lighting analysis for 2D/3D conversion

    Institute of Scientific and Technical Information of China (English)

    Hyunho Han; Gangseong Lee; Jongyong Lee; Jinsoo Kim; Sanghun Lee

    2013-01-01

    A new method for creating depth information for 2D/3D conversion is proposed. The relative distance between objects is determined from the distances between the objects and the light source position, which is estimated by analyzing the image. The estimated lighting value is used to normalize the image. A threshold value is determined by a weighted operation between the original image and the normalized image, and applying this threshold to the original image removes the background area. Depth information for the area of interest is then calculated from the lighting changes. The 3D images converted with the proposed method are used to verify its effectiveness.

  16. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort has been the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has considerable potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.

  17. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
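
    To illustrate the DIBR principle surveyed in this book, the sketch below shifts pixels horizontally by a disparity derived from per-pixel depth; the baseline and focal-length values are assumptions, and hole filling, camera calibration and occlusion handling are omitted.

    import numpy as np

    def render_virtual_view(image, depth, baseline=0.05, focal=500.0):
        # Disparity (in pixels) is inversely proportional to depth.
        h, w = depth.shape
        disparity = (baseline * focal / np.maximum(depth, 1e-6)).astype(int)
        virtual = np.zeros_like(image)
        xs = np.arange(w)
        for y in range(h):
            new_x = np.clip(xs - disparity[y], 0, w - 1)
            virtual[y, new_x] = image[y, xs]   # no z-buffering: nearer pixels may be overwritten
        return virtual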

  18. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    Science.gov (United States)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most resource-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, owing to its memory/storage and I/O requirements. In the current research work, we target this area and develop an efficient parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in the parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
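
    The flexi-depth idea of processing the image volume in as many depth slabs as the node memory allows can be sketched as below; the memory model and the migrate_slab routine are placeholders, not the authors' MPI+OpenMP implementation.

    def plan_depth_iterations(nz_total, bytes_per_depth_slice, node_memory_bytes):
        # Size each flexi-depth iteration so that its image slab fits in node memory.
        slices_per_iter = max(1, node_memory_bytes // bytes_per_depth_slice)
        return [(z0, min(z0 + slices_per_iter, nz_total))
                for z0 in range(0, nz_total, slices_per_iter)]

    def migrate_volume(traces, nz_total, bytes_per_depth_slice, node_memory_bytes, migrate_slab):
        # migrate_slab is a placeholder for the Kirchhoff summation over one depth slab.
        for z0, z1 in plan_depth_iterations(nz_total, bytes_per_depth_slice, node_memory_bytes):
            migrate_slab(traces, z0, z1)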

  19. A Prototypical 3D Graphical Visualizer for Object-Oriented Systems

    Institute of Scientific and Technical Information of China (English)

    1996-01-01

    This paper describes a framework for visualizing object-oriented systems within a 3D interactive environment. The 3D visualizer represents the structure of a program as a Cylinder Net that simultaneously depicts two relationships between objects within a 3D virtual space. Additionally, it represents further relationships on demand when objects are moved into local focus. The 3D visualizer is implemented using a 3D graphics toolkit, TOAST, which provides 3D widgets to ease the programming task of 3D visualization.

  20. Modeling 3D Objects for Navigation Purposes Using Laser Scanning

    Directory of Open Access Journals (Sweden)

    Cezary Specht

    2016-07-01

    Full Text Available The paper discusses the creation of 3D models and their applications in navigation. It contains a review of available methods and geometric data sources, focusing mostly on terrestrial laser scanning. It presents a detailed description, from field survey to numerical elaboration, of how to construct an accurate model of a typical few-storey building as a hypothetical reference for navigation inside complex buildings. Finally, the paper outlines the fields where 3D models are being used and their potential new applications.

  1. Incipit 3D documentation projects: some examples and objectives

    OpenAIRE

    Mañana-Borrazás, Patricia

    2013-01-01

    Presentation of the author and of Incipit and its approach to the use of new technologies applied to the 3D documentation of heritage, with special attention to the challenges posed by this kind of technology, given at the "Virtual Heritage School on Digital Cultural Heritage 2013 (3D documentation, knowledge repositories and creative industries)", Nicosia, 30 May 2013.

  2. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Science.gov (United States)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  3. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  4. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  5. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.
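
    For reference, a disparity channel like the one used in the 2.5D (color and disparity) data can be computed from a rectified stereo pair with a standard semi-global matcher; the file names and parameter values below are illustrative, not those used on KITTI.

    import cv2

    left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
    right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                    P1=8 * 5 * 5, P2=32 * 5 * 5)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype('float32') / 16.0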

  6. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    Science.gov (United States)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of the connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify the topological relations of planar segments of point clouds automatically.
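
    As a 2D point of reference for the 9-Intersection formalism discussed above, the DE-9IM relation between two planar regions can be queried with the Shapely library; the polygons are toy examples, and the paper's extension to components of 3D B-Rep models is not reproduced here.

    from shapely.geometry import Polygon

    a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
    b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])

    print(a.relate(b))                       # DE-9IM string, e.g. '212101212' for these overlapping squares
    print(a.disjoint(b), a.touches(b), a.intersects(b))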

  7. Depth and Intensity Gabor Features Based 3D Face Recognition Using Symbolic LDA and AdaBoost

    Directory of Open Access Journals (Sweden)

    P. S. Hiremath

    2013-11-01

    Full Text Available In this paper, the objective is to investigate what contributions depth and intensity information make to the solution of the face recognition problem when expression and pose variations are taken into account, and a novel system is proposed for combining depth and intensity information in order to improve face recognition performance. In the proposed approach, local features based on Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selection scheme embedded in symbolic linear discriminant analysis (Symbolic LDA) with AdaBoost learning is proposed to select the most effective and robust features and to construct a strong classifier. Experiments are performed on three datasets, namely the Texas 3D face database, the Bosphorus 3D face database and the CASIA 3D face database, which contain face images with complex variations, including expressions, poses and long time lapses between two scans. The experimental results demonstrate the enhanced effectiveness of the proposed method. Since most of the design processes are performed automatically, the proposed approach leads to a potential prototype design for an automatic face recognition system based on the combination of depth and intensity information in face images.
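
    A minimal sketch of the Gabor feature-extraction stage on aligned depth and intensity images; the filter-bank parameters are illustrative, and the symbolic LDA selection and AdaBoost classifier from the paper are not reproduced.

    import cv2
    import numpy as np

    def gabor_features(img, scales=(7, 11, 15), orientations=4):
        # One response map per (scale, orientation) pair of the Gabor filter bank.
        responses = []
        for ksize in scales:
            for i in range(orientations):
                theta = i * np.pi / orientations
                kernel = cv2.getGaborKernel((ksize, ksize), ksize / 3.0, theta,
                                            ksize / 2.0, 0.5, 0)
                responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
        return np.stack(responses)

    # features = np.concatenate([gabor_features(depth_img), gabor_features(intensity_img)])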

  8. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    Science.gov (United States)

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We address the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, because beyond this limit the front and rear images no longer fuse into one 3-D image. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, the perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The dependence of perceived depth was evaluated when the high-frequency components of the front and rear images were cut off using a Fourier transform, at front-to-rear plane distances of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off sufficiently at the 5 cm distance, the perceived depth separated into values near the front plane and near the rear plane. However, when the images were blurred sufficiently by cutting the high-frequency components, the perceived depth showed a linear dependency on the luminance ratio. When the images were not blurred at the 10 cm distance, the perceived depth separated into values near the front plane at 0-30% of rear luminance, near the rear plane at 80-100%, and near the midpoint at 40-70%. However, when the images were blurred sufficiently, the perceived depth again showed a linear dependency on the luminance ratio.
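
    The spatial-frequency modulation used to blur the front and rear images amounts to low-pass filtering; a simple frequency-domain cutoff can be sketched as follows, with the cutoff value being an assumption.

    import numpy as np

    def lowpass(img, cutoff=0.1):
        # Zero all spatial-frequency components whose normalized radius exceeds `cutoff`.
        f = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.sqrt((yy / (h / 2.0)) ** 2 + (xx / (w / 2.0)) ** 2)
        f[radius > cutoff] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(f)))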

  9. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Science.gov (United States)

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable for extracting depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D object recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  10. Object-oriented urban 3D spatial data model organization method

    Science.gov (United States)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. The model allows city 3D models to be built quickly with logical semantic expression, and it solves the city 3D spatial information representation problems of the same location carrying multiple properties and the same property belonging to multiple locations. The spatial object structures of point, line, polygon and body are designed for a city 3D spatial database, providing a new approach to city 3D GIS modelling and organization management.
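
    The point/line/polygon/body structure described above might be organized as in the sketch below; the class and field names are illustrative, not the paper's actual schema.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Coordinate = Tuple[float, float, float]

    @dataclass
    class SpatialObject:
        object_id: str
        attributes: Dict[str, str] = field(default_factory=dict)  # one location, many properties

    @dataclass
    class Point(SpatialObject):
        position: Coordinate = (0.0, 0.0, 0.0)

    @dataclass
    class Line(SpatialObject):
        vertices: List[Coordinate] = field(default_factory=list)

    @dataclass
    class PolygonFace(SpatialObject):
        boundary: List[Coordinate] = field(default_factory=list)

    @dataclass
    class Body(SpatialObject):
        faces: List[PolygonFace] = field(default_factory=list)    # a building volume built from faces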

  11. Learning the 3-D structure of objects from 2-D views depends on shape, not format.

    Science.gov (United States)

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-05-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  12. Depth-Based Object Tracking Using a Robust Gaussian Filter

    OpenAIRE

    Issac, Jan; Wüthrich, Manuel; Cifuentes, Cristina Garcia; Bohg, Jeannette; Trimpe, Sebastian; Schaal, Stefan

    2016-01-01

    We consider the problem of model-based 3D-tracking of objects given dense depth images as input. Two difficulties preclude the application of a standard Gaussian filter to this problem. First of all, depth sensors are characterized by fat-tailed measurement noise. To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand. Thereby, we avoid using heuristic outlier detection methods that simply reject measurements i...

  13. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    Science.gov (United States)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of auto-stereoscopic 3D displays, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  14. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term 'wobble', results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.

  15. A Taxonomy of 3D Occluded Objects Recognition Techniques

    Science.gov (United States)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and the growth of Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion remains largely unhandled, since it disrupts the relations between the feature points extracted from an image, and research continues on efficient techniques and easy-to-use algorithms for sourcing images under occlusion. The aim of this work is to review algorithms for recognizing occluded objects and to assess their pros and cons in handling the features extracted from occluded objects, which are used to distinguish objects from other co-existing objects, by identifying techniques that can differentiate occluded fragments and sections within an image.

  16. Motion Vector Sharing and Bitrate Allocation for 3D Video-Plus-Depth Coding

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-08-01

    Full Text Available The video-plus-depth data representation uses a regular texture video enriched with the so-called depth map, providing the depth distance for each pixel. The compression efficiency is usually higher for the smooth, gray-level data representing the depth map than for classical video texture. However, improvements of the coding efficiency are still possible, taking into account the fact that the video and the depth map sequences are strongly correlated. Classically, the correlation between the texture motion vectors and the depth map motion vectors is not exploited in the coding process. The aim of this paper is to reduce the amount of information needed to describe the motion of the texture video and of the depth map sequences by sharing one common motion vector field. Furthermore, in the literature, the bitrate control scheme generally fixes the bitrate of the depth map sequence at 20% of the texture stream bitrate. However, this fixed percentage can affect the depth coding efficiency, and it should also depend on the content of each sequence. We propose a new bitrate allocation strategy between the texture and its associated per-pixel depth information. We provide a comparative analysis to measure the quality of the resulting 3D+t sequences.

  17. Estimation of foot pressure from human footprint depths using 3D scanner

    Science.gov (United States)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, footprint depth is correlated with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the difference between the maximum and minimum z-coordinates, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (DXF) into AutoCAD; the z-coordinates were then sorted from highest to lowest in Microsoft Excel to render the footprint depth in different colors. This is a qualitative study only, because no foot pressure device was used as a comparator; the resulting maximum pressures are 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsals and hallux.
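
    The two quantities described reduce to simple arithmetic on the scanned points; the file name, GRF value and contact area in the sketch below are hypothetical.

    import numpy as np

    z = np.loadtxt('footprint_points.txt', usecols=2)     # z-coordinates exported from the scan
    deepest_depth = z.max() - z.min()                     # deepest footprint point
    grf_newton, contact_area_cm2 = 700.0, 190.0           # hypothetical GRF and contact area
    mean_pressure = grf_newton / contact_area_cm2         # N/cm^2, taken to mirror the mean depth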

  18. 3D object detection from roadside data using laser scanners

    Science.gov (United States)

    Tang, Jimmy; Zakhor, Avideh

    2011-03-01

    The detection of objects on a given road path by vehicles equipped with range measurement devices is important to many civilian and military applications, such as obstacle avoidance in autonomous navigation systems. In this thesis, we develop a method to detect objects of a specific size lying on a road using an acquisition vehicle equipped with forward-looking Light Detection and Ranging (LiDAR) sensors and an inertial navigation system. We use GPS data to accurately place the LiDAR points in a world map, extract point cloud clusters protruding from the road, and detect objects of interest using weighted random forest trees. We show that our proposed method is effective in identifying objects for several road datasets collected with various object locations and vehicle speeds.

  19. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    Directory of Open Access Journals (Sweden)

    Shouyi Yin

    2015-06-01

    Full Text Available In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video.
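
    A toy version of the depth-map construction described above (a global gradient blended with a colour-based refinement, with the gradient reused for all frames of a shot) might look as follows; the bottom-near prior and the blending weight are assumptions.

    import numpy as np

    def global_depth_gradient(h, w):
        # Bottom-near / top-far prior; the paper selects the gradient per image type.
        return np.tile(np.linspace(1.0, 0.0, h)[:, None], (1, w))

    def depth_map(frame_rgb, gradient, alpha=0.7):
        # Blend the shot-level gradient with a crude colour-based refinement cue.
        luminance = frame_rgb.mean(axis=2) / 255.0
        return alpha * gradient + (1.0 - alpha) * luminance

    # Within one shot, compute the gradient once and reuse it:
    # gradient = global_depth_gradient(h, w)
    # depths = [depth_map(frame, gradient) for frame in shot_frames]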

  20. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps.

    Science.gov (United States)

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674

  1. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Directory of Open Access Journals (Sweden)

    Dennis Edler

    Full Text Available Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  2. A robotic assembly procedure using 3D object reconstruction

    DEFF Research Database (Denmark)

    Chrysostomou, Dimitrios; Bitzidou, Malamati; Gasteratos, Antonios

    -scale product delivery. This work lies within the category of intelligent assembly path planning methods and an object assembly sequence is planned to incorporate the production of an object’s volumetric model by a multi-camera system, its three-dimensional representation with octrees and its construction...

  3. Learning Spatial Relations between Objects From 3D Scenes

    DEFF Research Database (Denmark)

    Fichtl, Severin; Alexander, John; Guerin, Frank;

    2013-01-01

    Ongoing cognitive development during the first years of human life may be the result of a set of developmental mechanisms which are in continuous operation [1]. One such mechanism identified is the ability of the developing child to learn effective preconditions for their behaviours. It has been suggested [2] that through the application of behaviours involving more than one object, infants begin to learn about the relations between objects.

  4. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    OpenAIRE

    Sturm, Peter; Maybank, Steve

    1999-01-01

    International audience We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  5. 3D Spectroscopy of Herbig-Haro objects

    CERN Document Server

    López, R.; Exter, K. M.; García-Lorenzo, B.; Gómez, G.; Riera, A.; Sánchez, S. F.

    2005-01-01

    HH 110 and HH 262 are two Herbig-Haro jets with rather peculiar, chaotic morphology. In both cases, no source suitable to power the jet has been detected along the outflow, at optical or radio wavelengths. Both previous data and theoretical models suggest that these objects are tracing an early stage of an HH jet/dense cloud interaction. We present the first results of the integral field spectroscopy observations made with the PMAS spectrophotometer (with the PPAK configuration) of these two turbulent jets. New data on the kinematics in several characteristic HH emission lines are shown. In addition, line-ratio maps have been made, suitable for exploring the spatial excitation and density conditions of the jets as a function of their kinematics.

  6. A Normalization Method of Moment Invariants for 3D Objects on Different Manifolds

    Institute of Scientific and Technical Information of China (English)

    HU Ping; XU Dong; LI Hua

    2014-01-01

    3D objects can be stored in a computer in different representations, such as point sets, polylines, polygonal surfaces and Euclidean distance maps. Moment invariants of different orders may have very different magnitudes. A method for normalizing the moments of 3D objects is proposed, which sets the values of moments of different orders roughly in the same range and can be applied universally to different 3D data formats. Accurate computation of moments for several objects is then presented, and experiments show that this kind of normalization is very useful for moment invariants in 3D object analysis and recognition.
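
    For a point-set representation, the kind of scale normalization that keeps moments of different orders in a comparable range can be sketched with the standard normalized central moments; this is offered only as an illustration of the idea, not the paper's exact normalization.

    import numpy as np

    def normalized_moment(points, p, q, r):
        # Central moment mu_pqr of a 3D point set, scale-normalized so that
        # moments of different orders fall in a comparable range.
        c = points - points.mean(axis=0)
        mu = np.sum(c[:, 0] ** p * c[:, 1] ** q * c[:, 2] ** r)
        mu0 = float(len(points))                  # mu_000 for a point set
        return mu / mu0 ** (1.0 + (p + q + r) / 3.0)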

  7. OB3D, a new set of 3D Objects available for research: a web-based study

    Directory of Open Access Journals (Sweden)

    Stéphane Buffat

    2014-10-01

    Full Text Available Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc.

  8. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128-pixel-resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.

  9. Rapid and Inexpensive Reconstruction of 3D Structures for Micro-Objects Using Common Optical Microscopy

    CERN Document Server

    Berejnov, V V

    2009-01-01

    A simple method of constructing the 3D surface of non-transparent micro-objects by extending the depth of field over the whole attainable surface is presented. A series of images of a sample is recorded by sequentially moving the sample with respect to the microscope focus. Different portions of the sample surface appear in focus in different images of the series. The indexed series of the in-focus portions of the sample surface is combined into one sharp 2D image and interpolated into a 3D surface representing the surface of the original micro-object. For image acquisition and processing we use a conventional upright stage microscope that is operated manually, the inexpensive Helicon Focus software, and the open-source MeshLab software. Three objects were tested: an inclined flat glass slide with an imprinted 10 um calibration grid, a regular metal 100x100 per inch mesh, and the highly irregular surface of a material known as a porous electrode used in polyelectrolyte fuel cells. The accuracy of...
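
    The combine-the-in-focus-portions step can be sketched with a per-pixel sharpness measure over the image stack; the Laplacian-energy criterion below is a common choice, not necessarily the one used by Helicon Focus, and the slice index doubles as a coarse depth map.

    import cv2
    import numpy as np

    def shape_from_focus(stack, blur=9):
        # stack: greyscale images taken at successive focus positions (nearest first).
        sharpness = [cv2.GaussianBlur(cv2.Laplacian(img, cv2.CV_32F) ** 2, (blur, blur), 0)
                     for img in stack]
        depth_index = np.argmax(np.stack(sharpness), axis=0)       # best-focus slice per pixel
        stack_arr = np.stack(stack)
        fused = np.take_along_axis(stack_arr, depth_index[None, ...], axis=0)[0]
        return depth_index, fused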

  10. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    International Nuclear Information System (INIS)

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale

  11. Depth propagation for semi-automatic 2D to 3D conversion

    Science.gov (United States)

    Tolstaya, Ekaterina; Pohl, Petr; Rychagov, Michael

    2015-03-01

    In this paper, we present a method for the temporal propagation of depth data that is available at so-called key-frames through a video sequence. Our method requires that full-frame depth information is assigned at the key-frames. It utilizes the nearest preceding and nearest following key-frames with known depth information. Propagating depth information from two sides is essential, as it allows most occlusion problems to be solved correctly. Image matching is based on the coherency sensitive hashing (CSH) method and is done using image pyramids. The disclosed results are compared with temporal interpolation based on motion vectors from an optical flow algorithm. The proposed algorithm keeps the sharp depth edges of objects even in situations with fast motion or occlusions. It also handles well many situations where the depth edges do not perfectly correspond to the true edges of objects.
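
    Leaving the CSH-based matching aside, the two-sided temporal blend of key-frame depths can be sketched as a distance-weighted interpolation; the warp_prev/warp_next callables are placeholders for the correspondence step.

    def propagate_depth(t, t_prev, depth_prev, t_next, depth_next, warp_prev, warp_next):
        # Distance-weighted blend of the two key-frame depths, each first warped
        # (via the matching step, abstracted as warp_prev / warp_next) to frame t.
        w_next = (t - t_prev) / float(t_next - t_prev)
        w_prev = 1.0 - w_next
        return w_prev * warp_prev(depth_prev, t) + w_next * warp_next(depth_next, t)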

  12. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent;

    2014-01-01

    Object detection, recognition and pose estimation in 3D images have gained momentum due to the availability of 3D sensors (RGB-D) and the increase of large-scale 3D data, such as city maps. The most popular approach is to extract and match 3D shape descriptors that encode local scene structure, but this omits appearance. In this work, we propose an alternative appearance-driven approach which first extracts 2D primitives justified by Marr's primal sketch; these are "accumulated" over multiple views and the most stable ones are "promoted" to 3D visual primitives. The 3D promoted primitives represent both structure and appearance. For recognition, we propose a fast and effective correspondence matching using random sampling. For quantitative evaluation we construct a semi-synthetic benchmark dataset using a public 3D model dataset of 119 kitchen objects and another benchmark of challenging street-view images from 4 different cities...

  13. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  14. Interlayer Simplified Depth Coding for Quality Scalability on 3D High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available A quality scalable extension design is proposed for the upcoming 3D video extension of the emerging High Efficiency Video Coding (HEVC) standard. A novel interlayer simplified depth coding (SDC) prediction tool is added to reduce the amount of bits needed for depth map representation by exploiting the correlation between coding layers. To further improve the coding performance, the coded prediction quadtree and texture data from corresponding SDC-coded blocks in the base layer can be used in interlayer simplified depth coding. In the proposed design, the multiloop decoder solution is also extended to the proposed scalable scenario for texture views and depth maps, achieved by the interlayer texture prediction method. The experimental results indicate that an average Bjøntegaard Delta bitrate decrease of 54.4% is gained with the interlayer simplified depth coding prediction tool on the multiloop decoder solution, compared with simulcast. Consequently, the significant rate savings confirm that the proposed method achieves better performance.

  15. Gravito-Turbulent Disks in 3D: Turbulent Velocities vs. Depth

    CERN Document Server

    Shi, Ji-Ming

    2014-01-01

    Characterizing turbulence in protoplanetary disks is crucial for understanding how they accrete and spawn planets. Recent measurements of spectral line broadening promise to diagnose turbulence, with different lines probing different depths. We use 3D local hydrodynamic simulations of cooling, self-gravitating disks to resolve how motions driven by "gravito-turbulence" vary with height. We find that gravito-turbulence is practically as vigorous at altitude as at depth: even though gas at altitude is much too rarefied to be itself self-gravitating, it is strongly forced by self-gravitating overdensities at the midplane. The long-range nature of gravity means that turbulent velocities are nearly uniform vertically, increasing by just a factor of 2 from midplane to surface, even as the density ranges over nearly three orders of magnitude. The insensitivity of gravito-turbulence to height contrasts with the behavior of disks afflicted by the magnetorotational instability (MRI); in the latter case, non-circular ve...

  16. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    Science.gov (United States)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  17. Constructing Isosurfaces from 3D Data Sets Taking Account of Depth Sorting of Polyhedra

    Institute of Scientific and Technical Information of China (English)

    周勇; 唐泽圣

    1994-01-01

    Creating and rendering intermediate geometric primitives is one of the approaches to visualize data sets in 3D space. Some algorithms have been developed to construct isosurfaces from uniformly distributed 3D data sets. These algorithms assume that the function value varies linearly along the edges of each cell. For irregular 3D data sets, however, this assumption is inapplicable. Moreover, the depth sorting of cells is more complicated for irregular data sets, and it is indispensable for generating isosurface images or semitransparent isosurface images if the Z-buffer method is not adopted. In this paper, isosurface models based on the assumption that the function value has a nonlinear distribution within a tetrahedron are proposed. The depth sorting algorithm and data structures are developed for irregular data sets in which cells may be subdivided into tetrahedra. The implementation issues of this algorithm are discussed and experimental results are shown to illustrate the potential of this technique.

  18. Neural Network Based Reconstruction of a 3D Object from a 2D Wireframe

    CERN Document Server

    Johnson, Kyle; Lipson, Hod

    2010-01-01

    We propose a new approach for constructing a 3D representation from a 2D wireframe drawing. A drawing is simply a parallel projection of a 3D object onto a 2D surface; humans are able to recreate mental 3D models from 2D representations very easily, yet the process is very difficult to emulate computationally. We hypothesize that our ability to perform this construction relies on the angles in the 2D scene, among other geometric properties. Being able to reproduce this reconstruction process automatically would allow for efficient and robust 3D sketch interfaces. Our research focuses on the relationship between 2D geometry observable in the sketch and 3D geometry derived from a potential 3D construction. We present a fully automated system that constructs 3D representations from 2D wireframes using a neural network in conjunction with a genetic search algorithm.

  19. Temporal-spatial modeling of fast-moving and deforming 3D objects

    Science.gov (United States)

    Wu, Xiaoliang; Wei, Youzhi

    1998-09-01

    This paper gives a brief description of the method and techniques developed for the modeling and reconstruction of fast moving and deforming 3D objects. A new approach using close-range digital terrestrial photogrammetry in conjunction with high speed photography and videography is proposed. A sequential image matching method (SIM) has been developed to automatically process pairs of images taken continuously of any fast moving and deforming 3D objects. Using the SIM technique a temporal-spatial model (TSM) of any fast moving and deforming 3D objects can be developed. The TSM would include a series of reconstructed surface models of the fast moving and deforming 3D object in the form of 3D images. The TSM allows the 3D objects to be visualized and analyzed in sequence. The SIM method, specifically the left-right matching and forward-back matching techniques are presented in the paper. An example is given which deals with the monitoring of a typical blast rock bench in a major open pit mine in Australia. With the SIM approach and the TSM model it is possible to automatically and efficiently reconstruct the 3D images of the blasting process. This reconstruction would otherwise be impossible to achieve using a labor intensive manual processing approach based on 2D images taken from conventional high speed cameras. The case study demonstrates the potential of the SIM approach and the TSM for the automatic identification, tracking and reconstruction of any fast moving and deforming 3D targets.

  20. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    OpenAIRE

    J. Javier Yebes; Bergasa, Luis M.; Miguel García-Garrido

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban sce...

  1. An efficient 3D traveltime calculation using coarse-grid mesh for shallow-depth source

    Science.gov (United States)

    Son, Woohyun; Pyun, Sukjoon; Lee, Ho-Young; Koo, Nam-Hyung; Shin, Changsoo

    2016-10-01

    3D Kirchhoff pre-stack depth migration requires an efficient algorithm to compute first-arrival traveltimes. In this paper, we exploited a wave-equation-based traveltime calculation algorithm, which is called the suppressed wave equation estimation of traveltime (SWEET), and the equivalent source distribution (ESD) algorithm. The motivation of using the SWEET algorithm is to solve the Laplace-domain wave equation using coarse grid spacing to calculate first-arrival traveltimes. However, if a real source is located at shallow-depth close to free surface, we cannot accurately calculate the wavefield using coarse grid spacing. So, we need an additional algorithm to correctly simulate the shallow source even for the coarse grid mesh. The ESD algorithm is a method to define a set of distributed nodal sources that approximate a point source at the inter-nodal location in a velocity model with large grid spacing. Thanks to the ESD algorithm, we can efficiently calculate the first-arrival traveltimes of waves emitted from shallow source point even when we solve the Laplace-domain wave equation using a coarse-grid mesh. The proposed algorithm is applied to the SEG/EAGE 3D salt model. From the result, we note that the combination of SWEET and ESD algorithms can be successfully used for the traveltime calculation under the condition of a shallow-depth source. We also confirmed that our algorithm using coarse-grid mesh requires less computational time than the conventional SWEET algorithm using relatively fine-grid mesh.
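
    The ESD construction referenced above is more elaborate than what fits here, so the sketch below only illustrates the underlying idea under a strong simplification: a point source lying between coarse-grid nodes is replaced by weighted sources on the surrounding nodes, here with plain bilinear weights. The function name and grid spacings are assumptions, not the published algorithm.

    ```python
    # Loose illustration only: spread a shallow point source at an inter-nodal
    # location onto the four surrounding coarse-grid nodes with bilinear weights.
    import numpy as np

    def nodal_source_weights(src_x, src_z, dx, dz):
        """Return {(ix, iz): weight} for the 4 nodes surrounding (src_x, src_z)."""
        ix, iz = int(src_x // dx), int(src_z // dz)      # lower-left node indices
        fx, fz = src_x / dx - ix, src_z / dz - iz        # fractional offsets in [0, 1)
        return {
            (ix,     iz):     (1 - fx) * (1 - fz),
            (ix + 1, iz):     fx * (1 - fz),
            (ix,     iz + 1): (1 - fx) * fz,
            (ix + 1, iz + 1): fx * fz,
        }

    if __name__ == "__main__":
        # a source only 12 m deep on a 50 m coarse grid
        for node, w in nodal_source_weights(src_x=1030.0, src_z=12.0, dx=50.0, dz=50.0).items():
            print(node, round(w, 3))                     # weights sum to 1
    ```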

  2. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    Science.gov (United States)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with hands on 3D meshes. Deformations are done using different modes of interaction that we will detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the more adequate gestures is part of the work

  3. Visualization of the ROOT 3D class objects with OpenInventor-like viewers

    International Nuclear Information System (INIS)

    The class library for conversion of the ROOT 3D class objects to the .iv format for 3D image viewers is described in this paper. At present the library was tested using the STAR and ATLAS detector geometry without any changes and revision for concrete detector

  4. 3D reconstruction in PET cameras with irregular sampling and depth of interaction

    International Nuclear Information System (INIS)

    We present 3D reconstruction algorithms that address fully 3D tomographic reconstruction using septa-less, stationary, and rectangular cameras. The field of view (FOV) encompasses the entire volume enclosed by detector modules capable of measuring depth of interaction (DOI). The Filtered Backprojection based algorithms incorporate DOI, accommodate irregular sampling, and minimize interpolation in the data by defining lines of response between the measured interaction points. We use fixed-width, evenly spaced radial bins in order to use the FFT, but use irregular angular sampling to minimize the number of unnormalizable zero efficiency sinogram bins. To address persisting low efficiency bins, we perform 2D nearest neighbor radial smoothing, employ a semi-iterative procedure to estimate the unsampled data, and mash the ''in plane'' and the first oblique projections to reconstruct the 2D image in the 3DRP algorithm. We present artifact free, essentially spatially isotropic images of Monte Carlo data with FWHM resolutions o 1.50 mm. 2.25 mm, and 3.00 mm at the center, in the bulk, and at the edges and corners of the FOV respectively

  5. Novel 3-D Object Recognition Methodology Employing a Curvature-Based Histogram

    Directory of Open Access Journals (Sweden)

    Liang-Chia Chen

    2013-07-01

    In this paper, a new object recognition algorithm employing a curvature-based histogram is presented. Recognition of three-dimensional (3-D) objects using range images remains one of the most challenging problems in 3-D computer vision due to noisy and cluttered scene characteristics. The key breakthroughs for this problem lie mainly in defining unique features that distinguish the similarity among various 3-D objects. In our approach, an object detection scheme is developed to identify targets through an automated search in the range images: an initial process of object segmentation subdivides all possible objects in the scene, and a subsequent recognition process based on geometric constraints and a curvature-based histogram identifies each object. The developed method has been verified through experimental tests that confirm its feasibility.
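
    As a hedged illustration of the descriptor idea (not the exact curvature estimator of the paper), the sketch below uses the PCA surface variation of each point's neighbourhood as a curvature proxy and histograms it into a fixed-length vector that can be compared with an L2 distance. The neighbourhood size and bin count are arbitrary assumptions.

    ```python
    # Curvature-histogram descriptor sketch using a PCA "surface variation" proxy.
    import numpy as np
    from scipy.spatial import cKDTree

    def curvature_histogram(points, k=16, bins=32):
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)                       # k nearest neighbours per point
        curv = np.empty(len(points))
        for i, nb in enumerate(idx):
            nbrs = points[nb] - points[nb].mean(axis=0)
            eigvals = np.linalg.eigvalsh(nbrs.T @ nbrs)        # ascending eigenvalues
            curv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)   # surface variation in [0, 1/3]
        hist, _ = np.histogram(curv, bins=bins, range=(0.0, 1.0 / 3.0), density=True)
        return hist

    if __name__ == "__main__":
        sphere = np.random.randn(2000, 3)
        sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
        plane = np.c_[np.random.rand(2000, 2), np.zeros(2000)]
        h_sphere, h_plane = curvature_histogram(sphere), curvature_histogram(plane)
        print("L2 distance between descriptors:", np.linalg.norm(h_sphere - h_plane))
    ```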

  6. Superquadric Similarity Measure with Spherical Harmonics in 3D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    XING Weiwei; LIU Weibin; YUAN Baozong

    2005-01-01

    This paper proposes a novel approach for superquadric similarity measure in 3D object recognition. The 3D objects are represented by a composite volumetric representation of Superquadric (SQ)-based geons, which are the new and powerful volumetric models adequate for 3D recognition. The proposed approach is processed through three stages: first, a novel sampling algorithm is designed for searching Chebyshev nodes on superquadric surface to construct the discrete spherical function representing superquadric 3D shape; secondly, the fast Spherical Harmonic Transform is performed on the discrete spherical function to obtain the rotation invariant descriptor of superquadric; thirdly, the similarity of superquadrics is measured by computing the L2 difference between two obtained descriptors. In addition, an integrated processing framework is presented for 3D object recognition with SQ-based geons from the real 3D data, which implements the approach proposed in this paper for shape similarity measure between SQ-based geons. Evaluation experiments demonstrate that the proposed approach is very efficient and robust for similarity measure of superquadric models. The research lays a foundation for developing SQ-based 3D object recognition systems.
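
    A minimal sketch of the rotation-invariant descriptor described above, assuming the superquadric is available as a radial function r(theta, phi) sampled on the sphere (an ellipsoid stands in here) and using a crude quadrature rather than the paper's Chebyshev sampling and fast transform. Per-degree spherical-harmonic energies are rotation invariant, and descriptors are compared with an L2 norm.

    ```python
    # Rotation-invariant spherical-harmonic energy descriptor of a radial shape function.
    import numpy as np
    from scipy.special import sph_harm

    def sh_energy_descriptor(radial_fn, l_max=8, n_theta=32, n_phi=64):
        theta = np.linspace(0, np.pi, n_theta)                 # polar angle
        phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False) # azimuth
        T, P = np.meshgrid(theta, phi, indexing="ij")
        f = radial_fn(T, P)
        w = np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)  # crude quadrature weights
        desc = []
        for l in range(l_max + 1):
            energy = 0.0
            for m in range(-l, l + 1):
                Y = sph_harm(m, l, P, T)                       # scipy: sph_harm(m, l, azimuth, polar)
                c = np.sum(f * np.conj(Y) * w)                 # projection coefficient
                energy += np.abs(c) ** 2
            desc.append(energy)                                # per-degree energy is rotation invariant
        return np.sqrt(np.array(desc))

    if __name__ == "__main__":
        ellipsoid = lambda t, p: 1.0 / np.sqrt((np.sin(t) * np.cos(p) / 1.5) ** 2
                                               + (np.sin(t) * np.sin(p)) ** 2
                                               + (np.cos(t) / 0.7) ** 2)
        sphere = lambda t, p: np.ones_like(t)
        d1, d2 = sh_energy_descriptor(ellipsoid), sh_energy_descriptor(sphere)
        print("L2 similarity distance:", np.linalg.norm(d1 - d2))
    ```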

  7. Intuitiveness 3D objects Interaction in Augmented Reality Using S-PI Algorithm

    Directory of Open Access Journals (Sweden)

    Ajune Wanis Ismail

    2013-07-01

    A number of researchers have developed interaction techniques for Augmented Reality (AR) applications. Some of them have proposed new techniques for user interaction with different types of interfaces, which hold great promise for natural, intuitive user interaction with 3D data. This paper explores 3D object manipulation performed with the single-point interaction (S-PI) technique in an AR environment. The new interaction algorithm, the S-PI technique, is a point-based intersection method designed to detect interaction behaviors such as translate, rotate and clone for intuitive 3D object handling. The S-PI technique is proposed together with marker-based tracking in order to improve the trade-off between accuracy and speed when manipulating 3D objects in real time. The method is required to be robust to ensure that the real and virtual elements can be combined relative to the user's viewpoint while reducing system lag.

  8. The role of the foreshortening cue in the perception of 3D object slant.

    Science.gov (United States)

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (change in the ratio of width to length) can induce slant perception for single shapes for both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performances similar to those obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and the cortical organization used in 3D object perception. PMID:24216007

  9. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was the NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  10. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    Institute of Scientific and Technical Information of China (English)

    ZHONG DengHua; LI MingChao; LIU Jie

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was the NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  11. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    CERN Document Server

    Wang, Lei

    2014-01-01

    Conventional 3D printing is generally time-consuming, and printable metal inks are rather limited. As an alternative, we propose liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning one, two and three dimensions as well as more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in conventional 3D printing, the liquid-phase manufacturing offers a much higher cooling rate and thus significantly improves the speed of fabricating metal objects. This unique strategy also efficiently prevents the liquid metal inks from air oxidation, which is otherwise hard to avoid in ordinary 3D printing. Several key physical factors (such as the properties of the cooling fluid, the injection speed and needle diameter, and the type and properties of the printing ink) were disclosed that evidently affect the printing quality. In addit...

  12. The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking

    CERN Document Server

    Quesada, Luis

    2011-01-01

    3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge on the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on 3D motion tracking. 3D motion tracking systems that require no knowledge of the target object and run on a single low-budget camera require estimates of the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and we present a novel 3D motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self occluded, or motion blurred, given that it is opaque, evenly colored, and contrasts enough with t...

  13. Web based Interactive 3D Learning Objects for Learning Management Systems

    OpenAIRE

    Stefan Hesse; Stefan Gumhold

    2012-01-01

    In this paper, we present an approach to create and integrate interactive 3D learning objects of high quality for higher education into a learning management system. The use of these resources makes it possible to visualize topics such as electro-technical and physical processes in the interior of complex devices. This paper addresses the challenge of combining rich interactivity and adequate realism with 3D exercise material for distance e-learning.

  14. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    OpenAIRE

    Koeva, M. N.

    2016-01-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interact...

  15. 3D photography in the objective analysis of volume augmentation including fat augmentation and dermal fillers.

    Science.gov (United States)

    Meier, Jason D; Glasgold, Robert A; Glasgold, Mark J

    2011-11-01

    The authors present quantitative and objective 3D data from their studies showing long-term results with facial volume augmentation. The first study analyzes fat grafting of the midface and the second study presents augmentation of the tear trough with hyaluronic filler. Surgeons using 3D quantitative analysis can learn the duration of results and the optimal amount to inject, as well as showing patients results that are not demonstrable with standard, 2D photography. PMID:22004863

  16. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe.

    OpenAIRE

    Harris, EJ; Miller, NR; Bamber, JC; Symonds-Tayler, JR; Evans, PM

    2011-01-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogenous tissue for motion in the azimuthal and elevatio...

  17. 3D Imaging of Dielectric Objects Buried under a Rough Surface by Using CSI

    Directory of Open Access Journals (Sweden)

    Evrim Tetik

    2015-01-01

    A 3D scalar electromagnetic imaging of dielectric objects buried under a rough surface is presented. The problem has been treated as a 3D scalar problem for computational simplicity, as a first step towards the 3D vector problem. The complexity of the background in which the object is buried is simplified by obtaining the Green's function of the background, which consists of two homogeneous half-spaces and a rough interface between them, by using the Buried Object Approach (BOA). The Green's function of the two-part space with a planar interface is obtained to be used in the process. Reconstruction of the location, shape, and constitutive parameters of the objects is achieved by the Contrast Source Inversion (CSI) method with conjugate gradient. The scattered field data used in the inverse problem are obtained via both the Method of Moments (MoM) and a Comsol Multiphysics pressure acoustics model.

  18. 3D Projection on Physical Objects: Design Insights from Five Real Life Cases

    DEFF Research Database (Denmark)

    Dalsgaard, Peter; Halskov, Kim

    2011-01-01

    3D projection on physical objects is a particular kind of Augmented Reality that augments a physical object by projecting digital content directly onto it, rather than by using a mediating device, such as a mobile phone or a head-mounted display. In this paper, we present five cases in which we have developed installations that employ 3D projection on physical objects. The installations have been developed in collaboration with external partners and have been put into use in real-life settings such as museums, exhibitions and interaction design laboratories. On the basis of these cases, we...

  19. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration

    NARCIS (Netherlands)

    Meijer, Frank; Broek, van den Egon L.

    2010-01-01

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of three and five Biederman geons, respectively) actively, passively, or not at all. Both their 3D ...

  20. Ultrasonic cleaning of 3D printed objects and Cleaning Challenge Devices

    NARCIS (Netherlands)

    Verhaagen, Bram; Zanderink, Thijs; Fernandez Rivas, David

    2016-01-01

    We report our experiences in the evaluation of ultrasonic cleaning processes of objects made with additive manufacturing techniques, specifically three-dimensional (3D) printers. These objects need to be cleaned of support material added during the printing process. The support material can be remov

  1. Assessing nest-building behavior of mice using a 3D depth camera.

    Science.gov (United States)

    Okayama, Tsuyoshi; Goto, Tatsuhiko; Toyoda, Atsushi

    2015-08-15

    We developed a novel method to evaluate the nest-building behavior of mice using an inexpensive depth camera. The depth camera clearly captured nest-building behavior. Using three-dimensional information from the depth camera, we obtained objective features for assessing nest-building behavior, including "volume," "radius," and "mean height". The "volume" represents the change in volume of the nesting material, a pressed cotton square that a mouse shreds and untangles in order to build its nest. During the nest-building process, the total volume of cotton fragments is increased. The "radius" refers to the radius of the circle enclosing the fragments of cotton. It describes the extent of nesting material dispersion. The "radius" averaged approximately 60mm when a nest was built. The "mean height" represents the change in the mean height of objects. If the nest walls were high, the "mean height" was also high. These features provided us with useful information for assessment of nest-building behavior, similar to conventional methods for the assessment of nest building. However, using the novel method, we found that JF1 mice built nests with higher walls than B6 mice, and B6 mice built nests faster than JF1 mice. Thus, our novel method can evaluate the differences in nest-building behavior that cannot be detected or quantified by conventional methods. In future studies, we will evaluate nest-building behaviors of genetically modified, as well as several inbred, strains of mice, with several nesting materials.
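
    To make the three features concrete, here is a hypothetical sketch (units, thresholds and the flat-floor assumption are all made up, not taken from the study) that derives "volume", "radius" and "mean height" from a single top-down depth frame.

    ```python
    # Derive the nest-building features from one depth frame of the cage floor.
    import numpy as np

    def nest_features(depth_mm, floor_mm, px_area_mm2=1.0, min_height_mm=2.0):
        """depth_mm: (H, W) distances from the camera; floor_mm: scalar floor distance."""
        height = np.clip(floor_mm - depth_mm, 0.0, None)        # height above the floor
        material = height > min_height_mm                       # pixels covered by nesting material
        volume = float((height[material] * px_area_mm2).sum())  # accumulated material volume (mm^3)
        ys, xs = np.nonzero(material)
        if len(xs) == 0:
            return {"volume": 0.0, "radius": 0.0, "mean_height": 0.0}
        cx, cy = xs.mean(), ys.mean()
        radius = float(np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2).max())   # enclosing radius (px)
        return {"volume": volume, "radius": radius,
                "mean_height": float(height[material].mean())}

    if __name__ == "__main__":
        frame = np.full((240, 320), 600.0)                       # empty cage, floor at 600 mm
        frame[100:140, 150:200] -= np.random.rand(40, 50) * 30   # a heap of shredded cotton
        print(nest_features(frame, floor_mm=600.0))
    ```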

  2. Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration

    OpenAIRE

    Kriegel, Simon

    2015-01-01

    The thesis Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration presents an approach for efficient model generation of small-scale objects applying a robot-sensor system. Active scene exploration incorporates object recognition methods for analyzing a scene of partially known objects as well as exploration approaches for autonomous modeling of unknown parts. Here, recognition, exploration, and planning methods are extended and combined in a single scene exploration system, e...

  3. Three-dimensional object recognition using gradient descent and the universal 3-D array grammar

    Science.gov (United States)

    Baird, Leemon C., III; Wang, Patrick S. P.

    1992-02-01

    A new algorithm is presented for applying Marill's minimum standard deviation of angles (MSDA) principle for interpreting line drawings without models. Even though no explicit models or additional heuristics are included, the algorithm tends to reach the same 3-D interpretations of 2-D line drawings that humans do. Marill's original algorithm repeatedly generated a set of interpretations and chose the one with the lowest standard deviation of angles (SDA). The algorithm presented here explicitly calculates the partial derivatives of SDA with respect to all adjustable parameters, and follows this gradient to minimize SDA. For a picture with lines meeting at m points forming n angles, the gradient descent algorithm requires O(n) time to adjust all the points, while the original algorithm required O(mn) time to do so. For the pictures described by Marill, this gradient descent algorithm running on a Macintosh II was found to be one to two orders of magnitude faster than the original algorithm running on a Symbolics, while still giving comparable results. Once the 3-D interpretation of the line drawing has been found, the 3-D object can be reduced to a description string using the Universal 3-D Array Grammar. This is a general grammar which allows any connected object represented as a 3-D array of pixels to be reduced to a description string. The algorithm based on this grammar is well suited to parallel computation, and could run efficiently on parallel hardware. This paper describes both the MSDA gradient descent algorithm and the Universal 3-D Array Grammar algorithm. Together, they transform a 2-D line drawing represented as a list of line segments into a string describing the 3-D object pictured. The strings could then be used for object recognition, learning, or storage for later manipulation.
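
    A small stand-in for the MSDA procedure, assuming fixed 2D junction coordinates and free depths z; unlike the paper, which derives the partial derivatives of the SDA analytically, this sketch uses a numerical gradient for brevity, so it illustrates the objective rather than the O(n) update.

    ```python
    # Gradient descent on the standard deviation of angles (SDA) of a line drawing.
    import numpy as np

    def sda(z, xy, angle_triples):
        """SDA of the 3D interpretation when x, y are fixed and depths z are free."""
        pts = np.column_stack([xy, z])
        angles = []
        for c, a, b in angle_triples:               # angle at junction c between edges c-a and c-b
            v1, v2 = pts[a] - pts[c], pts[b] - pts[c]
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
            angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
        return float(np.std(angles))

    def minimize_sda(xy, angle_triples, steps=2000, lr=0.05, eps=1e-4, seed=0):
        z = 0.1 * np.random.default_rng(seed).standard_normal(len(xy))  # break the flat z=0 saddle
        for _ in range(steps):
            grad = np.zeros_like(z)
            for i in range(len(z)):                 # numerical partial derivatives of the SDA
                zp, zm = z.copy(), z.copy()
                zp[i] += eps
                zm[i] -= eps
                grad[i] = (sda(zp, xy, angle_triples) - sda(zm, xy, angle_triples)) / (2 * eps)
            z -= lr * grad                          # follow the gradient downhill
        return z

    if __name__ == "__main__":
        # an asymmetric three-edge junction drawn on paper (x, y only)
        xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-0.7, -0.7]])
        triples = [(0, 1, 2), (0, 2, 3), (0, 3, 1)]
        z = minimize_sda(xy, triples)
        print("recovered depths:", np.round(z, 3), "final SDA:", round(sda(z, xy, triples), 4))
    ```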

  4. Generation of geometric representations of 3D objects in CAD/CAM by digital photogrammetry

    Science.gov (United States)

    Li, Rongxing

    This paper presents a method for the generation of geometric representations of 3D objects by digital photogrammetry. In CAD/CAM systems geometric modelers are usually used to create three-dimensional (3D) geometric representations for design and manufacturing purposes. However, in cases where geometric information such as dimensions and shapes of objects are not available, measurements of physically existing objects become necessary. In this paper, geometric parameters of primitives of 3D geometric representations such as Boundary Representation (B-rep), Constructive Solid Geometry (CSG), and digital surface models are determined by digital image matching techniques. An algorithm for reconstruction of surfaces with discontinuities is developed. Interfaces between digital photogrammetric data and these geometric representations are realized. This method can be applied to design and manufacturing in mechanical engineering, automobile industry, robot technology, spatial information systems and others.

  5. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    Science.gov (United States)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices. PMID:27454835

  6. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    Science.gov (United States)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ˜50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  7. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    Science.gov (United States)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core–shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  8. A convolutional learning system for object classification in 3-D Lidar data.

    Science.gov (United States)

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.
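
    The multiview setting mentioned in point (1) can be illustrated by the front end alone: the sketch below (not the authors' network or training scheme) renders a segmented point cloud into several depth images taken from viewpoints spread around the object, the kind of input a CNN could then consume.

    ```python
    # Render a segmented Lidar point cloud into K depth-image views around the object.
    import numpy as np

    def depth_views(points, n_views=8, res=32):
        """points: (N, 3) segmented object points. Returns (n_views, res, res) depth maps."""
        pts = points - points.mean(axis=0)
        scale = np.abs(pts).max() + 1e-9
        views = np.zeros((n_views, res, res))
        for k, ang in enumerate(np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)):
            c, s = np.cos(ang), np.sin(ang)
            rot = pts @ np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotate about the vertical axis
            u = np.clip(((rot[:, 0] / scale + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
            v = np.clip(((rot[:, 1] / scale + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
            depth = rot[:, 2] / scale + 1.0                            # normalised depth in (0, 2]
            for ui, vi, d in zip(u, v, depth):                         # keep the nearest point per pixel
                if views[k, vi, ui] == 0.0 or d < views[k, vi, ui]:
                    views[k, vi, ui] = d
        return views

    if __name__ == "__main__":
        cloud = np.random.randn(5000, 3) * [1.0, 2.0, 0.5]             # an elongated blob of returns
        print(depth_views(cloud).shape)                                 # (8, 32, 32), ready for a CNN
    ```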

  9. 3D-Web-GIS RFID location sensing system for construction objects.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
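
    A hedged sketch of the two-stage idea: simulated annealing to find a coarse tag position from reader-to-tag range estimates, followed by gradient descent on the squared range residuals. The reader layout, cooling schedule and noise level are invented for illustration and are not the paper's settings.

    ```python
    # Two-stage tag localization: simulated annealing for a coarse fix, gradient descent to refine.
    import numpy as np

    rng = np.random.default_rng(1)

    def residual(pos, readers, ranges):
        return np.linalg.norm(readers - pos, axis=1) - ranges

    def cost(pos, readers, ranges):
        return float((residual(pos, readers, ranges) ** 2).sum())

    def locate(readers, ranges, sa_steps=3000, gd_steps=500, lr=0.01):
        pos = readers.mean(axis=0)                      # start at the reader centroid
        temp = 10.0
        for _ in range(sa_steps):                       # stage 1: simulated annealing
            cand = pos + rng.normal(scale=0.5, size=3)
            delta = cost(cand, readers, ranges) - cost(pos, readers, ranges)
            if delta < 0 or rng.random() < np.exp(-delta / temp):
                pos = cand
            temp *= 0.999
        for _ in range(gd_steps):                       # stage 2: gradient descent refinement
            r = residual(pos, readers, ranges)
            d = readers - pos
            grad = -2 * ((r / (np.linalg.norm(d, axis=1) + 1e-12))[:, None] * d).sum(axis=0)
            pos = pos - lr * grad
        return pos

    if __name__ == "__main__":
        readers = np.array([[0, 0, 3], [10, 0, 3], [10, 10, 3], [0, 10, 3]], float)
        true_pos = np.array([6.0, 4.0, 1.2])
        ranges = np.linalg.norm(readers - true_pos, axis=1) + rng.normal(scale=0.05, size=4)
        print("estimated tag position:", np.round(locate(readers, ranges), 2))
    ```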

  10. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    International Nuclear Information System (INIS)

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the transducer sweep direction on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0–35 mm s−1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed and tracking failure was observed at speeds of greater than 14 mm s−1. Tracking failure was attributed to speckle de-correlation as a result of decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, using the swept-probe technology, speckle tracking accuracy is currently too poor to track homogeneous tissue.

  11. Extracting Superquadric-based Geon Description for 3D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    XING Weiwei; LIU Weibin; YUAN Baozong

    2005-01-01

    Geon recognition is a key issue in developing 3D object recognition systems based on the Recognition-by-components (RBC) theory. In this paper, we present a novel approach for extracting superquadric-based geon descriptions of 3D volumetric primitives from real shape data, which integrates the advantages of deformable superquadric model reconstruction and SVM-based classification. First, a Real-coded genetic algorithm (RCGA) is used for superquadric fitting to 3D data, yielding the quantitative parametric information; then a new sophisticated feature set is derived from the obtained superquadric parameters for the next step; finally, SVM-based classification is proposed and implemented for geon recognition, yielding the qualitative geometric information. Furthermore, knowledge-based feedback from the SVM network is introduced to improve the classification performance. Experimental results show that our approach is efficient and precise for extracting superquadric-based geon descriptions from real shape data in 3D object recognition. The results are very encouraging and are of significant benefit for developing general 3D object recognition systems.

  12. Digital Curvatures Applied to 3D Object Analysis and Recognition: A Case Study

    CERN Document Server

    Chen, Li

    2009-01-01

    In this paper, we propose using curvatures in digital space for 3D object analysis and recognition. Since direct adjacency has only six types of digital surface points in local configurations, it is easy to determine and classify the discrete curvatures for every point on the boundary of a 3D object. Unlike the boundary simplicial decomposition (triangulation), the curvature can take any real value. This sometimes makes it difficult to find the right value for a threshold. This paper focuses on the global properties of categorizing curvatures for small regions. We apply both digital Gaussian curvatures and digital mean curvatures to 3D shapes. This paper proposes a multi-scale method for 3D object analysis and a vector method for 3D similarity classification. We use these methods for face recognition and shape classification. We have found that the Gaussian curvatures mainly describe the global features and average characteristics such as the five regions of a human face. However, mean curvatures can be used to find ...

  13. 3D high-efficiency video coding for multi-view video and depth data.

    Science.gov (United States)

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter, and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  14. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  15. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Science.gov (United States)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  16. Real-time patch sweeping for high-quality depth estimation in 3D video conferencing applications

    Science.gov (United States)

    Waizenegger, Wolfgang; Feldmann, Ingo; Schreer, Oliver

    2011-03-01

    In future 3D videoconferencing systems, depth estimation is required to support autostereoscopic displays and even more important, to provide eye contact. Real-time 3D video processing is currently possible, but within some limits. Since traditional CPU centred sub-pixel disparity estimation is computationally expensive, the depth resolution of fast stereo approaches is directly linked to pixel quantization and the selected stereo baseline. In this work we present a novel, highly parallelizable algorithm that is capable of dealing with arbitrary depth resolutions while avoiding texture interpolation related runtime penalties by application of GPU centred design. The cornerstone of our patch sweeping approach is the fusion of space sweeping and patch based 3D estimation techniques. Especially for narrow baseline multi-camera configurations, as commonly used for 3D videoconferencing systems (e.g. [1]), it preserves the strengths of both techniques and avoid their shortcomings at the same time. Moreover, we provide a sophisticated parameterization and quantization scheme that establishes a very good scalability of our algorithm in terms of computation time and depth estimation quality. Furthermore, we present an optimized CUDA implementation for a multi GPU setup in a cluster environment. For each GPU, it performs three pair wise high quality depth estimations for a trifocal narrow baseline camera configuration on a 256x256 image block within real-time.
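
    For intuition only, here is a toy depth sweep for a rectified stereo pair (not the authors' GPU patch-sweeping fusion): each depth hypothesis is converted to a disparity, patches are compared with a summed absolute difference, and the best hypothesis is kept per pixel. The focal-length-times-baseline value and the hypothesis range are assumptions.

    ```python
    # Toy per-pixel depth sweep for a rectified stereo pair with patch SAD matching.
    import numpy as np
    from scipy.signal import convolve2d

    def depth_sweep(left, right, f_times_b=100.0, depths=np.linspace(1.0, 10.0, 64), patch=3):
        h, w = left.shape
        pad = patch // 2
        L = np.pad(left, pad, mode="edge")
        R = np.pad(right, pad, mode="edge")
        kernel = np.ones((patch, patch)) / patch ** 2
        best_cost = np.full((h, w), np.inf)
        best_depth = np.zeros((h, w))
        for z in depths:
            disp = int(round(f_times_b / z))                 # disparity implied by this depth
            if disp >= w:
                continue
            shifted = np.roll(R, disp, axis=1)               # align right image to the hypothesis
            cost = convolve2d(np.abs(L - shifted), kernel, mode="valid")   # patch SAD cost
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_depth[better] = z
        return best_depth

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        left = rng.random((64, 64))
        right = np.roll(left, -20, axis=1)                   # ground-truth disparity of 20 px
        print("median recovered depth:", np.median(depth_sweep(left, right)))   # ~ 100 / 20 = 5
    ```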

  17. Local shape feature fusion for improved matching, pose estimation and 3D object recognition

    DEFF Research Database (Denmark)

    Buch, Anders Glent; Petersen, Henrik Gordon; Krüger, Norbert

    2016-01-01

    We provide new insights to the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state of the art features to systematic evaluations based on multiple datasets from different sources in a uniform manner....

  18. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    OpenAIRE

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development world wide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still need further research. Researc...

  19. Steady-state particle tracking in the object-oriented regional groundwater model ZOOMQ3D

    OpenAIRE

    Jackson, C.R.

    2002-01-01

    This report describes the development of a steady-state particle tracking code for use in conjunction with the object-oriented regional groundwater flow model, ZOOMQ3D (Jackson, 2001). Like the flow model, the particle tracking software, ZOOPT, is written using an object-oriented approach to promote its extensibility and flexibility. ZOOPT enables the definition of steady-state pathlines in three dimensions. Particles can be tracked in both the forward and reverse directions en...

  20. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weigh the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model, which defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error, is adopted in this study. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band utilizing the Watson model and the human visual system after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by using multiplicative fitting of the LR-IQA and DP-IQA metrics based on weighting. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score), and the correlation coefficient and monotonicity are more than 0.92 under five types of distortions such as Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  1. Depth migration and de-migration for 3-D migration velocity analysis; Migration profondeur et demigration pour l'analyse de vitesse de migration 3D

    Energy Technology Data Exchange (ETDEWEB)

    Assouline, F.

    2001-07-01

    3-D seismic imaging of complex geologic structures requires the use of pre-stack imaging techniques, the post-stack ones being unsuitable in that case. Indeed, pre-stack depth migration is a technique which allows complex structures to be imaged accurately, provided that a sufficiently accurate subsurface velocity model is available. The determination of this velocity model is thus a key element of seismic imaging, and to this end, migration velocity analysis methods have attracted considerable interest. The SMART method is a specific migration velocity analysis method: its distinctive feature is that it does not rely on any restrictive assumptions about the complexity of the velocity model to be determined. The SMART method uses a detour through the pre-stack depth-migrated domain to extract multi-offset kinematic information that is hardly accessible in the time domain. Once the interpretation of the pre-stack depth-migrated seismic data is achieved, a kinematic de-migration technique applied to the interpreted events yields a consistent kinematic database (i.e. reflection travel-times). Then, the inversion of these travel-times, by means of reflection tomography, allows the determination of an accurate velocity model. To be able to image geologic structures for which the 3-D character is predominant, we have studied the implementation of migration velocity analysis in 3-D in the context of the SMART method and, more generally, we have developed techniques to overcome the intrinsic difficulties of the 3-D aspects of seismic imaging. Indeed, although formally the SMART method can be directly applied to the case of 3-D complex structures, the feasibility of its implementation requires a careful choice of the imaging domain. Once this choice is made, it is also necessary to devise a method which, via the associated de-migration, yields the reflection travel-times. We first consider the offset domain which constitutes, still today, the strategy most usually used

  2. On 3D simulation of moving objects in a digital earth system

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    "How do the rescue helicopters find out an optimized path to arrive at the site of a disaster as soon as possible?" or "How are the flight procedures over mountains and plateaus simulated?" and so on.In this paper a script language on spatial moving objects is presented by abstracting 3D spatial moving objects’ behavior when implementing moving objects simulation in 3D digital Earth scene,which is based on a platform of digital China named "ChinaStar".The definition of this script language,its morphology and syntax,its compiling and mediate language generating,and the behavior and state control of spatial moving objects are discussed emphatically.In addition,the language’s applications and implementation are also discussed.

  3. Full-viewpoint 3D Space Object Recognition Based on Kernel Locality Preserving Projections

    Institute of Scientific and Technical Information of China (English)

    Meng Gang; Jiang Zhiguo; Liu Zhengyi; Zhang Haopeng; Zhao Danpei

    2010-01-01

    Space object recognition plays an important role in spatial exploitation and surveillance, and it faces two main problems: lack of data and drastic changes in viewpoint. In this article, we first build a three-dimensional (3D) satellite dataset named the BUAA Satellite Image Dataset (BUAA-SID 1.0) to supply data for 3D space object research. Then, based on the dataset, we propose to recognize full-viewpoint 3D space objects based on kernel locality preserving projections (KLPP). To obtain a more accurate and separable description of the objects, we first build feature vectors employing moment invariants, Fourier descriptors, region covariance and histograms of oriented gradients. Then, we map the features into a kernel space followed by dimensionality reduction using KLPP to obtain the submanifold of the features. Finally, k-nearest neighbor (kNN) is used to accomplish the classification. Experimental results show that the proposed approach is more appropriate for space object recognition, mainly considering changes of viewpoint. Encouraging recognition rates can be obtained based on images in BUAA-SID 1.0, and the highest recognition result reaches 95.87%.

  4. Ankh in the depth - Subdermal 3D art implants: Radiological identification with body modification.

    Science.gov (United States)

    Schaerli, Sarah; Berger, Florian; Thali, Michael J; Gascho, Dominic

    2016-05-01

    One of the core tasks in forensic medico-legal investigations is the identification of the deceased. Radiological identification using postmortem computed tomography (PMCT) is a powerful technique. In general, the implementation of forensic PMCT is rising worldwide. In addition to specific anatomical structures, medical implants or prostheses serve as markers for the comparison of antemortem and postmortem images to identify the deceased. However, non-medical implants, such as subdermal three-dimensional (3D) art implants, also allow for radiological identification. These implants are a type of body modification that have become increasingly popular over the last several decades and will therefore be employed more frequently in radiological identification in the future. To the best of our knowledge, this is the first case of radiological identification with a subdermal 3D art implant. Further, the present case shows the characteristics of a silicone 3D art implant on computed tomography, magnetic resonance imaging and X-rays. PMID:27161914

  5. Registration of untypical 3D objects in Polish cadastre - do we need 3D cadastre? / Rejestracja nietypowych obiektów 3D w polskim katastrze - czy istnieje potrzeba wdrożenia katastru 3D?

    Science.gov (United States)

    Marcin, Karabin

    2012-11-01

    The Polish cadastral system consists of two registers: the cadastre and the land register. The cadastre records data on cadastral objects (land parcels, buildings and premises), their location (in a two-dimensional coordinate system), their attributes and their owners. The land register contains data concerning ownership and other rights to the property. Registration of a land parcel without spatial objects located on its surface is not problematic, and registration of buildings and premises in typical cases is not a problem either. The situation becomes more complicated in cases of multiple use of the space above or below the parcel surface and with buildings of more complex construction. The paper presents the rules concerning the registration of various untypical 3D objects located within the city of Warsaw, together with an analysis of the data on those objects recorded in the cadastre and the land register; this continues the author's earlier detailed research. The aim of this paper is to answer the question of whether a 3D cadastre is really needed in Poland.

  6. Retrieval of 3D-Position of a Passive Object Using Infrared LEDs and Photodiodes

    DEFF Research Database (Denmark)

    Christensen, Henrik Vie

    2005-01-01

    A sensor using infrared emitter/receiver pairs to determine the position of a passive object is presented. An array with a small number of infrared emitter/receiver pairs is proposed as the sensing part to acquire information on the object position. The emitters illuminate the object and the intens...... experiments show good agreement between actual and retrieved positions when tracking a ball. The ball has been successfully replaced by a human hand, and a "3D non-touch screen" with a human hand as the "pointing device" is shown to be possible....

  7. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Directory of Open Access Journals (Sweden)

    Sorin M. Grigorescu

    2013-04-01

    In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch-based tracking system and a 2D-3D point features tracker. The real-time visual information is composed of RGB-D data streams acquired from state-of-the-art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker-based tracking system which delivers 3D information with respect to the position of the marker.

  8. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    CERN Document Server

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subjected to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shapes of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion to the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine the volume, surface area, sphericity index and refractive index of the RBCs for each osmotic condition.

  9. Computation of Edge-Edge-Edge Events Based on Conicoid Theory for 3-D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    WU Chenye; MA Huimin

    2009-01-01

    The availability of a good viewpoint space partition is crucial in three-dimensional (3-D) object recognition based on the aspect graph approach. Two important kinds of events are depicted by the aspect graph approach: edge-edge-edge (EEE) events and edge-vertex (EV) events. This paper presents an algorithm to compute EEE events by characteristic analysis based on conicoid theory, in contrast to current algorithms that focus mainly on EV events and often overlook the importance of EEE events. The paper also provides a standard flowchart for viewpoint space partitioning based on aspect graph theory that makes it suitable for perspective models. The partitioning result demonstrates the algorithm's efficiency, with more valuable viewpoints found with the help of EEE events, which helps to achieve a high recognition rate for 3-D object recognition.

  10. Improving object detection in 2D images using a 3D world model

    Science.gov (United States)

    Viggh, Herbert E. M.; Cho, Peter L.; Armstrong-Crews, Nicholas; Nam, Myra; Shah, Danelle C.; Brown, Geoffrey E.

    2014-05-01

    A mobile robot operating in a netcentric environment can utilize offboard resources on the network to improve its local perception. One such offboard resource is a world model built and maintained by other sensor systems. In this paper we present results from research into improving the performance of Deformable Parts Model object detection algorithms by using an offboard 3D world model. Experiments were run for detecting both people and cars in 2D photographs taken in an urban environment. After generating candidate object detections, a 3D world model built from airborne Light Detection and Ranging (LIDAR) and aerial photographs was used to filter out false alarms using several types of geometric reasoning. Comparison of the baseline detection performance to the performance after false alarm filtering showed a significant decrease in false alarms for a given probability of detection.
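
    As one concrete flavor of such geometric reasoning, the sketch below rejects candidate "person" boxes whose implied physical height is implausible given a range looked up in a 3D world model. The focal length, height limits and interfaces are assumptions for illustration, not the paper's actual filters.

    ```python
    # Sketch: geometric false-alarm filtering of 2D detections against a 3D
    # world model (hypothetical interfaces; not the authors' implementation).
    import numpy as np

    def implied_height_m(box, range_m, focal_px):
        """Physical height implied by a 2D box at an estimated range."""
        x0, y0, x1, y1 = box
        return (y1 - y0) * range_m / focal_px

    def filter_detections(boxes, ranges_from_world_model, focal_px=1400.0,
                          h_min=1.2, h_max=2.3):
        """Keep 'person' candidates whose implied physical height is plausible."""
        kept = []
        for box, rng_m in zip(boxes, ranges_from_world_model):
            if h_min <= implied_height_m(box, rng_m, focal_px) <= h_max:
                kept.append(box)
        return kept

    # Example: two candidates; the second is far too tall to be a person.
    boxes = [(100, 200, 160, 420), (300, 50, 420, 600)]
    ranges = [12.0, 30.0]      # ranges (m) looked up in the LIDAR world model
    print(filter_detections(boxes, ranges))
    ```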

  11. Architectural Reconstruction of 3D Building Objects through Semantic Knowledge Management

    OpenAIRE

    Yucong, Duan; Cruz, Christophe; Nicolle, Christophe

    2010-01-01

    This paper presents ongoing research which aims at combining geometrical analysis of point clouds and semantic rules to detect 3D building objects. Firstly, by applying a previous semantic formalization investigation, we propose a classification of related knowledge into definitions, partial knowledge and ambiguous knowledge to facilitate understanding and design. Secondly, an empirical implementation is conducted on a simplified building prototype complying with t...

  12. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Bashar Alsadik

    2014-03-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”, a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured with a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  13. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  14. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    Science.gov (United States)

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
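
    A minimal random-hyperplane LSH index, illustrating the approximate nearest-neighbour lookup step described above; the actual surface descriptors, hash family and the MAP-based verification over the joint 3D-signature space are not reproduced here.

    ```python
    # Sketch: random-hyperplane LSH for approximate nearest-neighbour lookup
    # of surface descriptors (illustrative only, with synthetic data).
    import numpy as np
    from collections import defaultdict

    class HyperplaneLSH:
        def __init__(self, dim, n_bits=16, seed=0):
            self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
            self.table = defaultdict(list)

        def _key(self, v):
            # Sign pattern of the projections onto the random hyperplanes.
            return tuple((self.planes @ v > 0).astype(np.uint8))

        def index(self, descriptors, model_ids):
            for vec, mid in zip(descriptors, model_ids):
                self.table[self._key(vec)].append(mid)

        def query(self, v):
            return self.table.get(self._key(v), [])

    rng = np.random.default_rng(1)
    db = rng.normal(size=(10000, 64))        # descriptors from all models
    ids = rng.integers(0, 365, size=10000)   # owning model id per descriptor
    lsh = HyperplaneLSH(dim=64)
    lsh.index(db, ids)
    print(lsh.query(db[42]))                 # candidate models for one scene descriptor
    ```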

  15. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back-projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed-sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed-sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
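
    For readers unfamiliar with ART, the toy sketch below runs one Kaczmarz-style ART loop on a small synthetic linear system A x = b. It is not taken from the simulator, and the TV-regularized variant would add a smoothing step after each sweep.

    ```python
    # Sketch of an ART (Kaczmarz) iteration for a linear tomographic model
    # A x = b, on a tiny synthetic system.
    import numpy as np

    def art(A, b, n_iters=20, relax=0.5):
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                r = b[i] - A[i] @ x          # residual of one projection ray
                x += relax * r / row_norms[i] * A[i]
            x = np.clip(x, 0, None)          # non-negativity of attenuation
        return x

    rng = np.random.default_rng(0)
    A = rng.random((80, 40))                 # toy projection matrix
    x_true = rng.random(40)
    b = A @ x_true
    print("reconstruction error:", np.linalg.norm(art(A, b) - x_true))
    ```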

  16. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Geological body structure is the product of geological evolution over time and appears in 3D configuration in the natural world. However, many geologists still record and process their geological data in 2D or 1D form, which results in the loss of a large quantity of spatial data. One of the reasons is that current methods are limited in how they can express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model provides unified expression and storage of geological data and geometric models. In addition, it supports visualizing large-scale geological datasets by rapidly building multi-resolution geological models, which meets the demands of the operation, analysis and interpretation of 3D geological objects. This shows that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and offers a good way to solve the problem of integrating and sharing geological spatial data.

  17. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    ZHANG LiQiang; TAN YuMin; KANG ZhiZhong; RUI XiaoPing; ZHAO YuanYuan; LIU Liu

    2009-01-01

    Geological body structure is the product of geological evolution over time and appears in 3D configuration in the natural world. However, many geologists still record and process their geological data in 2D or 1D form, which results in the loss of a large quantity of spatial data. One of the reasons is that current methods are limited in how they can express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model provides unified expression and storage of geological data and geometric models. In addition, it supports visualizing large-scale geological datasets by rapidly building multi-resolution geological models, which meets the demands of the operation, analysis and interpretation of 3D geological objects. This shows that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and offers a good way to solve the problem of integrating and sharing geological spatial data.

  18. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    Directory of Open Access Journals (Sweden)

    Sungdae Sim

    2012-12-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  19. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    Science.gov (United States)

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
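
    A minimal sketch of the height-histogram idea: the dominant height bin is taken as the ground level and points close to it are labelled ground. The Gibbs-Markov random field refinement and the voxel-map bookkeeping of the papers above are omitted, and the thresholds are illustrative.

    ```python
    # Sketch: height-histogram ground labelling for a 3D point set.
    import numpy as np

    def ground_mask(points, bin_size=0.1, tolerance=0.3):
        """Label points whose height falls near the dominant (ground) bin."""
        z = points[:, 2]
        bins = np.arange(z.min(), z.max() + bin_size, bin_size)
        hist, edges = np.histogram(z, bins=bins)
        ground_z = edges[np.argmax(hist)]        # most populated height bin
        return np.abs(z - ground_z) < tolerance

    pts = np.random.default_rng(0).uniform([-5, -5, 0], [5, 5, 3], size=(2000, 3))
    pts[:1500, 2] = 0.05 * np.random.default_rng(1).standard_normal(1500)  # flat ground
    mask = ground_mask(pts)
    print("ground points:", int(mask.sum()), "non-ground:", int((~mask).sum()))
    ```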

  20. Object Extraction from Architecture Scenes through 3D Local Scanned Data Analysis

    Directory of Open Access Journals (Sweden)

    NING, X.

    2012-08-01

    Terrestrial laser scanning has become a standard way of acquiring 3D data of complex outdoor objects. Processing the huge number of points and recognizing the different objects they contain is a new challenge, especially in cases where multiple objects are included in a scene. In this paper, a new approach is proposed to classify objects through an analysis of the shape information in the point cloud data. The scanned scene is structured using k nearest neighbors (k-NN), and a similarity measure between points is then defined to cluster points with similar primitive shapes. Moreover, we introduce a combined geometrical criterion to refine the over-segmented results. To obtain more detailed information, a residual-based segmentation is adopted to refine the segmentation of architectural objects into more parts with different shape properties. Experimental results demonstrate that this approach is a robust way to extract different objects from the scenes.

  1. Color and size interactions in a real 3D object similarity task.

    Science.gov (United States)

    Ling, Yazhu; Hurlbert, Anya

    2004-08-31

    In the natural world, objects are characterized by a variety of attributes, including color and shape. The contributions of these two attributes to object recognition are typically studied independently of each other, yet they are likely to interact in natural tasks. Here we examine whether color and size (a component of shape) interact in a real three-dimensional (3D) object similarity task, using solid domelike objects whose distinct apparent surface colors are independently controlled via spatially restricted illumination from a data projector hidden to the observer. The novel experimental setup preserves natural cues to 3D shape from shading, binocular disparity, motion parallax, and surface texture cues, while also providing the flexibility and ease of computer control. Observers performed three distinct tasks: two unimodal discrimination tasks, and an object similarity task. Depending on the task, the observer was instructed to select the indicated alternative object which was "bigger than," "the same color as," or "most similar to" the designated reference object, all of which varied in both size and color between trials. For both unimodal discrimination tasks, discrimination thresholds for the tested attribute (e.g., color) were increased by differences in the secondary attribute (e.g., size), although this effect was more robust in the color task. For the unimodal size-discrimination task, the strongest effects of the secondary attribute (color) occurred as a perceptual bias, which we call the "saturation-size effect": Objects with more saturated colors appear larger than objects with less saturated colors. In the object similarity task, discrimination thresholds for color or size differences were significantly larger than in the unimodal discrimination tasks. We conclude that color and size interact in determining object similarity, and are effectively analyzed on a coarser scale, due to noise in the similarity estimates of the individual attributes

  2. Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras

    Directory of Open Access Journals (Sweden)

    M. Martínez-Zarzuela

    2014-02-01

    This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which becomes also an issue for the distributed system TCP/IP communications. Since traffic volume is too high, 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress color maps. Information from different sources is collected into a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the skeletons detected locally by each Kinect, so that monitoring of people is robust to self and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.

  3. Distributed System for 3D Remote Monitoring Using KINECT Depth Cameras

    Directory of Open Access Journals (Sweden)

    M. Martinez-Zarzuela

    2014-01-01

    This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which becomes also an issue for the distributed system TCP/IP communications. Since traffic volume is too high, 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress color maps. Information from different sources is collected into a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the skeletons detected locally by each Kinect, so that monitoring of people is robust to self and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.
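
    The self-coding step can be illustrated with one plausible packing of the 16-bit depth map into two 8-bit channels of an RGB image, so that a standard image/video codec can carry it. The articles do not specify this exact layout, and a lossy codec would require a more error-tolerant packing.

    ```python
    # Sketch: packing 16-bit Kinect depth into an 8-bit, 3-channel image
    # (lossless only if the codec itself is lossless).
    import numpy as np

    def depth_to_rgb(depth_mm):
        """Split 16-bit depth into a high byte (R) and a low byte (G); B unused."""
        d = depth_mm.astype(np.uint16)
        rgb = np.zeros(d.shape + (3,), dtype=np.uint8)
        rgb[..., 0] = d >> 8
        rgb[..., 1] = d & 0xFF
        return rgb

    def rgb_to_depth(rgb):
        return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1]

    depth = np.random.default_rng(0).integers(400, 4500, size=(480, 640), dtype=np.uint16)
    assert np.array_equal(rgb_to_depth(depth_to_rgb(depth)), depth)
    ```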

  4. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    OpenAIRE

    Bagci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automati...

  5. A joint multi-view plus depth image coding scheme based on 3D-warping

    DEFF Research Database (Denmark)

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone;

    2011-01-01

    Free viewpoint video applications and autostereoscopic displays require the transmission of multiple views of a scene together with depth maps. Current compression and transmission solutions just handle these two data streams as separate entities. However, depth maps contain key information on th...... performances. Ad-hoc solutions for occluded areas are also provided. Experimental results show that the proposed joint texture-depth compression approach is able to outperform the state-of-the-art H.264 MVC coding standard performances at low bit rates....

  6. Multi-frequency color-marked fringe projection profilometry for fast 3D shape measurement of complex objects.

    Science.gov (United States)

    Jiang, Chao; Jia, Shuhai; Dong, Jun; Bao, Qingchen; Yang, Jia; Lian, Qin; Li, Dichen

    2015-09-21

    We propose a novel multi-frequency color-marked fringe projection profilometry approach to measure the 3D shape of objects with depth discontinuities. A digital micromirror device projector is used to project a color map consisting of a series of different-frequency color-marked fringe patterns onto the target object. We use a chromaticity curve to calculate the color change caused by the height of the object. The related algorithm to measure the height is also described in this paper. To improve the measurement accuracy, a chromaticity curve correction method is presented. This correction method greatly reduces the influence of color fluctuations and measurement error on the chromaticity curve and the calculation of the object height. The simulation and experimental results validate the utility of our method. Our method avoids the conventional phase shifting and unwrapping process, as well as the independent calculation of the object height required by existing techniques. Thus, it can be used to measure complex and dynamic objects with depth discontinuities. These advantages are particularly promising for industrial applications. PMID:26406621

  7. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and high reliability for various motion analysis tasks in technical and biomechanical applications.

  8. Volumetric Next-best-view Planning for 3D Object Reconstruction with Positioning Error

    Directory of Open Access Journals (Sweden)

    J. Irving Vasquez-Gomez

    2014-10-01

    Three-dimensional (3D) object reconstruction is the process of building a 3D model of a real object. This task is performed by taking several scans of an object from different locations (views). Due to the limited field of view of the sensor and the object's self-occlusions, it is a difficult problem to solve. In addition, sensor positioning by robots is not perfect, making the actual view different from the expected one. We propose a next-best-view (NBV) algorithm that determines each view needed to reconstruct an arbitrary object. Furthermore, we propose a method to deal with the uncertainty in sensor positioning. The algorithm fulfills all the constraints of a reconstruction process, such as new information, positioning constraints, sensing constraints and registration constraints. Moreover, it improves the scan quality and reduces the navigation distance. The algorithm is based on a search-based paradigm in which a set of candidate views is generated and each candidate view is then evaluated to determine which one is the best. To deal with positioning uncertainty, we propose a second stage which re-evaluates the views according to their neighbours, so that the best view is one that lies within a region of good views. The results of simulations and comparisons with previous approaches are presented.
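
    A toy utility function for ranking candidate views, in the spirit of the search-based evaluation described above: each candidate is scored by how many still-unknown voxels fall inside its sensing cone, minus a travel-distance penalty. The field of view, range and weights are assumptions, and the paper's full constraint set and positioning-uncertainty re-evaluation are not modelled.

    ```python
    # Sketch: scoring candidate views by the unknown volume they would observe.
    import numpy as np

    def view_score(view_pos, view_dir, unknown_voxels, current_pos,
                   fov_cos=0.866, max_range=2.0, w_dist=0.1):
        rel = unknown_voxels - view_pos
        dist = np.linalg.norm(rel, axis=1)
        ahead = (rel @ view_dir) / np.maximum(dist, 1e-9)   # cosine to view axis
        visible = np.logical_and(dist < max_range, ahead > fov_cos).sum()
        return visible - w_dist * np.linalg.norm(view_pos - current_pos)

    unknown = np.random.default_rng(0).uniform(-1, 1, size=(5000, 3))
    candidates = [(np.array([1.5, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])),
                  (np.array([0.0, 1.5, 0.0]), np.array([0.0, -1.0, 0.0]))]
    scores = [view_score(p, d, unknown, current_pos=np.zeros(3)) for p, d in candidates]
    print("best candidate:", int(np.argmax(scores)))
    ```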

  9. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    Science.gov (United States)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points at sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information while preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design a complete system that takes care of all thinkable instances now and in the future within one constrained software design. On several occasions we have been advocating a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  10. Recognition of 3-D objects based on Markov random field models

    Institute of Scientific and Technical Information of China (English)

    HUANG Ying; DING Xiao-qing; WANG Sheng-jin

    2006-01-01

    The recognition of 3-D objects is quite a difficult task for computer vision systems. This paper presents a new object recognition framework, which utilizes densely sampled grids with different resolutions to represent the local information of the input image. A Markov random field model is then created to model the geometric distribution of the object key nodes. Flexible matching, which aims to find the accurate correspondence map between the key points of two images, is performed by combining the local similarities and the geometric relations using the highest confidence first method. Afterwards, a global similarity is calculated for object recognition. Experimental results on the COIL-100 object database, which consists of 7200 images of 100 objects, are presented. When the number of templates per object varies from 4, 8, 18 to 36, with the remaining images composing the test sets, the object recognition rates are 95.75%, 99.30%, 100.0% and 100.0%, respectively. This recognition performance is much better than that of the other cited references, which indicates that our approach is well suited for appearance-based object recognition.

  11. Adaptive Optics Concept For Multi-Objects 3D Spectroscopy on ELTs

    CERN Document Server

    Neichel, B; Puech, M; Conan, J M; Lelouarn, M; Gendron, E; Hammer, F; Rousset, G; Jagourel, P; Bouchet, P

    2005-01-01

    In this paper, we present a first comparison of different Adaptive Optics (AO) concepts for reaching a given scientific specification for 3D spectroscopy on an Extremely Large Telescope (ELT). We consider a range of 30%-50% of Ensquared Energy (EE) in the H band (1.65 um) within an aperture size of 25 to 100 mas to be representative of the scientific requirements. From these preliminary choices, different kinds of AO concepts are investigated: Ground Layer Adaptive Optics (GLAO), Multi-Object AO (MOAO) and Laser Guide Stars AO (LGS). Using Fourier-based simulations, we study the performance of these AO systems as a function of telescope diameter.

  12. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    Science.gov (United States)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning methods. Laser scanning and photogrammetry are the two main methods used. Laser scanning requires a video camera and a laser source, while photogrammetry requires a digital still camera with high pixel resolution. In some 3D modeling tasks, the two methods are integrated to obtain satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated low-cost device. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, offer both still photo recording at more than 10M pixels and full 1080p HD movie recording, so an integrated scanning system can be designed around such a camera. A square plate glued with coded marks is used to place the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and serve as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes can be obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. Laser scanning results in a dense point cloud, which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusing the feature points, rough volume and dense point cloud. The design

  13. Polarizability of 2D and 3D conducting objects using method of moments

    CERN Document Server

    Shahpari, Morteza; Lewis, Andrew

    2014-01-01

    Fundamental antenna limits of the gain-bandwidth product are derived from polarizability calculations. This electrostatic technique has significant value in many antenna evaluations. Polarizability is not available in closed form for most antenna shapes and no commercial electromagnetic packages have this facility. Numerical computation of the polarizability for arbitrary conducting bodies was undertaken using an unstructured triangular mesh over the surface of 2D and 3D objects. Numerical results compare favourably with analytical solutions and can be implemented efficiently for large structures of arbitrary shape.

  14. Prototyping a sensor enabled 3D citymodel on geospatial managed objects

    DEFF Research Database (Denmark)

    Kjems, Erik; Kolář, Jan

    2013-01-01

    resources in the most sustainable way. Using 3D models with consistent object definitions give us the possibility to avoid troublesome abstractions of reality, and design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional...... software development approach based on major software packages and traditional data exchange. The data stream is varying from urban domain to urban domain and from system to system why it is almost impossible to design a complete system taking care of all thinkable instances now and in the future within...

  15. 3D Skeleton model derived from Kinect Depth Sensor Camera and its application to walking style quality evaluations

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-07-01

    Feature extraction for gait recognition has been studied widely. Approaches to this task fall into two categories: model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters via modeling or tracking body components such as limbs, legs, arms and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes and have a low computational cost compared to model-based approaches; however, they are usually not robust to viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors are expensive and affordable mainly for large animation studios. Fortunately, the Kinect camera, equipped with a depth sensor, is now available on the market at a very low price compared to any mocap device. Its accuracy is not as good as that of the expensive devices, but with some preprocessing the jitter and noise in the 3D skeleton points can be removed. Our proposed method is a model-based feature extraction approach that we call the 3D skeleton model. Using a 3D skeleton model for gait extraction is new, considering that all previous models use 2D skeletons. Its advantage is that it yields accurate 3D coordinates for each skeleton point rather than only 2D points. We use the Kinect to obtain the depth data and Ipisoft mocap software to extract the 3D skeleton model from the Kinect video. The experimental results show 86.36% correctly classified instances using an SVM.
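
    A sketch of the final classification stage, assuming 3D joint sequences have already been extracted: simple per-joint statistics are fed to an RBF SVM and scored by cross-validation. The data here are synthetic and the feature set is illustrative, not the paper's.

    ```python
    # Sketch: classifying walking styles from 3D skeleton joint sequences with an SVM.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def sequence_features(seq):
        """Flatten a (frames x joints x 3) sequence into mean/std joint statistics."""
        return np.concatenate([seq.mean(axis=0).ravel(), seq.std(axis=0).ravel()])

    rng = np.random.default_rng(0)
    sequences = rng.normal(size=(120, 60, 20, 3))    # 120 walks, 60 frames, 20 joints
    labels = rng.integers(0, 2, size=120)            # e.g. normal vs. abnormal gait
    X = np.array([sequence_features(s) for s in sequences])
    print("cross-validated accuracy:", cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
    ```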

  16. Calibration target reconstruction for 3-D vision inspection system of large-scale engineering objects

    Science.gov (United States)

    Yin, Yongkai; Peng, Xiang; Guan, Yingjian; Liu, Xiaoli; Li, Ameng

    2010-11-01

    It is usually difficult to calibrate a 3-D vision inspection system employed to measure large-scale engineering objects. One of the challenges is how to build up a large and precise calibration target in situ. In this paper, we present a calibration target reconstruction strategy to solve this problem. First, we choose one of the engineering objects to be inspected as a calibration target and paste coded marks on its surface. Next, we locate and decode the marks to obtain homologous points. From multiple camera images, the fundamental matrix between adjacent images can be estimated; the essential matrix can then be derived using the a priori known camera intrinsic parameters and decomposed to obtain the camera extrinsic parameters. Finally, we obtain initial 3D coordinates by binocular stereo reconstruction and optimize them with bundle adjustment, taking lens distortions into account, leading to a high-precision calibration target. This reconstruction strategy has been applied to the inspection of an industrial project, on which the proposed method is successfully validated.
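
    The fundamental-matrix/essential-matrix chain described above can be sketched with OpenCV as follows; the intrinsic matrix and the point arrays are placeholders, and the bundle-adjustment refinement is omitted.

    ```python
    # Sketch of estimating relative camera pose from matched coded-mark centres.
    import numpy as np
    import cv2

    K = np.array([[2000.0, 0.0, 960.0],      # assumed camera intrinsics
                  [0.0, 2000.0, 540.0],
                  [0.0, 0.0, 1.0]])

    def relative_pose(pts1, pts2):
        """Estimate R, t between two views from N x 2 arrays of matched points."""
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        E = K.T @ F @ K                       # essential matrix from F and the intrinsics
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        return R, t

    # pts1, pts2 would be decoded mark positions in two adjacent images:
    # R, t = relative_pose(pts1, pts2)
    ```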

  17. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    Science.gov (United States)

    Zhao, Xianling; Liu, Jiansheng; Zhang, Huayu; Wu, Yingchun

    2015-12-01

    The visible light radiated by some high temperature objects (less than 1200 °C) almost lies in the red and infrared waves. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in this present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapping phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images filtered by a low-pass filter are used to calculate the fringe order. Consequently, the 3D shape of a high temperature object is obtained by the unwrapping phase and the calibration parameter matrixes of the DLP projector and 3-CCD camera. The experimental results show that the unwrapping phase is completely corrected with the filtering method by removing the high frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1 : 1000.

  18. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    International Nuclear Information System (INIS)

    The visible light radiated by some high temperature objects (less than 1200 °C) almost lies in the red and infrared waves. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in this present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapping phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images filtered by a low-pass filter are used to calculate the fringe order. Consequently, the 3D shape of a high temperature object is obtained by the unwrapping phase and the calibration parameter matrixes of the DLP projector and 3-CCD camera. The experimental results show that the unwrapping phase is completely corrected with the filtering method by removing the high frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1 : 1000. (paper)
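
    A minimal illustration of the blue-channel processing: synthetic pi/2-shifted fringe images stand in for the captured B-channel images, a Gaussian low-pass filter removes high-frequency noise, and a standard four-step formula recovers the wrapped phase. The calibration matrices and unwrapping of the actual system are not reproduced.

    ```python
    # Sketch: low-pass filtering of the B channel and four-step wrapped phase.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def four_step_phase(images):
        """Wrapped phase from four pi/2-shifted fringe images."""
        i1, i2, i3, i4 = images
        return np.arctan2(i4 - i2, i1 - i3)

    rng = np.random.default_rng(0)
    x = np.linspace(0, 16 * np.pi, 640)
    fringes = [np.tile(0.5 + 0.5 * np.cos(x + k * np.pi / 2), (480, 1)) for k in range(4)]
    captured_b = [f + 0.05 * rng.standard_normal(f.shape) for f in fringes]  # noisy B channel
    filtered = [gaussian_filter(f, sigma=2) for f in captured_b]             # low-pass step
    phase = four_step_phase(filtered)
    print(phase.shape, float(phase.min()), float(phase.max()))
    ```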

  19. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    Science.gov (United States)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional and usually non-referenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used

  20. Lapse-time-dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Science.gov (United States)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-10-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: first, we evaluate the contribution of surface- and body-wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time-dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Second, we compare the lapse-time behaviour in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.

  1. Lapse-time dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Science.gov (United States)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-07-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: firstly, we evaluate the contribution of surface and body wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Secondly, we compare the lapse-time behavior in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.
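
    The linear-combination statement in the two records above can be written schematically as below, with alpha(t) a lapse-time-dependent partition coefficient between body- and surface-wave sensitivity kernels (notation assumed here, not the authors').

    ```latex
    K(z, t) \approx \alpha(t)\, K_{\mathrm{body}}(z) + \bigl(1 - \alpha(t)\bigr)\, K_{\mathrm{surf}}(z)
    ```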

  2. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
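
    A CPU-side toy version of the distance-field idea, using SciPy's Euclidean distance transform on a voxelized solid: local thickness at an interior voxel is approximated as twice its distance to the nearest surface. The GPU construction, polygon culling and ray-casting visualization of the paper are not reproduced.

    ```python
    # Sketch: voxel distance field and a crude per-voxel thickness estimate.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    solid = np.zeros((64, 64, 64), dtype=bool)
    solid[16:48, 16:48, 28:36] = True        # a plate-like toy object, 8 voxels thick

    voxel_size = 1.0                         # mm per voxel (assumed)
    dist_inside = distance_transform_edt(solid) * voxel_size
    thickness = 2.0 * dist_inside            # rough thickness at interior voxels
    print("max estimated thickness (mm):", thickness.max())   # ~8 for the toy plate
    ```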

  3. Fully integrated system-on-chip for pixel-based 3D depth and scene mapping

    Science.gov (United States)

    Popp, Martin; De Coi, Beat; Thalmann, Markus; Gancarz, Radoslav; Ferrat, Pascal; Dürmüller, Martin; Britt, Florian; Annese, Marco; Ledergerber, Markus; Catregn, Gion-Pol

    2012-03-01

    We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS. This integration makes it possible to achieve a Quantum Efficiency (QE) of >80% in the full wavelength band from 520nm up to 900nm as well as very high timing precision in the sub-ns range which is needed for exact detection of the phase delay. The SoC features 8x8 pixels and includes all necessary sub-components such as ToF pixel array, voltage generation and regulation, non-volatile memory for configuration, LED driver for active illumination, digital SPI interface for easy communication, column based 12bit ADC converters, PLL and digital data processing with temporary data storage. The system can be operated at up to 100 frames per second.
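
    For reference, the standard phase-based ToF relation that such a sensor relies on (a textbook relation, not a datasheet value for the epc600), with f_mod the modulation frequency and Delta-phi the measured phase difference:

    ```latex
    d = \frac{c}{4\pi f_{\mathrm{mod}}}\,\Delta\varphi,
    \qquad d_{\mathrm{max}} = \frac{c}{2 f_{\mathrm{mod}}}
    ```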

  4. Active learning in the lecture theatre using 3D printed objects [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    David P. Smith

    2016-06-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  5. Active learning in the lecture theatre using 3D printed objects.

    Science.gov (United States)

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  6. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Directory of Open Access Journals (Sweden)

    Feng Xu

    Full Text Available Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies.

  7. An alternative 3D inversion method for magnetic anomalies with depth resolution

    Directory of Open Access Journals (Sweden)

    M. Chiappini

    2006-06-01

    Full Text Available This paper presents a new method to invert magnetic anomaly data in a variety of non-complex contexts when a priori information about the sources is not available. The region containing magnetic sources is discretized into a set of homogeneously magnetized rectangular prisms, polarized along a common direction. The magnetization distribution is calculated by solving an underdetermined linear system, and is accomplished through the simultaneous minimization of the norm of the solution and the misfit between the observed and the calculated field. Our algorithm makes use of a dipolar approximation to compute the magnetic field of the rectangular blocks. We show how this approximation, in conjunction with other correction factors, presents numerous advantages in terms of computing speed and depth resolution, and does not affect significantly the success of the inversion. The algorithm is tested on both synthetic and real magnetic datasets.
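
    The simultaneous minimization of the solution norm and the data misfit described above corresponds to a damped minimum-norm (Tikhonov) solution; a minimal dense-matrix sketch is given below, with G (forward operator), d (observed anomaly) and the damping factor alpha as assumed names, and without the paper's dipolar approximation or correction factors.

        import numpy as np

        def damped_minimum_norm(G, d, alpha):
            """Solve the underdetermined system G m = d by minimizing
            ||d - G m||^2 + alpha^2 ||m||^2 (minimum-norm Tikhonov solution)."""
            n_data = G.shape[0]
            # For an underdetermined system it is cheaper to solve in data space.
            A = G @ G.T + (alpha ** 2) * np.eye(n_data)
            return G.T @ np.linalg.solve(A, d)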

  8. 3D OBJECT COORDINATES EXTRACTION BY RADARGRAMMETRY AND MULTI STEP IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    A. Eftekhari

    2013-09-01

    Full Text Available Nowadays, with high-resolution SAR imaging systems such as Radarsat-2, TerraSAR-X and COSMO-skyMed, three-dimensional terrain data extraction from SAR images is growing. InSAR and radargrammetry are the two most common approaches for extracting 3D object coordinates from SAR images. Research has shown that extracting terrain elevation data with the satellite repeat-pass InSAR technique is problematic due to atmospheric factors and the lack of coherence between the images in areas with dense vegetation cover, so the radargrammetry technique can be effective. Height derivation by radargrammetry generally consists of two stages: image matching and space intersection. In this paper we propose a multi-stage algorithm founded on the combination of feature-based and area-based image matching. The RPCs calculated for each image are then used to extract 3D coordinates at the matched points. Finally, the calculated coordinates are compared with coordinates extracted from a 1 m DEM. The results show a root mean square error of 3.09 m over 360 points. A pair of spotlight TerraSAR-X images of JAM (Iran) is used in this article.
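
    A hedged sketch of the two-step matching idea, feature-based candidates refined by area-based normalized cross-correlation, is shown below using OpenCV on grayscale images; the detectors, window sizes and thresholds are illustrative, and the paper's RPC-based space intersection is not reproduced.

        import cv2
        import numpy as np

        def two_step_matching(img1, img2, win=21, search=15, min_ncc=0.7):
            """Coarse feature-based matching refined by area-based NCC (illustrative parameters)."""
            orb = cv2.ORB_create(4000)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

            h = win // 2
            refined = []
            for m in matches:
                x1, y1 = map(int, k1[m.queryIdx].pt)
                x2, y2 = map(int, k2[m.trainIdx].pt)
                # Skip candidates too close to the image borders.
                if (x1 < h or y1 < h or x1 + h >= img1.shape[1] or y1 + h >= img1.shape[0] or
                        x2 < h + search or y2 < h + search or
                        x2 + h + search >= img2.shape[1] or y2 + h + search >= img2.shape[0]):
                    continue
                tmpl = img1[y1 - h:y1 + h + 1, x1 - h:x1 + h + 1]
                area = img2[y2 - h - search:y2 + h + search + 1, x2 - h - search:x2 + h + search + 1]
                # Area-based refinement: normalized cross-correlation inside the search window.
                ncc = cv2.matchTemplate(area, tmpl, cv2.TM_CCOEFF_NORMED)
                _, score, _, loc = cv2.minMaxLoc(ncc)
                if score >= min_ncc:
                    refined.append(((x1, y1), (x2 - search + loc[0], y2 - search + loc[1])))
            return refined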

  9. Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia

    Directory of Open Access Journals (Sweden)

    Bambang Eka Purnama

    2013-04-01

    Full Text Available Pacitan has a wide range of tourism activities; among them are its historical attractions. These sites carry educational, historical and cultural value and must be maintained and preserved as a tourism asset of Kabupaten Pacitan. However, the historical sites are rarely visited, and many students do not understand the history behind each of them. An interactive 3D virtual information medium for Pacitan's historical tourism was therefore created in the form of an interactive CD application. The purpose of the application is to introduce Pacitan's historical tours to students and the community, and to provide an interactive overview of the history of the existing tourist sites in Pacitan. The benefit of this research is that students and the public will get to know the history of Pacitan's historical attractions; the application also serves as a medium for introducing these attractions and for preserving the historical sights. The development methods used for the interactive multimedia 3D virtual application of Pacitan's historical attractions were a literature study, observation and interviews. The design was produced using 3ds Max 2010, Adobe Director 11.5, Adobe Photoshop CS3 and Corel Draw. The result of this research is an interactive information medium that provides knowledge about the history of Pacitan.

  10. Electromagnetic 3D subsurface imaging with source sparsity for a synthetic object

    CERN Document Server

    Pursiainen, Sampsa

    2016-01-01

    This paper concerns electromagnetic 3D subsurface imaging in connection with sparsity of signal sources. We explored an imaging approach that can be implemented in situations that allow obtaining a large amount of data over a surface or a set of orbits but at the same time require sparsity of the signal sources. Characteristic to such a tomography scenario is that it necessitates the inversion technique to be genuinely three-dimensional: For example, slicing is not possible due to the low number of sources. Here, we primarily focused on astrophysical subsurface exploration purposes. As an example target of our numerical experiments we used a synthetic small planetary object containing three inclusions, e.g. voids, of the size of the wavelength. A tetrahedral arrangement of source positions was used, it being the simplest symmetric point configuration in 3D. Our results suggest that somewhat reliable inversion results can be produced within the present a priori assumptions, if the data can be recorded at a spe...

  11. An overview of 3D topology for LADM-based objects

    NARCIS (Netherlands)

    Zulkifli, N.A.; Rahman, A.A.; Van Oosterom, P.J.M.

    2015-01-01

    This paper reviews 3D topology within Land Administration Domain Model (LADM) international standard. It is important to review characteristic of the different 3D topological models and to choose the most suitable model for certain applications. The characteristic of the different 3D topological mod

  12. WAVES GENERATED BY A 3D MOVING BODY IN A TWO-LAYER FLUID OF FINITE DEPTH

    Institute of Scientific and Technical Information of China (English)

    ZHU Wei; YOU Yun-xiang; MIAO Guo-ping; ZHAO Feng; ZHANG Jun

    2005-01-01

    This paper is concerned with the waves generated by a 3-D body advancing beneath the free surface with constant speed in a two-layer fluid of finite depth. By applying Green's theorem, a layered integral equation system based on the Rankine source for the perturbed velocity potential generated by the moving body was derived within potential flow theory. A four-node isoparametric element method was used to solve the layered integral equation system. The surface and interface waves generated by a moving ball were calculated numerically. The results were compared with the analytical results for a moving source with constant velocity.
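
    For reference, the Rankine-source boundary-integral representation assumed in such formulations writes the perturbed potential as a distribution of simple sources over the body and layer surfaces; only this generic form is shown, not the paper's layered two-fluid kernel.

        \phi(P) \;=\; -\frac{1}{4\pi}\iint_{S}\frac{\sigma(Q)}{r(P,Q)}\,\mathrm{d}S_Q,
        \qquad r(P,Q)=\lVert P-Q\rVert ,

    where the unknown source strength sigma is determined from the body and interface boundary conditions.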

  13. Simulating hydroplaning of submarine landslides by quasi 3D depth averaged finite element method

    Science.gov (United States)

    De Blasio, Fabio; Battista Crosta, Giovanni

    2014-05-01

    Subaqueous debris flows/submarine landslides, both in the open ocean as well as in fresh waters, exhibit extremely high mobility, quantified by a ratio between vertical and horizontal displacement of the order of 0.01 or even much less. It is possible to simulate subaqueous debris flows with small-scale experiments along a flume or a pool using a cohesive mixture of clay and sand. The results have shown a strong enhancement of runout and velocity compared to the case in which the same debris flow travels without water, and have indicated hydroplaning as a possible explanation (Mohrig et al. 1998). Hydroplaning starts when the snout of the debris flow travels sufficiently fast. This generates lift forces on the front of the debris flow exceeding the self-weight of the sediment, which thus begins to travel detached from the bed, literally hovering instead of flowing. Clearly, the resistance to flow plummets because the drag stress against water is much smaller than the shear strength of the material. The consequence is a dramatic increase of the debris flow speed and runout. Does the process occur also for subaqueous landslides and debris flows in the ocean, something twelve orders of magnitude larger than the experimental ones? Obviously, no experiment will ever be capable of replicating this size; one needs to rely on numerical simulations. Results extending a depth-integrated numerical model for debris flows (Imran et al., 2001) indicate that hydroplaning is possible (De Blasio et al., 2004), but more should be done, especially with alternative numerical methodologies. In this work, finite element methods are used to simulate hydroplaning using the code MADflow (Chen, 2014) adopting a depth-averaged solution. We ran some simulations on the small scale of the laboratory experiments, and secondly

  14. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    CERN Document Server

    Bagci, Ulas; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate th...

  15. Ball-scale based hierarchical multi-object recognition in 3D medical images

    Science.gov (United States)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  16. Extraction and classification of 3D objects from volumetric CT data

    Science.gov (United States)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for Explosive Detection System (EDS) using our multistage Segmentation Carving (SC) followed by Support Vector Machine (SVM) classifier. The multi-stage Segmentation and Carving (SC) step extracts all suspect 3-D objects. The feature vector is then constructed for all extracted objects and the feature vector is classified by the Support Vector Machine (SVM) previously learned using a set of ground truth threat and benign objects. The learned SVM classifier has shown to be effective in classification of different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter, beam hardening as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable for including newly emerging threat materials as well as for accommodating data from newly developing sensor technologies. Efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristics (ROC) curve that relates Probability of Detection (PD) as a function of Probability of False Alarm (PFA). The tests performed using CT data of passenger bags show excellent performance characteristics.
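
    A minimal, self-contained sketch of the classification and ROC stage using scikit-learn is shown below, with synthetic feature vectors standing in for the segmented-object features; the segmentation-and-carving feature extraction itself is not reproduced, and all numbers are illustrative.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_curve, auc

        rng = np.random.default_rng(0)
        # Synthetic stand-ins: 200 benign and 200 threat feature vectors (8 features each).
        X = np.vstack([rng.normal(0.0, 1.0, (200, 8)), rng.normal(1.0, 1.0, (200, 8))])
        y = np.concatenate([np.zeros(200), np.ones(200)])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

        # ROC: probability of detection (TPR) versus probability of false alarm (FPR).
        scores = clf.decision_function(X_te)
        pfa, pd_rate, _ = roc_curve(y_te, scores)
        print("area under the ROC curve:", auc(pfa, pd_rate))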

  17. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    Energy Technology Data Exchange (ETDEWEB)

    Mul, Frits F M de; George, Nibu A; Qiu Qingjun; Rakhorst, Gerhard; Schutte, Harm K [Department of Biomedical Engineering BMSA, Faculty of Medicine, University Medical Center Groningen UMCG, University of Groningen, PO Box 196, 9700 AD Groningen (Netherlands)], E-mail: ffm@demul.net

    2009-07-07

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents, for the first time, a direct means of validating these calculations. This may start a new field in vocal folds research.

  18. Disparity-defined objects moving in depth do not elicit three-dimensional shape constancy.

    Science.gov (United States)

    Scarfe, P; Hibbard, P B

    2006-05-01

    Observers generally fail to recover three-dimensional shape accurately from binocular disparity. Typically, depth is overestimated at near distances and underestimated at far distances [Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351-1360]. A simple prediction from this is that disparity-defined objects should appear to expand in depth when moving towards the observer, and compress in depth when moving away. However, additional information is provided when an object moves from which 3D Euclidean shape can be recovered, be this through the addition of structure from motion information [Richards, W. (1985). Structure from stereo and motion. Journal of the Optical Society of America A, 2, 343-349], or the use of non-generic strategies [Todd, J. T., & Norman, J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception and Psychophysics, 65, 31-47]. Here, we investigated shape constancy for objects moving in depth. We found that to be perceived as constant in shape, objects needed to contract in depth when moving toward the observer, and expand in depth when moving away, countering the effects of incorrect distance scaling (Johnston, 1991). This is a striking example of the failure of shape constancy, but one that is predicted if observers neither accurately estimate object distance in order to recover Euclidean shape, nor are able to base their responses on a simpler processing strategy.

  19. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3-D refractive index maps

    CERN Document Server

    Kim, Kyoohyun

    2016-01-01

    Optical trapping can be used to manipulate the three-dimensional (3-D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3-D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and the extensive computations. Here, we achieved the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3-D refractive index (RI) distribution of samples. Engineering the 3-D light field distribution of a trapping beam based on the measured 3-D RI map of samples generates a light mould, which can be used to manipulate colloidal and biological samples which have arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can ...

  20. 3D Micro-PIXE at atmospheric pressure: A new tool for the investigation of art and archaeological objects

    International Nuclear Information System (INIS)

    The paper describes a novel experiment characterized by the development of a confocal geometry in an external Micro-PIXE set-up. The position of X-ray optics in front of the X-ray detector and its proper alignment with respect to the proton micro-beam focus provided the possibility of carrying out 3D Micro-PIXE analysis. As a first application, depth intensity profiles of the major elements that compose the patina layer of a quaternary bronze alloy were measured. A simulation approach of the 3D Micro-PIXE data deduced elemental concentration profiles in rather good agreement with corresponding results obtained by electron probe micro-analysis from a cross-sectioned patina sample. With its non-destructive and depth-resolving properties, as well as its feasibility in atmospheric pressure, 3D Micro-PIXE seems especially suited for investigations in the field of cultural heritage

  1. Nanometer depth resolution in 3D topographic analysis of drug-loaded nanofibrous mats without sample preparation.

    Science.gov (United States)

    Paaver, Urve; Heinämäki, Jyrki; Kassamakov, Ivan; Hæggström, Edward; Ylitalo, Tuomo; Nolvi, Anton; Kozlova, Jekaterina; Laidmäe, Ivo; Kogermann, Karin; Veski, Peep

    2014-02-28

    We showed that scanning white light interferometry (SWLI) can provide nanometer depth resolution in 3D topographic analysis of electrospun drug-loaded nanofibrous mats without sample preparation. The method permits rapidly investigating geometric properties (e.g. fiber diameter, orientation and morphology) and surface topography of drug-loaded nanofibers and nanomats. Electrospun nanofibers of a model drug, piroxicam (PRX), and hydroxypropyl methylcellulose (HPMC) were imaged. Scanning electron microscopy (SEM) served as a reference method. SWLI 3D images with a 29 nm by 29 nm active pixel size were obtained over a 55 μm × 40 μm area. The thickness of the drug-loaded non-woven nanomats was uniform, ranging from 2.0 μm to 3.0 μm (SWLI), and independent of the ratio between HPMC and PRX. The average diameters (n=100, SEM) for drug-loaded nanofibers were 387 ± 125 nm (HPMC and PRX 1:1), 407 ± 144 nm (HPMC and PRX 1:2), and 290 ± 100 nm (HPMC and PRX 1:4). We found advantages and limitations in both techniques. SWLI permits rapid non-contacting and non-destructive characterization of layer orientation, layer thickness, porosity, and surface morphology of electrospun drug-loaded nanofibers and nanomats. Such analysis is important because the surface topography affects the performance of nanomats in pharmaceutical and biomedical applications. PMID:24378328

  2. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    Science.gov (United States)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clumpy at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that are physical diagnostics (reddening, gas density, T, excitation mechanisms, abundances), and improved prospects for recovery of unobserved dimensions of phase-space. These advantages allow more confident modeling for more profound inquiry into the underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through various phases of the ISM. This tedium has limited the number of objects that have been thoroughly analyzed to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty-cycle, and tie-ins with molecular flows. If the shock speed, hence ionization fraction, is indeed small then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV-spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  3. 3D Visualization System for Tracking and Identification of Objects Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Photon-X has developed a proprietary EO spatial phase technology that can passively collect 3-D images in real-time using a single camera-based system. This...

  4. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    OpenAIRE

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor v...

  5. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
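
    As a hedged illustration of the recognition step, a minimal dynamic time warping (DTW) distance between two gesture trajectories is sketched below; a and b are assumed to be arrays of per-frame feature vectors, and the paper's feature representation and tactile-feedback integration are not shown.

        import numpy as np

        def dtw_distance(a, b):
            """DTW alignment cost between two sequences of feature vectors."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # A query gesture would be assigned the label of the template with the smallest DTW cost.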

  6. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901

  7. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  8. Learning to Grasp Unknown Objects Based on 3D Edge Information

    DEFF Research Database (Denmark)

    Bodenhagen, Leon; Kraft, Dirk; Popovic, Mila;

    2010-01-01

    In this work we refine an initial grasping behavior based on 3D edge information by learning. Based on a set of autonomously generated evaluated grasps and relations between the semi-global 3D edges, a prediction function is learned that computes a likelihood for the success of a grasp, using either an offline or an online learning scheme. Both methods are implemented using a hybrid artificial neural network containing standard nodes with a sigmoid activation function and nodes with a radial basis function. We show that a significant performance improvement can be achieved.

  9. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    Science.gov (United States)

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning, nor depth perception, was required. The effectiveness of the maximum compactness and the minimum surface constraints were measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases, in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410
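
    One plausible way to combine the four constraints into a single regularization functional is sketched below; the weights and the exact penalty terms are assumptions made here for illustration, not the authors' published formulation.

        E(\text{shape}) \;=\; \lambda_1\,\Phi_{\text{asym}}
        \;+\; \lambda_2\,\Phi_{\text{nonplanar}}
        \;-\; \lambda_3\,\frac{V^2}{S^3}
        \;+\; \lambda_4\, S ,

    where Phi_asym and Phi_nonplanar penalize departures from mirror symmetry and from planar faces, V^2/S^3 is the 3D compactness being maximized, and S is the total surface area being minimized; the recovered shape is the member of the one-parameter family consistent with the 2D image that minimizes E.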

  10. Synthesis of computer-generated spherical hologram of real object with 360° field of view using a depth camera.

    Science.gov (United States)

    Li, Gang; Phan, Anh-Hoang; Kim, Nam; Park, Jae-Hyeung

    2013-05-20

    A method for synthesizing a 360° computer-generated spherical hologram of real-existing objects is proposed. The whole three-dimensional (3-D) information of a real object is extracted by using a depth camera to capture multiple sides of the object. The point cloud sets which are obtained from corresponding sides of the object surface are brought into a common coordinate system by point cloud registration process. The modeled 3-D point cloud is then processed by hidden point removal method in order to identify visible point set for each spherical hologram point. The hologram on the spherical surface is finally synthesized by accumulating spherical waves from visible object points. By reconstructing partial region of the calculated spherical hologram, the corresponding view of the 3-D real object is obtained. The principle is verified via optical capturing using a depth camera and numerical reconstructions.
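
    A minimal numerical sketch of the final synthesis step, accumulating spherical waves from the visible object points at each hologram sample, is shown below; point-cloud registration and hidden-point removal are assumed to have been done already, and the wavelength and amplitudes are illustrative.

        import numpy as np

        def spherical_hologram(holo_pts, obj_pts, amplitudes, wavelength):
            """Accumulate spherical waves from visible object points at hologram samples.

            holo_pts   : (N, 3) sample positions on the spherical hologram surface
            obj_pts    : (M, 3) visible object points
            amplitudes : (M,)  complex amplitudes of the object points
            """
            k = 2.0 * np.pi / wavelength
            # Pairwise distances between hologram samples and object points.
            r = np.linalg.norm(holo_pts[:, None, :] - obj_pts[None, :, :], axis=-1)
            return np.sum(amplitudes[None, :] * np.exp(1j * k * r) / r, axis=1)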

  11. Accurate object tracking system by integrating texture and depth cues

    Science.gov (United States)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can compensate texture information to cope with both appearance variants and background clutter. Moreover, in order to reduce the risk of drifting problem increased for the textureless depth templates, an update mechanism is proposed to select more precise tracking results to avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and has more accurate tracking results than other well-known algorithms.

  12. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on the direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional, continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  13. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    OpenAIRE

    Feng Xu; Lukas Helfen; Heikki Suhonen; Dan Elgrabli; Sam Bayat; Péter Reischig; Tilo Baumbach; Peter Cloetens

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome...

  14. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    Directory of Open Access Journals (Sweden)

    Javier Contreras

    2015-11-01

    Full Text Available A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, the results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
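
    For orientation, the basic triangulation relation underlying a sheet-of-light scanner is sketched below in a simplified geometry, where the lateral image displacement of the laser line encodes depth; this is not the simulator's actual PSD model, and all names and the disparity-like relation z = f*b/d are simplifying assumptions.

        def triangulation_depth(d_pixels, pixel_pitch, focal_length, baseline):
            """Depth from the lateral displacement of the laser line on the sensor.

            In the simplified geometry, depth is inversely proportional to the
            displacement d: z = f * b / d (all quantities in metres).
            """
            d = d_pixels * pixel_pitch            # displacement on the sensor in metres
            return focal_length * baseline / d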

  15. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    Science.gov (United States)

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  16. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Directory of Open Access Journals (Sweden)

    Maurizio Muzzupappa

    2013-08-01

    Full Text Available In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  17. Object data mining and analysis on 3D images of high precision industrial CT

    International Nuclear Information System (INIS)

    There are some areas of interest on 3D images from high-precision industrial CT, such as defects caused during the production process. In order to analyze these areas closely, the image processing software Amira was used on the data of a particular workpiece sample to perform defect segmentation and display, as well as defect measurement, evaluation and documentation. A data set obtained by scanning a vise sample with the lab CT system was analyzed and the results turned out to be fairly good. (authors)

  18. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    Science.gov (United States)

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed and must be compensated for high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), extended by new experimental investigations on the effect on length measurement errors.

  19. Real time moving object detection using motor signal and depth map for robot car

    Science.gov (United States)

    Wu, Hao; Siu, Wan-Chi

    2013-12-01

    Moving object detection from a moving camera is a fundamental task in many applications. For moving robot car vision, the background movement has a 3D motion structure in nature. In this situation, conventional moving object detection algorithms cannot handle the 3D background modeling effectively and efficiently. In this paper, a novel scheme is proposed that utilizes the motor control signal and the depth map obtained from a stereo camera to model the perspective transform matrix between different frames under a moving camera. In our approach, the coordinate relationship between frames during camera motion is modeled by a perspective transform matrix which is obtained from the current motor control signals and the pixel depth values. Hence, the displacement of a static background pixel caused by the camera motion can be predicted by the perspective matrix, distinguishing it from the moving foreground. To enhance the robustness of classification, we allow a tolerance range during the perspective transform matrix prediction and use multiple reference frames to classify the pixels of the current frame. The proposed scheme has been found to detect moving objects for our moving robot car efficiently. Different from conventional approaches, our method can model the moving background with its 3D structure, without online model training. More importantly, the computational complexity and memory requirement are low, making it possible to implement this scheme in real time, which is especially valuable for a robot vision system.
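
    A hedged sketch of the geometric core is given below: with the per-pixel depth and the camera motion (rotation R, translation t, here assumed to be derived from the motor signals), each pixel of the previous frame can be re-projected into the current frame as if it were static background; K is the camera intrinsic matrix, and the tolerance and multi-reference classification of the paper are not shown.

        import numpy as np

        def predict_background_positions(depth, K, R, t):
            """Re-project every pixel of the previous frame into the current frame,
            assuming it belongs to the static background."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # homogeneous pixels (3, N)
            rays = np.linalg.inv(K) @ pix                            # normalized viewing rays
            pts = rays * depth.ravel()                               # 3D points in the old camera frame
            pts_new = R @ pts + t.reshape(3, 1)                      # same points in the new camera frame
            proj = K @ pts_new
            proj = proj[:2] / proj[2]                                # perspective division
            return proj.T.reshape(h, w, 2)   # predicted (x, y); large residuals flag moving objects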

  20. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    Science.gov (United States)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D Printing (3DP) fabrication of structures having internal microarchitecture and characterization of their mechanical properties. Depending on the material, geometry and fill factor, the manufactured objects' mechanical performance can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The structural quality of the produced objects is characterized using a scanning electron microscope, and their mechanical properties such as flexural modulus, elastic modulus and stiffness are measured experimentally using a universal TIRAtest2300 machine. Within the limitations of the study carried out, we show that the mechanical properties of 3D printed objects can be tuned at least 3 times by only changing the woodpile geometry arrangement, yet keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.

  1. 3D phase micro-object studies by means of digital holographic tomography supported by algebraic reconstruction technique

    Science.gov (United States)

    Bilski, B. J.; Jozwicka, A.; Kujawinska, M.

    2007-09-01

    Constant development of microelement technology requires the creation of new instruments to determine their basic physical parameters in 3D. The most efficient non-destructive method providing 3D information is tomography. In this paper we present Digital Holographic Tomography (DHT), in which the input data are provided by means of Digital Holography (DH). The main advantage of DH is the capability to capture several projections with a single hologram [1]. However, these projections have uneven angular distribution and their number is significantly limited. Therefore the Algebraic Reconstruction Technique (ART), in which a few phase projections may be sufficient for proper 3D phase reconstruction, is implemented. The error analysis of the method and its additional limitations due to the shape and dimensions of the investigated object are presented. Finally, the results of applying ART to the DHT method are also presented on data reconstructed from a numerically generated hologram of a multimode fibre.
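
    A minimal sketch of the ART (Kaczmarz-type) update typically used for such sparse-angle reconstructions is shown below; A is the projection matrix, b the measured phase projections, and the relaxation factor lam and sweep count are illustrative.

        import numpy as np

        def art_reconstruct(A, b, n_sweeps=10, lam=0.25):
            """Algebraic Reconstruction Technique: cycle through the projection rows
            and correct the current estimate so that each measurement is matched."""
            x = np.zeros(A.shape[1])
            row_norms = np.einsum('ij,ij->i', A, A) + 1e-12
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    residual = b[i] - A[i] @ x
                    x += lam * residual / row_norms[i] * A[i]
            return x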

  2. A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision

    OpenAIRE

    Zwinderman, Matthijs; Rybski, Paul E.; Kootstra, Gert

    2010-01-01

    In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it using a laser pointer. The robot recognizes the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D position of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views...

  3. Technology of 3D map creation for 'Ukrytie' object internal premises

    International Nuclear Information System (INIS)

    The results of creating the main components of an information technology for mapping the internal premises of the 'Ukryttia' object are presented, based on digital stereo-photogrammetric processing of the imaging results. It is shown that a sufficiently high accuracy of mutual orientation of the snapshots and of the reconstruction of separate objects within the 'Ukryttia' object premises is reached. The mean relative error in determining the spatial dimensions of objects was 6%. The practical feasibility of using the proposed technology for mapping the 'Ukryttia' object premises is demonstrated. The maps created with the proposed technology can be represented as three-dimensional models in the AutoCad system for subsequent use.

  4. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using a rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
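
    A hedged 2D illustration of the geometric hashing idea is sketched below; the paper hashes line features extracted from 2-D views of polyhedra, whereas plain point features and an assumed quantization step are used here for brevity.

        from collections import defaultdict
        import numpy as np

        def basis_coords(p, b0, b1):
            """Coordinates of p in the frame defined by the ordered basis pair (b0, b1)."""
            e1 = b1 - b0
            e2 = np.array([-e1[1], e1[0]])              # perpendicular axis
            return np.linalg.solve(np.column_stack([e1, e2]), p - b0)

        def build_hash_table(models, quant=0.1):
            """Offline stage: store every model point in every basis-pair frame."""
            table = defaultdict(list)
            for model_id, pts in models.items():
                for i in range(len(pts)):
                    for j in range(len(pts)):
                        if i == j:
                            continue
                        for p in pts:
                            key = tuple(np.round(basis_coords(p, pts[i], pts[j]) / quant).astype(int))
                            table[key].append((model_id, i, j))
            return table

        # At recognition time, a basis pair is picked from the scene, all scene features are
        # expressed in that frame, and votes accumulated from the table identify the model and pose.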

  5. A HIGHLY COLLIMATED WATER MASER BIPOLAR OUTFLOW IN THE CEPHEUS A HW3d MASSIVE YOUNG STELLAR OBJECT

    Energy Technology Data Exchange (ETDEWEB)

    Chibueze, James O.; Imai, Hiroshi; Tafoya, Daniel; Omodaka, Toshihiro; Chong, Sze-Ning [Department of Physics and Astronomy, Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065 (Japan); Kameya, Osamu; Hirota, Tomoya [Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Torrelles, Jose M., E-mail: james@milkyway.sci.kagoshima-u.ac.jp [Instituto de Ciencias del Espacio (CSIC)-UB/IEEC, Facultat de Fisica, Universitat de Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)

    2012-04-01

    We present the results of multi-epoch very long baseline interferometry (VLBI) water (H2O) maser observations carried out with the VLBI Exploration of Radio Astrometry toward the Cepheus A HW3d object. We measured for the first time relative proper motions of the H2O maser features, whose spatio-kinematics traces a compact bipolar outflow. This outflow looks highly collimated and expanding through ~280 AU (400 mas) at a mean velocity of ~21 km s^-1 (~6 mas yr^-1), without taking into account the turbulent central maser cluster. The opening angle of the outflow is estimated to be ~30 degrees. The dynamical timescale of the outflow is estimated to be ~100 years. Our results provide strong support that HW3d harbors an internal massive young star, and the observed outflow could be tracing a very early phase of star formation. We also have analyzed Very Large Array archive data of 1.3 cm continuum emission obtained in 1995 and 2006 toward Cepheus A. The comparative result of the HW3d continuum emission suggests the possibility of the existence of distinct young stellar objects in HW3d and/or strong variability in one of their radio continuum emission components.

  6. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

    In robotic systems, visual interpretations of the environment compose an essential element in a variety of applications, especially those involving manipulation of objects. Interpreting the environment is often done in terms of recognition of objects using machine learning approaches. For user...... representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... to initiate higher-level semantic interpretations of complex scenes. In the object category recognition task, we present a system that is capable of assigning multiple and nested categories for novel objects using a method developed for this purpose. Integrating this method with other multi-label learning...

  7. Holographic microscopy reconstruction in both object and image half spaces with undistorted 3D grid

    CERN Document Server

    Verrier, Nicolas; Tessier, Gilles; Gross, Michel

    2015-01-01

    We propose a holographic microscopy reconstruction method which propagates the hologram, in the object half space, in the vicinity of the object. The calibration yields reconstructions with an undistorted reconstruction grid, i.e. with orthogonal x, y and z axes and constant pixel pitch. The method is validated with a USAF target imaged by a x60 microscope objective, whose holograms are recorded and reconstructed for different USAF locations along the longitudinal axis: -75 to +75 µm. Since the reconstruction numerical phase mask, the reference phase curvature and the MO form an afocal device, the reconstruction can be interpreted as occurring equivalently in the object or in the image half space.

  8. Controlled experimental study depicting moving objects in view-shared time-resolved 3D MRA.

    Science.gov (United States)

    Mostardi, Petrice M; Haider, Clifton R; Rossman, Phillip J; Borisch, Eric A; Riederer, Stephen J

    2009-07-01

    Various methods have been used for time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of three-dimensional (3D) time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested using view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  9. Controlled Experimental Study Depicting Moving Objects in View-Shared Time-Resolved 3D MRA

    Science.gov (United States)

    Mostardi, Petrice M.; Haider, Clifton R.; Rossman, Phillip J.; Borisch, Eric A.; Riederer, Stephen J.

    2010-01-01

    Various methods have been used for time-resolved contrast-enhanced MRA (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of 3D time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested, which use view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  10. Artificial Vision in 3D Perspective for Object Detection on Planes Using Point Clouds.

    Directory of Open Access Journals (Sweden)

    Catalina Alejandra Vázquez Rodriguez

    2014-02-01

    Full Text Available In this paper we describe an artificial vision algorithm for the robot Golem-II+ that analyzes the robot's environment to detect planes and objects in the scene from point clouds captured with a Kinect device, estimating the possible objects, their number, their distance, and other characteristics. The points are then grouped into clusters, and the clusters are examined to determine whether they lie on the same surface, so that the distance and slope of each plane relative to the robot can be computed. Finally, each object is analyzed separately to decide whether it can be grasped and whether empty surfaces could receive objects placed on them, as long as they are within a feasible distance. False positives such as the walls and the floor are ignored, since objects cannot be placed on walls and the floor is out of reach of the robot's arms.
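
    As a hedged sketch of the generic pipeline the abstract describes (plane detection followed by clustering of the remaining points), using recent versions of the open-source Open3D library; the file name, thresholds, and cluster parameters are illustrative and not taken from the paper:

      import numpy as np
      import open3d as o3d

      # Load a Kinect-style point cloud (the path is hypothetical).
      pcd = o3d.io.read_point_cloud("scene.pcd")

      # 1) Detect the dominant plane (e.g. a table top) with RANSAC.
      plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                               ransac_n=3,
                                               num_iterations=1000)
      a, b, c, d = plane_model                     # plane equation: ax + by + cz + d = 0
      rest = pcd.select_by_index(inliers, invert=True)

      # 2) Cluster the remaining points into candidate objects.
      labels = np.array(rest.cluster_dbscan(eps=0.03, min_points=30))

      # 3) Report the range of each cluster centroid from the sensor origin,
      #    ignoring noise points (label == -1).
      points = np.asarray(rest.points)
      for k in range(labels.max() + 1):
          centroid = points[labels == k].mean(axis=0)
          print(f"object {k}: centroid {centroid}, range {np.linalg.norm(centroid):.2f} m")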

  11. Retrieval of 3D-position of a Passive Object Using Infrared LED´s and Photodiodes

    DEFF Research Database (Denmark)

    Christensen, Henrik Vie

    A sensor using infrared emitter/receiver pairs to determine the position of a passive object is presented. An array with a small number of infrared emitter/receiver pairs is proposed as the sensing part to acquire information on the object position. The emitters illuminate the object and the intens...... experiments show good accordance between actual and retrieved positions when tracking a ball. The ball has been successfully replaced by a human hand, and a "3D non-touch screen" with a human hand as the "pointing device" is shown to be possible....

  12. Tracking of Multiple objects Using 3D Scatter Plot Reconstructed by Linear Stereo Vision

    Directory of Open Access Journals (Sweden)

    Safaa Moqqaddem

    2014-10-01

    Full Text Available This paper presents a new method for tracking objects using stereo vision with linear cameras. Edge points extracted from the stereo linear images are first matched to reconstruct points that represent the objects in the scene. To detect the objects, a clustering process based on a spectral analysis is then applied to the reconstructed points. The obtained clusters are finally tracked through their centers of gravity using a Kalman filter and a nearest-neighbour-based data association algorithm. Experimental results using real stereo linear images demonstrate the effectiveness of the proposed method for obstacle tracking in front of a vehicle.
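
    The centre-of-gravity tracking stage can be pictured with a small constant-velocity Kalman filter plus greedy nearest-neighbour association; the sketch below is only a schematic stand-in for the paper's tracker, with illustrative noise settings and gating threshold:

      import numpy as np

      DT = 0.1                                        # frame interval (illustrative)
      F = np.block([[np.eye(3), DT * np.eye(3)],      # constant-velocity motion model
                    [np.zeros((3, 3)), np.eye(3)]])
      H = np.hstack([np.eye(3), np.zeros((3, 3))])    # only positions are observed
      Q = 0.01 * np.eye(6)                            # process noise
      R = 0.05 * np.eye(3)                            # measurement noise

      class Track:
          def __init__(self, centroid):
              self.x = np.hstack([centroid, np.zeros(3)])   # state [x, y, z, vx, vy, vz]
              self.P = np.eye(6)

          def predict(self):
              self.x = F @ self.x
              self.P = F @ self.P @ F.T + Q
              return H @ self.x                       # predicted centroid

          def update(self, z):
              S = H @ self.P @ H.T + R
              K = self.P @ H.T @ np.linalg.inv(S)
              self.x = self.x + K @ (z - H @ self.x)
              self.P = (np.eye(6) - K @ H) @ self.P

      def associate(tracks, centroids, gate=0.5):
          """Greedy nearest-neighbour association of cluster centroids to tracks."""
          for t in tracks:
              pred = t.predict()
              if len(centroids) == 0:
                  continue
              d = np.linalg.norm(centroids - pred, axis=1)
              j = int(np.argmin(d))
              if d[j] < gate:
                  t.update(centroids[j])
                  centroids = np.delete(centroids, j, axis=0)
          return tracks, centroids                    # leftover centroids could seed new tracks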

  13. A Method of Calculating the 3D Coordinates on a Micro Object in a Virtual Micro-Operation System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A simple method for calculating the 3D coordinates of points on a micro object in a multi-camera system is proposed. It simplifies the algorithms used in traditional computer vision systems by eliminating the calculation of the CCD (charge-coupled device) camera parameters and the relative position between cameras, and by using solid geometry in the calculation procedures instead of calculations with complex matrices. The algorithm was used in research on generating a virtual magnified 3D image of a micro object to be operated on in a micro-operation system, and satisfactory results were obtained. The application in a virtual tele-operation system for a dexterous mechanical gripper is under test.

  14. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    Energy Technology Data Exchange (ETDEWEB)

    Frary, R.; Louie, J. [UNR]; Pullammanappallil, S. [Optim]; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  15. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    Science.gov (United States)

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work. PMID:21711051

  16. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    Science.gov (United States)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR that became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition based on neuro-fuzzy modelling is presented. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' by using aerial colour images of an urban area of the town of Engen in Germany.

  17. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    Science.gov (United States)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and also imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances that allow standoff detection of suspicious objects and humans.
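
    For the FMCW (frequency-modulated continuous-wave) ranging mentioned above, the target range follows from the beat frequency between the transmitted and received chirps; a tiny sketch with illustrative numbers (not the system's actual parameters):

      # Range from an FMCW beat frequency: R = c * f_beat * T_sweep / (2 * B).
      C = 3.0e8          # speed of light, m/s

      def fmcw_range(f_beat_hz, sweep_bandwidth_hz, sweep_time_s):
          return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

      # Example: a 10 GHz sweep over 1 ms; a 200 kHz beat tone maps to a target ~3 m away.
      print(fmcw_range(200e3, 10e9, 1e-3))   # -> 3.0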

  18. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    Science.gov (United States)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  19. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor

    Directory of Open Access Journals (Sweden)

    J. Fuentes‐Pacheco

    2009-12-01

    Full Text Available This paper presents a binocular eye-to-hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client-server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
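
    A linear predictor of the kind mentioned here can be as simple as a per-coordinate least-squares autoregressive model; the sketch below is a generic stand-in (the order, track length, and noise are illustrative), not the authors' implementation:

      import numpy as np

      def fit_linear_predictor(samples, order=4):
          """Least-squares coefficients a such that x[t] ~ a . x[t-order:t]."""
          X = np.array([samples[i:i + order] for i in range(len(samples) - order)])
          y = samples[order:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def predict_next(samples, coeffs):
          order = len(coeffs)
          return float(np.dot(coeffs, samples[-order:]))

      # Example: predict the next 3D position of a tracked object from its history,
      # fitting one predictor per coordinate (x, y, z).
      track = np.cumsum(0.02 + 0.01 * np.random.randn(50, 3), axis=0)   # synthetic path
      next_pos = [predict_next(track[:, k], fit_linear_predictor(track[:, k]))
                  for k in range(3)]
      print(next_pos)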

  20. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NARCIS (Netherlands)

    Anisimov, A.; Groves, R.M.

    2015-01-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Although real-life objects especially in aerospace, transport or cultural heritage are not flat (e.g. aircraft leading edges or sculptures), their i

  1. Depth map calculation for a variable number of moving objects using Markov sequential object processes

    NARCIS (Netherlands)

    Lieshout, M.N.M. van

    2008-01-01

    We advocate the use of Markov sequential object processes for tracking a variable number of moving objects through video frames with a view towards depth calculation. A regression model based on a sequential object process quantifies goodness of fit; regularization terms are incorporated to control

  2. THREE-IMAGE MATCHING FOR 3-D LINEAR OBJECT TRACKING

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper will discuss strategies for trinocular image rectification and matching for linear object tracking. It is well known that a pair of stereo images generates two epipolar images. Three overlapped images can yield six epipolar images in situations where any two are required to be rectified for the purpose of image matching. In this case, the search for feature correspondences is computationally intensive and matching complexity increases. A special epipolar image rectification for three stereo images, which simplifies the image matching process, is therefore proposed. This method generates only three rectified images, with the result that the search for matching features becomes more straightforward. With the three rectified images, a particular line-segment-based correspondence strategy is suggested. The primary characteristics of the feature correspondence strategy include application of specific epipolar geometric constraints and reference to three-ray triangulation residuals in object space.

  3. Spatio-Temporal Video Object Segmentation via Scale-Adaptive 3D Structure Tensor

    Directory of Open Access Journals (Sweden)

    Hai-Yun Wang

    2004-06-01

    Full Text Available To address the multiple motions and deformable objects' motions encountered in existing region-based approaches, an automatic video object (VO) segmentation methodology is proposed in this paper by exploiting the duality of image segmentation and motion estimation, such that spatial and temporal information can assist each other to jointly yield much improved segmentation results. The key novelties of our method are (1) scale-adaptive tensor computation, (2) spatially constrained motion mask generation without invoking dense motion-field computation, (3) rigidity analysis, (4) motion mask generation and selection, and (5) motion-constrained spatial region merging. Experimental results demonstrate that these novelties jointly contribute to much more accurate VO segmentation in both the spatial and temporal domains.
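
    The core quantity here, a 3D (x, y, t) structure tensor, can be sketched in a few lines of Python with SciPy; the Gaussian scales below are fixed and illustrative, whereas the paper makes them scale-adaptive:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def structure_tensor_3d(video, grad_sigma=1.0, window_sigma=2.0):
          """Six distinct components of the 3D structure tensor of a grayscale video
          volume of shape (t, y, x), each averaged over a Gaussian integration window."""
          v = gaussian_filter(video.astype(float), grad_sigma)
          gt, gy, gx = np.gradient(v)               # derivatives along t, y, x
          J = {}
          for name, a, b in [("xx", gx, gx), ("yy", gy, gy), ("tt", gt, gt),
                             ("xy", gx, gy), ("xt", gx, gt), ("yt", gy, gt)]:
              J[name] = gaussian_filter(a * b, window_sigma)
          return J

      # Example: a small synthetic clip of 16 frames of 64x64 pixels.  Eigen-analysis of
      # the tensor at each voxel separates moving structure from homogeneous regions;
      # here we only print the mean trace as a sanity check.
      clip = np.random.rand(16, 64, 64)
      J = structure_tensor_3d(clip)
      print((J["xx"] + J["yy"] + J["tt"]).mean())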

  4. A Morphological Analysis of Audio Objects and their Control Methods for 3D Audio

    OpenAIRE

    Mathew, Justin; Huot, Stéphane; Blum, Alan

    2014-01-01

    International audience Recent technological improvements in audio reproduction systems increased the possibilities to spatialize sources in a listening environment. The spatialization of reproduced audio is highly dependent on the recording technique, the rendering method, and the loudspeaker configuration. While object-based audio production reduces this dependency on loudspeaker configurations, related authoring tools are still difficult to interact with. In this paper, we investigate th...

  5. Real time object recognition and tracking using 2D/3D images

    OpenAIRE

    Ghobadi, Seyed Eghbal

    2010-01-01

    Object recognition and tracking are the main tasks in computer vision applications such as safety, surveillance, human-robot-interaction, driving assistance system, traffic monitoring, remote surgery, medical reasoning and many more. In all these applications the aim is to bring the visual perception capabilities of the human being into the machines and computers. In this context many significant researches have recently been conducted to open new horizons in computer vision by...

  6. Multi sensor fusion of camera and 3D laser range finder for object recognition

    OpenAIRE

    Klimentjew, Denis; Hendrich, Norman; Zhang, Jianwei

    2010-01-01

    This paper proposes multi sensor fusion based on an effective calibration method for a perception system designed for mobile robots and intended for later object recognition. The perception system consists of a camera and a three-dimensional laser range finder. The three-dimensional laser range finder is based on a two-dimensional laser scanner and a pan-tilt unit as a moving platform. The calibration permits the coalescence of the two most important sensors for three-dim...

  7. ELTs Adaptive Optics for Multi-Objects 3D Spectroscopy Key Parameters and Design Rules

    CERN Document Server

    Neichel, B; Fusco, T; Gendron, E; Puech, M; Rousset, G; Hammer, F

    2006-01-01

    In the last few years, new Adaptive Optics [AO] techniques have emerged to answer new astronomical challenges: Ground-Layer AO [GLAO] and Multi-Conjugate AO [MCAO] to access a wider Field of View [FoV], Multi-Object AO [MOAO] for the simultaneous observation of several faint galaxies, eXtreme AO [XAO] for the detection of faint companions. In this paper, we focus our study on one of these applications: high-redshift galaxy observations using MOAO techniques in the framework of Extremely Large Telescopes [ELTs]. We present the high-level specifications of a dedicated instrument. We choose to describe the scientific requirements with the following criteria: 40% of Ensquared Energy [EE] in the H band (1.65 um) and in an aperture size from 25 to 150 mas. Considering these specifications, we investigate different AO solutions using Fourier-based simulations. Sky Coverage [SC] is computed for Natural and Laser Guide Star [NGS, LGS] systems. We show that specifications are met for NGS-based systems at the cost of ...

  8. Objective Assessment of shoulder mobility with a new 3D gyroscope - a validation study

    Directory of Open Access Journals (Sweden)

    Lakemeier Stefan

    2011-07-01

    Full Text Available Abstract Background Assessment of shoulder mobility is essential for clinical follow-up of shoulder treatment. Only a few highly sophisticated instruments for objective measurements of shoulder mobility are available. The interobserver dependency of conventional goniometer measurements is high. In the 1990s an isokinetic measuring system by BIODEX Inc. was introduced, which is a very complex but valid instrument. Since 2008 a new user-friendly system called the DynaPort MiniMod TriGyro ShoulderTest-System (DP) is available. The aim of this study is the validation of this measuring instrument using the BIODEX system. Methods The BIODEX is a computerized robotic dynamometer used for isokinetic testing and training of athletes. Because of its size the system needs to be installed in a separate room. The DP is a small, lightweight three-dimensional gyroscope that is fixed on the patient's distal upper arm, recording abduction, flexion and rotation. For direct comparison we fixed the DP on the lever arm of the BIODEX. The accuracy of measurement was determined at different positions, angles and distances from the centre of rotation (COR), as well as at different velocities, over a range of 0°-180° in steps of 20°. All measurements were repeated 10 times. A difference between both systems below 5° was defined as satisfactory accuracy. The statistical analysis was performed with a linear regression model. Results The evaluation shows very high accuracy of measurements. The maximum average deviation is below 2.1°. For a small range of motion the DP slightly underestimates compared with the BIODEX, whereas for higher angles increasing positive differences are observed. The distance to the COR as well as the position of the DP on the lever arm have no significant influence. Concerning different motion speeds, a significant but not relevant influence is detected. Unfortunately, device-related effects are observed, leading to differences between repeated

  9. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    International Nuclear Information System (INIS)

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.

  10. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Suh, T [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Jenkins, C [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Department of Mechanical Engineering, Stanford University, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.

  11. Real-Time Propagation Measurement System and Scattering Object Identification by 3D Visualization by Using VRML for ETC System

    Directory of Open Access Journals (Sweden)

    Ando Tetsuo

    2009-01-01

    Full Text Available In the early deployment of the electronic toll collection (ETC) system, multipath interference caused malfunctions of the system. Therefore, radio absorbers are installed in the toll gate to suppress the scattering effects. This paper presents a novel radio propagation measurement system using beamforming with an 8-element antenna array to examine the power intensity distribution of the ETC gate in real time, without closing toll gates that are already open for traffic. In addition, an identification method for the individual scattering objects with 3D visualization using the virtual reality modeling language (VRML) is proposed, and its validity is demonstrated by applying it to the measurement data.
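
    The beamforming step can be pictured with a conventional narrowband delay-and-sum scan over candidate arrival angles; the sketch below assumes a uniform linear array at half-wavelength spacing and synthetic data, and is not the measurement system's actual processing chain:

      import numpy as np

      def steering_vector(theta_rad, n_elements=8, spacing_wavelengths=0.5):
          n = np.arange(n_elements)
          return np.exp(-2j * np.pi * spacing_wavelengths * n * np.sin(theta_rad))

      def delay_and_sum_spectrum(snapshots, angles_rad):
          """Narrowband delay-and-sum power over candidate angles; snapshots is an
          (n_elements, n_samples) array of complex baseband samples."""
          Rxx = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
          power = []
          for th in angles_rad:
              a = steering_vector(th, snapshots.shape[0])
              w = a / len(a)
              power.append(np.real(w.conj() @ Rxx @ w))
          return np.array(power)

      # Example: a single source arriving from +20 degrees, observed in noise.
      angles = np.deg2rad(np.linspace(-60, 60, 121))
      src = steering_vector(np.deg2rad(20))[:, None] * np.exp(2j * np.pi * np.random.rand(1, 200))
      x = src + 0.1 * (np.random.randn(8, 200) + 1j * np.random.randn(8, 200))
      p = delay_and_sum_spectrum(x, angles)
      print(np.rad2deg(angles[np.argmax(p)]))        # peak should fall near 20 degrees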

  12. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects.

    Science.gov (United States)

    Ye, Zhou; Nain, Amrinder S; Behkam, Bahareh

    2016-07-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features. PMID:27283144

  13. SSV3D: Simulator of Vectorial Shadows by Solar Radiation on 3D Computerized Objects

    Directory of Open Access Journals (Sweden)

    S. Gómez

    2005-01-01

    Full Text Available SSV3D, a simulator of vectorial shadows cast by solar radiation on three-dimensional objects, is presented as a computer graphics tool developed on the three-dimensional platform of AUTOCAD 2004. The software simulates direct solar radiation vectorially, calculating and tracing shadow outlines on the illuminated planes of the evaluated 3D model. During development, the analytical results were verified by comparing them with those obtained from the formulas of a spreadsheet, and the graphical results were checked against the shadows produced by simulation with physical models in a heliodon (of French design) and by the AUTOCAD renderer. The SSV3D simulator responded satisfactorily to the requirements for the study of solar protection systems established in previous research.

  14. Determining an object orientation in 3D space using direction cosine matrix and non-stationary Kalman filter

    Directory of Open Access Journals (Sweden)

    Bieda Robert

    2016-06-01

    Full Text Available This paper describes a method which determines the parameters of an object's orientation in 3D space. The calculation of the rotation angles is based on the fusion of signals obtained from an inertial measurement unit (IMU). The IMU provides information from linear acceleration sensors (accelerometers), Earth's magnetic field sensors (magnetometers), and angular velocity sensors (gyroscopes). Information about the object orientation is presented in the form of a direction cosine matrix whose elements are observed in the state vector of a non-stationary Kalman filter. The vector components allow the rotation angles (roll, pitch, and yaw) associated with the object to be determined. The resulting waveforms, for different rotation angles, are free of the negative artifacts associated with the construction and operation of the IMU measuring system. The described solution enables a simple, fast, and effective implementation of the proposed method in IMU measuring systems.
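
    To make the DCM-based fusion concrete, here is a deliberately simplified sketch that propagates the direction cosine matrix with the gyroscope rates and nudges it toward the accelerometer and magnetometer references; it is a complementary-filter stand-in, not the paper's non-stationary Kalman filter, and all gains and sensor values are illustrative:

      import numpy as np

      def skew(w):
          return np.array([[0, -w[2], w[1]],
                           [w[2], 0, -w[0]],
                           [-w[1], w[0], 0]])

      def dcm_to_euler(R):
          """Roll, pitch, yaw (rad) from a body-to-world direction cosine matrix."""
          roll = np.arctan2(R[2, 1], R[2, 2])
          pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
          yaw = np.arctan2(R[1, 0], R[0, 0])
          return roll, pitch, yaw

      def update_dcm(R, gyro, accel, mag, dt, k_acc=0.02, k_mag=0.02):
          """One fusion step: integrate gyro rates, then correct the DCM toward the
          measured gravity and magnetic-field directions."""
          R = R @ (np.eye(3) + skew(gyro) * dt)              # gyro propagation
          g_meas = accel / np.linalg.norm(accel)             # measured gravity direction
          g_pred = R.T @ np.array([0.0, 0.0, 1.0])           # predicted gravity in body frame
          m_meas = mag / np.linalg.norm(mag)
          m_pred = R.T @ np.array([1.0, 0.0, 0.0])           # predicted "north" in body frame
          corr = k_acc * np.cross(g_meas, g_pred) + k_mag * np.cross(m_meas, m_pred)
          R = R @ (np.eye(3) + skew(corr))                   # small corrective rotation
          u, _, vt = np.linalg.svd(R)                        # re-orthonormalise
          return u @ vt

      # Example: one step with illustrative sensor readings.
      R = np.eye(3)
      R = update_dcm(R, gyro=np.array([0.01, 0.0, 0.02]),
                     accel=np.array([0.0, 0.0, 9.81]),
                     mag=np.array([0.3, 0.0, 0.4]), dt=0.01)
      print(dcm_to_euler(R))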

  15. Visual retrieval of known objects using supplementary depth data

    Science.gov (United States)

    Śluzek, Andrzej

    2016-06-01

    A simple modification of typical content-based visual information retrieval (CBVIR) techniques (e.g. MSER keypoints represented by SIFT descriptors quantized into sufficiently large vocabularies) is discussed and preliminarily evaluated. By using the approximate depths (as the supplementary data) of the detected keypoints, we can significantly improve credibility of keypoint matching so that known objects (i.e. objects for which exemplary images are available in the database) can be detected at low computational costs. Thus, the method can be particularly useful in real-time applications of machine vision systems (e.g. in intelligent robotic devices). The paper presents theoretical model of the method and provides exemplary results for selected scenarios.
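
    One simple way to exploit keypoint depths for more credible matching, in the spirit of the method (the camera intrinsics, tolerance, and voting rule below are illustrative assumptions, not the paper's algorithm):

      import numpy as np

      def backproject(kp_xy, depth, fx, fy, cx, cy):
          """Approximate 3D point of a keypoint from its pixel position and depth."""
          u, v = kp_xy
          return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

      def depth_consistent_matches(pts_a, pts_b, matches, tol=0.15):
          """Keep only matches whose pairwise 3D distances agree between the two views
          (a rigid object preserves inter-point distances); pts_a and pts_b are Nx3
          arrays of back-projected keypoints, matches is a list of (i, j) index pairs."""
          kept = []
          for m in matches:
              votes, trials = 0, 0
              for n in matches:
                  if m is n:
                      continue
                  da = np.linalg.norm(pts_a[m[0]] - pts_a[n[0]])
                  db = np.linalg.norm(pts_b[m[1]] - pts_b[n[1]])
                  if max(da, db) < 1e-6:
                      continue
                  trials += 1
                  if abs(da - db) / max(da, db) < tol:
                      votes += 1
              if trials and votes / trials > 0.5:        # a majority of pairs must agree
                  kept.append(m)
          return kept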

  16. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    Science.gov (United States)

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for 3D measurement of objects. The technique uses an image sensor to record an interference fringe image containing the complex amplitude of the object, and numerically reconstructs the complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects. This is because this technique can reconstruct the complex amplitude of the object, on which the undesired images are not superimposed, from a single hologram. The undesired images are the non-diffraction wave and the conjugate image associated with holography. In parallel phase-shifting digital holography, a hologram whose reference-wave phase is spatially and periodically shifted every other pixel is recorded, so that the complex amplitude of the object is obtained by single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography. The complex amplitude of the object, free from the undesired images, is reconstructed from the multiple holograms. To validate parallel phase-shifting digital holography, a high-speed parallel phase-shifting digital holography system was constructed. The system consists of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded by the system at 180,000 frames per second (FPS). A phase motion picture of air flow induced by a discharge between two electrodes was also recorded at 1,000,000 FPS when a high voltage was applied between the electrodes.
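
    The single-shot decomposition can be sketched as slicing the 2x2 polarization-multiplexed pixel cells into four phase-shifted sub-holograms and applying the standard four-step combination; the assumed cell layout below is illustrative:

      import numpy as np

      def decompose_parallel_psh(hologram):
          """Split a hologram whose 2x2 pixel cells carry reference phases 0, pi/2
          (top row) and pi, 3*pi/2 (bottom row) into four sub-holograms."""
          I0   = hologram[0::2, 0::2]
          I90  = hologram[0::2, 1::2]
          I180 = hologram[1::2, 0::2]
          I270 = hologram[1::2, 1::2]
          return I0, I90, I180, I270

      def complex_amplitude(I0, I90, I180, I270):
          """Standard four-step phase-shifting combination (up to a constant factor)."""
          return (I0 - I180) + 1j * (I90 - I270)

      # Example with a synthetic 512x512 hologram frame.
      frame = np.random.rand(512, 512)
      U = complex_amplitude(*decompose_parallel_psh(frame))
      print(U.shape)    # (256, 256): one complex sample per 2x2 cell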

  17. An Iterative Algorithm for Kinoform Computation of a 3D Object

    Institute of Scientific and Technical Information of China (English)

    裴闯; 蒋晓瑜; 王加; 张鹏炜

    2013-01-01

    A novel method for computing the kinoform of a 3D object, based on the traditional iterative Fourier transform algorithm, is described. The method divides the three-dimensional object into multiple object planes using a tomographic approach and treats every object plane as a target image; the iterative computation is then carried out between one input plane (the kinoform) and several output planes (the reconstructed images). A distance-dependent phase factor is introduced into the Fourier iteration to represent the depth of the different object planes, capturing the three-dimensional character of the object. Experimental results show that the algorithm converges quickly and reconstructs well. Finally, the influence of the number and spacing of object planes on the reconstruction quality of the kinoform is analyzed, and the multiple object planes of the 3D object are reconstructed with a liquid-crystal spatial light modulator using time-division multiplexing.
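
    A minimal Gerchberg-Saxton-style sketch in the spirit of the described method, iterating between one kinoform plane and several target planes through Fresnel propagation; the wavelength, pixel pitch, depths, and target patterns are illustrative, not the paper's values:

      import numpy as np

      def fresnel_tf(shape, wavelength, pitch, z):
          """Fresnel transfer function H(fx, fy; z) for propagation over a distance z."""
          ny, nx = shape
          fx = np.fft.fftfreq(nx, d=pitch)
          fy = np.fft.fftfreq(ny, d=pitch)
          FX, FY = np.meshgrid(fx, fy)
          return np.exp(1j * 2 * np.pi * z / wavelength) * \
                 np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

      def propagate(field, H):
          return np.fft.ifft2(np.fft.fft2(field) * H)

      def multiplane_kinoform(targets, depths, wavelength=532e-9, pitch=8e-6, iters=30):
          """Phase-only hologram serving several target intensity planes at given depths."""
          shape = targets[0].shape
          Hs = [fresnel_tf(shape, wavelength, pitch, z) for z in depths]
          field = np.exp(1j * 2 * np.pi * np.random.rand(*shape))    # random initial phase
          for _ in range(iters):
              acc = np.zeros(shape, dtype=complex)
              for amp, H in zip(targets, Hs):
                  plane = propagate(field, H)                        # forward to object plane
                  plane = amp * np.exp(1j * np.angle(plane))         # impose target amplitude
                  acc += propagate(plane, np.conj(H))                # back to hologram plane
              field = np.exp(1j * np.angle(acc))                     # keep phase only
          return np.angle(field)                                     # the kinoform

      # Example: two 256x256 target planes at 0.10 m and 0.12 m.
      t1 = np.zeros((256, 256)); t1[100:150, 100:150] = 1.0
      t2 = np.zeros((256, 256)); t2[60:90, 160:200] = 1.0
      kinoform = multiplane_kinoform([t1, t2], [0.10, 0.12])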

  18. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    Science.gov (United States)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  19. Orientation of a 3D object: implementation with an artificial neural network using a programmable logic device

    International Nuclear Information System (INIS)

    Complex information extraction from images is a key skill of intelligent machines, with wide application in automated systems, robotic manipulation and human-computer interaction. However, solving this problem with traditional, geometric or analytical, strategies is extremely difficult. Therefore, an approach based on learning from examples seems to be more appropriate. This thesis addresses the problem of 3D orientation, aiming to estimate the angular coordinates of a known object from an image shot from any direction. We describe a system based on artificial neural networks to solve this problem in real time. The implementation is performed using a programmable logic device. The digital system described in this paper has the ability to estimate two rotational coordinates of a 3D known object, in ranges from −80° to 80°. The operation speed allows a real time performance at video rate. The system accuracy can be successively increased by increasing the size of the artificial neural network and using a larger number of training examples

  20. A single photon detector array with 64x64 resolution and millimetric depth accuracy for 3D imaging

    OpenAIRE

    Niclass, Cristiano; Charbon, Edoardo

    2005-01-01

    An avalanche photodiode array uses single-photon counting to perform time-of-flight range-finding on a scene uniformly hit by 100ps 250mW uncollimated laser pulses. The 32x32 pixel sensor, fabricated in a 0.8μm CMOS process uses a microscanner package to enhance the effective resolution in the application to 64x64 pixels. The application achieves a measurement depth resolution of 1.3mm to a depth of 3.75m.

  1. Method for 3D Object Reconstruction Using Several Portion of 2D Images from the Different Aspects Acquired with Image Scopes Included in the Fiber Retractor

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2012-12-01

    Full Text Available A method for 3D object reconstruction using several portions of 2D images from different aspects, acquired with image scopes included in a fiber retractor, is proposed. Experimental results show a great possibility for reconstructing a 3D object of acceptable quality on the computer from several 2D images viewed from different aspects.

  2. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR – combining techniques towards an objective multiparametric interpretation

    Directory of Open Access Journals (Sweden)

    S. Schneiderwind

    2015-09-01

    Full Text Available Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was firstly compared to results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored, allowing unbiased input for future (re-)investigations.

  3. Ryukyu Subduction Zone: 3D Geodynamic Simulations of the Effects of Slab Shape and Depth on Lattice-Preferred Orientation (LPO) and Seismic Anisotropy

    Science.gov (United States)

    Tarlow, S.; Tan, E.; Billen, M. I.

    2015-12-01

    At the Ryukyu subduction zone, seismic anisotropy observations suggest that there may be strong trench-parallel flow within the mantle wedge driven by complex 3D slab geometry. However, previous simulations have either failed to account for 3D flow or used the infinite strain axis (ISA) approximation for LPO, which is known to be inaccurate in complex flow fields. Additionally, both the depth and the shape of the Ryukyu slab are contentious. Development of strong trench-parallel flow requires low viscosity to decouple the mantle wedge from entrainment by the sinking slab. Therefore, understanding the relationship between seismic anisotropy and the accompanying flow field will better constrain the material and dynamic properties of the mantle near subduction zones. In this study, we integrate a kinematic model for calculation of LPO (D-Rex) into a buoyancy-driven, instantaneous 3D flow simulation (ASPECT), using composite non-Newtonian rheology to investigate the dependence of LPO on slab geometry and depth at the Ryukyu Trench. To incorporate the 3D flow effects, the trench and slab extend from the southern tip of Japan to the western edge of Taiwan, and the model region is approximately 1/4 of a spherical shell extending from the surface to the core-mantle boundary. In the southernmost region we vary the slab depth and shape to test for the effects of the uncertainties in the observations. We also investigate the effect of adding locally hydrated regions above the slab that affect both the mantle rheology and the development of LPO through the consequent changes in mantle flow and the dominant (weakest) slip system. We characterize how changes in the simulation conditions affect the LPO within the mantle wedge, subducting slab and sub-slab mantle and relate these to surface observations of seismic anisotropy.

  4. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    Science.gov (United States)

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of the objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  5. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    Science.gov (United States)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.

  6. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Science.gov (United States)

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of the objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  7. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    Science.gov (United States)

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  8. Depth-of-Focus Correction in Single-Molecule Data Allows Analysis of 3D Diffusion of the Glucocorticoid Receptor in the Nucleus.

    Directory of Open Access Journals (Sweden)

    Rolf Harkes

    Full Text Available Single-molecule imaging of proteins in a 2D environment like membranes has been frequently used to extract diffusive properties of multiple fractions of receptors. In a 3D environment the apparent fractions however change with observation time due to the movements of molecules out of the depth-of-field of the microscope. Here we developed a mathematical framework that allowed us to correct for the change in fraction size due to the limited detection volume in 3D single-molecule imaging. We applied our findings on the mobility of activated glucocorticoid receptors in the cell nucleus, and found a freely diffusing fraction of 0.49±0.02. Our analysis further showed that interchange between this mobile fraction and an immobile fraction does not occur on time scales shorter than 150 ms.

  9. A new 3D Moho depth model for Iran based on the terrestrial gravity data and EGM2008 model

    Science.gov (United States)

    Kiamehr, R.; Gómez-Ortiz, D.

    2009-04-01

    Knowledge of the variation of crustal thickness is essential in many applications, such as forward dynamic modelling, numerical heat flow calculations and seismological applications. Dehghani in 1984 estimated the first Moho depth model over the Iranian plateau using the simple profiling method and Bouguer gravity data. However, those data suffer from deficiencies and lack of coverage in most parts of the region. To provide a basis for an accurate analysis of the region's lithospheric stresses, we develop an up-to-date three-dimensional crustal thickness model of the Iranian Plateau using the Parker-Oldenburg iterative method. This method is based on a relationship between the Fourier transform of the gravity anomaly and a sum of Fourier transforms of powers of the interface topography. The new model is based on the new and most complete gravity database of Iran, which was produced by Kiamehr for the computation of a high-resolution geoid model for Iran. A total of 26,125 gravity data points were collected from different sources and used to generate an outlier-free 2x2 arc-minute gravity database for Iran. In the meantime, the Earth Gravitational Model EGM2008, up to degree 2160, has been developed and published by the National Geospatial-Intelligence Agency. EGM2008 incorporates improved 5x5 arc-minute gravity anomalies and has benefited from the latest GRACE-based satellite solutions. The major benefit of EGM2008 is its ability to provide precise and uniform gravity data with global coverage. Two different Moho depth models have been computed, based on the terrestrial and EGM2008 datasets. The minimum and maximum Moho depths for the land and EGM2008 models are 10.85-53.86 and 15.41-51.43 km, respectively. In general, we found a good agreement between the Moho geometry obtained using both the land and EGM2008 datasets, with an RMS of 2.7 km. Also, we compared these gravimetric Moho models with the global seismic crustal model CRUST 2.0. The differences between EGM2008 and land
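
    For reference, the Parker relation that the Parker-Oldenburg scheme builds on expresses the Fourier transform of the gravity anomaly as a series in powers of the interface undulation; a hedged LaTeX sketch (sign and normalisation conventions vary between implementations; z_0 denotes the mean interface depth, \Delta\rho the density contrast, h the undulation about the mean depth):

      \mathcal{F}[\Delta g](\mathbf{k}) \;=\; -2\pi G\,\Delta\rho\, e^{-|\mathbf{k}| z_0}
          \sum_{n=1}^{\infty} \frac{|\mathbf{k}|^{\,n-1}}{n!}\, \mathcal{F}\!\left[h^{n}\right](\mathbf{k})

    Oldenburg's inversion rearranges this to solve iteratively for the interface:

      \mathcal{F}[h] \;=\; -\frac{\mathcal{F}[\Delta g]\, e^{|\mathbf{k}| z_0}}{2\pi G\,\Delta\rho}
          \;-\; \sum_{n=2}^{\infty} \frac{|\mathbf{k}|^{\,n-1}}{n!}\, \mathcal{F}\!\left[h^{n}\right]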

  10. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless link and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo images by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.

  11. Identifying Objective EEG Based Markers of Linear Vection in Depth

    Science.gov (United States)

    Palmisano, Stephen; Barry, Robert J.; De Blasio, Frances M.; Fogarty, Jack S.

    2016-01-01

    This proof-of-concept study investigated whether a time-frequency EEG approach could be used to examine vection (i.e., illusions of self-motion). In the main experiment, we compared the event-related spectral perturbation (ERSP) data of 10 observers during and directly after repeated exposures to two different types of optic flow display (each was 35° wide by 29° high and provided 20 s of motion stimulation). Displays consisted of either a vection display (which simulated constant velocity forward self-motion in depth) or a control display (a spatially scrambled version of the vection display). ERSP data were decomposed using time-frequency Principal Components Analysis (t–f PCA). We found an increase in 10 Hz alpha activity, peaking some 14 s after display motion commenced, which was positively associated with stronger vection ratings. This followed decreases in beta activity, and was also followed by a decrease in delta activity; these decreases in EEG amplitudes were negatively related to the intensity of the vection experience. After display motion ceased, a series of increases in the alpha band also correlated with vection intensity, and appear to reflect vection- and/or motion-aftereffects, as well as later cognitive preparation for reporting the strength of the vection experience. Overall, these findings provide support for the notion that EEG can be used to provide objective markers of changes in both vection status (i.e., “vection/no vection”) and vection strength. PMID:27559328

  12. Impacts of 3-D radiative effects on satellite cloud detection and their consequences on cloud fraction and aerosol optical depth retrievals

    Science.gov (United States)

    Yang, Yuekui; di Girolamo, Larry

    2008-02-01

    We present the first examination of how 3-D radiative transfer impacts satellite cloud detection that uses a single visible-channel threshold. 3-D radiative transfer through predefined heterogeneous cloud fields embedded in a range of horizontally homogeneous aerosol fields has been carried out to generate synthetic nadir-viewing satellite images at a wavelength of 0.67 μm. The finest spatial resolution of the cloud field is 30 m. We show that 3-D radiative effects cause significant histogram overlap between the radiance distributions of clear and cloudy pixels, the degree of which depends on many factors (resolution, solar zenith angle, surface reflectance, aerosol optical depth (AOD), cloud top variability, etc.). This overlap precludes the existence of a threshold that can correctly separate all clear pixels from cloudy pixels. The region of clear/cloud radiance overlap includes moderately large (up to 5 in our simulations) cloud optical depths. Purpose-driven cloud masks, defined by different thresholds, are applied to the simulated images to examine their impact on retrieving cloud fraction and AOD. Large (up to hundreds of percent) systematic errors were observed that depended on the type of cloud mask and the factors that influence the clear/cloud radiance overlap, with a strong dependence on solar zenith angle. Different strategies for computing the domain-averaged AOD were tested, showing that the domain-averaged BRF from all clear pixels produced the smallest AOD biases with the weakest (but still large) dependence on solar zenith angle. The large dependence of the bias on solar zenith angle has serious implications for climate research that uses satellite cloud and aerosol products.
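
    To make the threshold-based cloud-mask idea concrete, the toy sketch below applies a fixed 0.67 μm reflectance threshold to a synthetic scene whose clear and cloudy reflectance histograms overlap, and reports the resulting cloud-fraction bias. All distributions, the threshold value and the true cloud fraction are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000                                   # synthetic pixels
is_cloud = rng.random(n) < 0.4                # true cloud fraction of 0.4 (assumed)

# Clear pixels are dark but brightened by aerosol; cloudy pixels are bright but
# include optically thin cloud, so the two reflectance histograms overlap.
refl = np.where(is_cloud,
                rng.normal(0.45, 0.15, n),    # cloudy-pixel reflectance (assumed)
                rng.normal(0.12, 0.05, n))    # clear-pixel reflectance (assumed)
refl = refl.clip(0.0, 1.0)

threshold = 0.25                              # single-channel cloud-mask threshold
mask = refl > threshold                       # "cloudy" according to the mask

true_cf = is_cloud.mean()
retrieved_cf = mask.mean()
print(f"true CF = {true_cf:.3f}, retrieved CF = {retrieved_cf:.3f}, "
      f"bias = {100 * (retrieved_cf - true_cf) / true_cf:+.1f} %")
```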

  13. Object-oriented philosophy in designing adaptive finite-element package for 3D elliptic differential equations

    Science.gov (United States)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving more and more attention in scientific and engineering fields, its efficient implementation remains a debated problem because of the complexity of the procedures involved. In this paper, we propose a clear C++ framework implementation to show the powerful properties of object-oriented philosophy (OOP) in designing such a complex adaptive procedure. Using the modular features of an OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation and refinement, the a-posteriori error estimator, the adaptive strategy, and the final post-processing. After these separate modules are properly designed, they are connected into a complete adaptive framework. Starting from the general elliptic differential equation, little additional effort is needed within the adaptive framework to carry out practical simulations. To show the advantages of OOP-based adaptive design, two numerical examples are tested. The first one is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated efficiently, as only a few additions are required. In the second case, an induced polarization (IP) exploration problem, the new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP approach. Finally, we believe that, based on this modular adaptive framework implemented with an OOP methodology, more advanced adaptive analysis systems will become available in the future.

  14. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  15. Reconstruction and analysis of shapes from 3D scans

    NARCIS (Netherlands)

    ter Haar, F.B.

    2009-01-01

    In this thesis we use 3D laser range scans for the acquisition, reconstruction, and analysis of 3D shapes. 3D laser range scanning has proven to be a fast and effective way to capture the surface of an object in a computer. Thousands of depth measurements represent a part of the surface geometry as

  16. Depth-selective imaging of macroscopic objects hidden behind a scattering layer using low-coherence and wide-field interferometry

    Science.gov (United States)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Ko, Hakseok; Choi, Wonshik

    2016-08-01

    Imaging systems targeting macroscopic objects tend to have poor depth selectivity. In this Letter, we present a 3D imaging system featuring a depth resolution of 200 μm, a depth scanning range of more than 1 m, and a field of view larger than 70×70 mm2. For depth selectivity, we set up an off-axis digital holographic imaging system using a light source with a coherence length of 400 μm. A prism pair was installed in the reference beam path for long-range depth scanning. We imaged macroscopic targets with multiple layers and also demonstrated imaging of targets hidden behind a scattering layer.

  17. 3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; YANG Ru-qing

    2009-01-01

    The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. 3D object visual tracking based on the SUSAN-edge-based Scale Invariant Feature (SESIF) algorithm is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF-based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF- and Lie-group-based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.

  18. A model for calculating the errors of 2D bulk analysis relative to the true 3D bulk composition of an object, with application to chondrules

    Science.gov (United States)

    Hezel, Dominik C.

    2007-09-01

    Certain problems in Geosciences require knowledge of the chemical bulk composition of objects such as minerals or lithic clasts. This 3D bulk chemical composition (bcc) is often difficult to obtain, but if the object is prepared as a thin or thick polished section, a 2D bcc can easily be determined using, for example, an electron microprobe. The 2D bcc contains an unknown error relative to the true 3D bcc. Here I present a computer program that calculates this error, which is represented as the standard deviation of the 2D bcc relative to the real 3D bcc. A requirement for such calculations is an approximate structure of the 3D object. In petrological applications, the known fabrics of rocks facilitate modeling. The size of the standard deviation depends on (1) the modal abundance of the phases, (2) the element concentration differences between phases, and (3) the distribution of the phases, i.e. the homogeneity/heterogeneity of the object considered. A newly introduced parameter "τ" is used as a measure of this homogeneity/heterogeneity. Accessory phases, which do not necessarily appear in 2D thin sections, are a second source of error, in particular if they contain high concentrations of specific elements. An abundance of only 1 vol% of an accessory phase may raise the 3D bcc of an element by up to a factor of ~8. The code can be queried as to whether a broad-beam, point, line or area analysis technique is best for obtaining the 2D bcc. No general conclusion can be drawn, as the errors of these techniques depend on the specific structure of the object considered. As an example, chondrules (rapidly solidified melt droplets of chondritic meteorites) are used. It is demonstrated that 2D bcc may be used to reveal trends in the chemistry of 3D objects.
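
    As a rough illustration of the kind of sectioning error the program quantifies (not the author's code), the following sketch builds a synthetic two-phase 3D voxel object, computes its true 3D bulk concentration of a hypothetical element, and compares it with the bulk concentrations obtained from individual 2D sections; the phase geometry and concentrations are invented, and a more clustered (heterogeneous) phase distribution would inflate the reported scatter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-phase 3D object: 0 = matrix, 1 = second phase (~20 vol%)
N = 64
obj = (rng.random((N, N, N)) < 0.2).astype(int)

# Assumed concentration (e.g. wt%) of one element in each phase
conc = np.array([2.0, 30.0])


def bulk(phase_map):
    """Modal-abundance-weighted bulk concentration of a phase map."""
    modal = np.bincount(phase_map.ravel(), minlength=2) / phase_map.size
    return float(modal @ conc)


bulk_3d = bulk(obj)                                    # true 3D bulk composition
bulk_2d = np.array([bulk(obj[z]) for z in range(N)])   # one value per 2D section

rel_err = (bulk_2d - bulk_3d) / bulk_3d
print(f"3D bulk = {bulk_3d:.2f}, std dev of 2D sections = {100 * rel_err.std():.2f} %")
```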

  19. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    Science.gov (United States)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence-free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high-quality Voronoi dual. This circumvents the accuracy losses that are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces. The 3D dual-mesh leapfrog scheme presented here is able to model both electrically and magnetically anisotropic lossy materials. This approach enables the modelling of problems of current practical interest involving structured composites and metamaterials.
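
    For readers unfamiliar with the baseline scheme being generalized, the sketch below shows the standard structured-grid Yee leapfrog update in its simplest 1D free-space form (this is the textbook algorithm, not the unstructured Delaunay/Voronoi variant of the paper); the grid size, time step and Gaussian source are illustrative only.

```python
import numpy as np

# Grid, time step and source are illustrative; free space only.
nx, nt = 400, 800
c0, dx = 3.0e8, 1.0e-3
dt = 0.99 * dx / c0                  # Courant-stable time step
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ez = np.zeros(nx)                    # E at integer nodes, integer time steps
Hy = np.zeros(nx - 1)                # H at half nodes, half-integer time steps

for n in range(nt):
    # leapfrog half step: update H from the curl of E
    Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])
    # leapfrog half step: update interior E from the curl of H
    Ez[1:-1] += dt / (eps0 * dx) * (Hy[1:] - Hy[:-1])
    # soft Gaussian source injected at the centre of the grid
    Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)

print("peak |Ez| after", nt, "steps:", np.abs(Ez).max())
```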

  20. Algorithm and System of Scanning Color 3D Objects

    Institute of Scientific and Technical Information of China (English)

    许智钦; 孙长库; 郑义忠

    2002-01-01

    This paper presents a complete system for scanning the geometry and texture of a large 3D object; automatic registration is then performed to obtain a complete, realistic 3D model. The system is composed of a line-strip laser and a color CCD camera. The scanned object is imaged twice by the CCD camera: first, the texture of the object is captured, and then its 3D information is obtained from the laser-plane equations. The color information and the 3D information are then automatically registered and merged to yield a realistic model of the object. This paper presents a practical way to implement the three-dimensional measuring method and the automatic registration of a large 3D object, and good results were obtained in experimental verification.

  1. Objective assessment and design improvement of a staring, sparse transducer array by the spatial crosstalk matrix for 3D photoacoustic tomography.

    Directory of Open Access Journals (Sweden)

    Philip Wong

    Full Text Available Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring-array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. The figures-of-merit utilized in this study were the root mean square error, peak signal-to-noise ratio, mean absolute error, and a three-dimensional structural similarity index, each derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e., spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selecting optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization.
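
    The figures-of-merit mentioned above are straightforward to compute once a normalized crosstalk matrix is available. The sketch below illustrates three of them (RMSE, MAE and PSNR) on a randomly generated stand-in matrix; the 3D structural similarity index is omitted for brevity, and the matrix size and noise level are arbitrary assumptions rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256                                         # stand-in system-matrix dimension
C = np.eye(n) + 0.05 * rng.random((n, n))       # stand-in crosstalk matrix (assumed)
C /= C.max()                                    # normalize so the peak value is 1
I = np.eye(n)

err = C - I
rmse = np.sqrt(np.mean(err ** 2))               # root mean square error
mae = np.mean(np.abs(err))                      # mean absolute error
psnr = 10 * np.log10(1.0 / np.mean(err ** 2))   # peak signal-to-noise ratio (peak = 1)

print(f"RMSE = {rmse:.4f}, MAE = {mae:.4f}, PSNR = {psnr:.1f} dB")
```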

  2. Modeling of 3-D Object Manipulation by Multi-Joint Robot Fingers under Non-Holonomic Constraints and Stable Blind Grasping

    Science.gov (United States)

    Arimoto, Suguru; Yoshida, Morio; Bae, Ji-Hun

    This paper derives a mathematical model that expresses the motion of a pair of multi-joint robot fingers with hemispherical rigid ends grasping and manipulating a 3-D rigid object with parallel flat surfaces. Rolling contacts arising between the finger ends and the object surfaces are taken into consideration and modeled as Pfaffian constraints, from which constraint forces emerge tangentially to the object surfaces. Another noteworthy difference between modeling the motion of a 3-D object and that of a 2-D object is that the instantaneous axis of rotation of the object is fixed in the 2-D case but time-varying in the 3-D case. A further difficulty that has prevented modeling of 3-D physical interactions between a pair of fingers and a rigid object lies in the treatment of the spinning motion that may arise around the axis of opposition running from the contact point between one finger end and one side of the object to the contact point on the other side. This paper shows that, once such spinning motion stops as the object mass center approaches a position just beneath the opposition axis, this cessation of spinning evokes a further non-holonomic constraint. Hence, the multi-body dynamics of the overall fingers-object system is subject to non-holonomic constraints concerning a 3-D orthogonal matrix expressing three mutually orthogonal unit vectors fixed to the object, together with an extra non-holonomic constraint that the instantaneous axis of rotation of the object is always orthogonal to the opposing axis. It is shown that Lagrange's equation of motion of the overall system can be derived without violating the causality that governs the non-holonomic constraints. This immediately suggests the possible construction of a numerical simulator of multi-body dynamics that can express the motion of the fingers and the object as they physically interact with each other. By referring to the fact that humans grasp an object in the form of precision prehension, dynamically and stably, by using an opposable force between the thumb and another

  3. Analyse subjective et évaluation objective de la qualité perceptuelle des maillages 3D

    OpenAIRE

    Torkhani, Fakhri

    2014-01-01

    3D polygonal meshes are widely used in various applications such as digital entertainment, computer-aided design and medical imaging. A mesh may undergo different types of operations, such as compression, watermarking or simplification, which introduce geometric distortions (modifications) with respect to the original version. It is important to quantify these modifications introduced to the original mesh and to evaluate the perceptual...

  4. Analyse subjective et évaluation objective de la qualité perceptuelle des maillages 3D

    OpenAIRE

    Torkhani, Fakhri

    2014-01-01

    3D polygonal meshes are widely used in various applications such as digital entertainment, computer-aided design and medical imaging. A mesh may undergo different types of operations, such as compression, watermarking or simplification, which introduce geometric distortions (modifications) with respect to the original version. It is important to quantify these modifications introduced to the original mesh and to evaluate the...

  5. Comparison of publically available Moho depth and crustal thickness grids with newly derived grids by 3D gravity inversion for the High Arctic region.

    Science.gov (United States)

    Lebedeva-Ivanova, Nina; Gaina, Carmen; Minakov, Alexander; Kashubin, Sergey

    2016-04-01

    We derived Moho depth and crustal thickness for the High Arctic region by a 3D forward and inverse gravity modelling method in the spectral domain (Minakov et al. 2012), using a lithosphere thermal gravity anomaly correction (Alvey et al., 2008), a vertical density variation for the sedimentary layer, and lateral crustal density variations. Recently updated grids of bathymetry (Jakobsson et al., 2012), gravity anomaly (Gaina et al., 2011) and dynamic topography (Spasojevic & Gurnis, 2012) were used as input data for the algorithm. The TeMAr sedimentary thickness grid (Petrov et al., 2013) was modified according to the most recently published seismic data, re-gridded and utilized as input data. Other input parameters for the algorithm were calibrated using crustal-scale seismic profiles. The results are numerically compared with publicly available grids of Moho depth and crustal thickness for the High Arctic region (the CRUST 1 and GEMMA global grids; the deep Arctic Ocean grids by Glebovsky et al., 2013) and with crustal-scale seismic profiles. The global grids provide a coarser resolution of 0.5-1.0 geographic degrees and are not focused on the High Arctic region. Our grids better capture all the main features of the region and show a smaller error relative to the seismic crustal profiles compared to the CRUST 1 and GEMMA grids. The results of 3D gravity modelling by Glebovsky et al. (2013), using a separated-geostructures approach, also show a good fit with the seismic profiles; however, these grids cover only the deep part of the Arctic Ocean. Alvey A, Gaina C, Kusznir NJ, Torsvik TH (2008). Integrated crustal thickness mapping and plate reconstructions for the high Arctic. Earth Planet Sci Lett 274:310-321. Gaina C, Werner SC, Saltus R, Maus S (2011). Circum-Arctic mapping project: new magnetic and gravity anomaly maps of the Arctic. Geol Soc Lond Mem 35, 39-48. Glebovsky V.Yu., Astafurova E.G., Chernykh A.A., Korneva M.A., Kaminsky V.D., Poselov V.A. (2013). Thickness of the Earth's crust in the

  6. What is 3D good for? A review of human performance on stereoscopic 3D displays

    Science.gov (United States)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human-factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human-factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as necessary to ensure good performance.

  7. Developing a 3-D Digital Heritage Ecosystem: from object to representation and the role of a virtual museum in the 21st century

    Directory of Open Access Journals (Sweden)

    Fred Limp

    2011-07-01

    Full Text Available This article addresses the application of high-precision 3-D recording methods to heritage materials (portable objects), the technical processes involved, the various digital products and the role of 3-D recording in larger questions of scholarship and public interpretation. It argues that the acquisition and creation of digital representations of heritage must be part of a comprehensive research infrastructure (a digital ecosystem) that focuses on all of the elements involved, including (a) recording methods and metadata, (b) digital object discovery and access, (c) citation of digital objects, (d) analysis and study, (e) digital object reuse and repurposing, and (f) the critical role of a national/international digital archive. The article illustrates these elements and their relationships using two case studies that involve similar approaches to the high-precision 3-D digital recording of portable archaeological objects, from a number of late pre-Columbian villages and towns in the mid-central US (c. 1400 CE) and from the Egyptian site of Amarna, the Egyptian Pharaoh Akhenaten's capital (c. 1300 BCE).

  8. The Scheme and the Preliminary Test of Object-Oriented Simultaneous 3D Geometric and Physical Change Detection Using GIS-guided Knowledge

    Directory of Open Access Journals (Sweden)

    Chang LI

    2013-07-01

    Full Text Available Current methods of remotely sensed image change detection generally assume that the DEM of the surface objects does not change. However, for geological disaster areas (such as landslides, mudslides and avalanches), this assumption does not hold, and the traditional approach is being challenged. Thus, the theory of change detection urgently needs to be extended from two dimensions (2D) to three dimensions (3D). This paper presents an innovative scheme for change detection: object-oriented simultaneous 3D geometric and physical change detection (OOS3DGPCD) using GIS-guided knowledge. This aim is reached by realizing the following specific objectives: (a) to develop a set of automatic multi-feature matching and registration methods; (b) to propose an approach for simultaneously detecting changes in 3D geometric and physical attributes based on an object-oriented strategy; (c) to develop a quality control method for OOS3DGPCD; (d) to implement the newly proposed OOS3DGPCD method by designing algorithms and developing a prototype system. For aerial remotely sensed images of YingXiu, Wenchuan, preliminary experimental results of 3D change detection are shown to verify our approach.

  9. 3D and Education

    Science.gov (United States)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which field? For whom?

  10. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2015-07-01

    Full Text Available Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimate of moving objects. The results show that moving objects can be correctly detected, and accurately tracked, over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
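
    As a point of reference for the "bank of Kalman filters" mentioned above, the sketch below implements a single constant-velocity Kalman filter tracking one object centroid in the ground plane (one such filter per tracked object would form the bank). The state layout, time step and noise covariances are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

dt = 0.1                                     # frame interval [s] (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only the (x, y) centroid is observed
Q = 0.05 * np.eye(4)                         # process noise covariance (assumed)
R = 0.25 * np.eye(2)                         # measurement noise covariance (assumed)

x = np.zeros(4)                              # state: (x, y, vx, vy)
P = np.eye(4)                                # state covariance


def kf_step(x, P, z):
    """One predict/update cycle for a measured centroid z = (x, y)."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P


# toy track: three noisy centroid observations of one moving object
for z in [np.array([1.0, 0.5]), np.array([1.9, 1.1]), np.array([3.1, 1.4])]:
    x, P = kf_step(x, P, z)
print("estimated velocity:", x[2:])
```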

  11. An in-depth spectroscopic examination of molecular bands from 3D hydrodynamical model atmospheres. I. Formation of the G-band in metal-poor dwarf stars

    Science.gov (United States)

    Gallagher, A. J.; Caffau, E.; Bonifacio, P.; Ludwig, H.-G.; Steffen, M.; Spite, M.

    2016-09-01

    Context. Recent developments in the three-dimensional (3D) spectral synthesis code Linfor3D have meant that for the first time, large spectral wavelength regions, such as molecular bands, can be synthesised with it in a short amount of time. Aims: A detailed spectral analysis of the synthetic G-band for several dwarf turn-off-type 3D atmospheres (5850 ≲ Teff [ K ] ≲ 6550, 4.0 ≤ log g ≤ 4.5, - 3.0 ≤ [Fe/H] ≤-1.0) was conducted, under the assumption of local thermodynamic equilibrium. We also examine carbon and oxygen molecule formation at various metallicity regimes and discuss the impact it has on the G-band. Methods: Using a qualitative approach, we describe the different behaviours between the 3D atmospheres and the traditional one-dimensional (1D) atmospheres and how the different physics involved inevitably leads to abundance corrections, which differ over varying metallicities. Spectra computed in 1D were fit to every 3D spectrum to determine the 3D abundance correction. Results: Early analysis revealed that the CH molecules that make up the G-band exhibited an oxygen abundance dependency; a higher oxygen abundance leads to weaker CH features. Nitrogen abundances showed zero impact to CH formation. The 3D corrections are also stronger at lower metallicity. Analysis of the 3D corrections to the G-band allows us to assign estimations of the 3D abundance correction to most dwarf stars presented in the literature. Conclusions: The 3D corrections suggest that A(C) in carbon-enhanced metal-poor (CEMP) stars with high A(C) would remain unchanged, but would decrease in CEMP stars with lower A(C). It was found that the C/O ratio is an important parameter to the G-band in 3D. Additional testing confirmed that the C/O ratio is an equally important parameter for OH transitions under 3D. This presents a clear interrelation between the carbon and oxygen abundances in 3D atmospheres through their molecular species, which is not seen in 1D.

  12. Coherent digital demodulation of single-camera N-projections for 3D-object shape measurement: co-phased profilometry.

    Science.gov (United States)

    Servin, M; Garnica, G; Estrada, J C; Quiroga, A

    2013-10-21

    Fringe projection profilometry is a well-known technique for digitizing 3-dimensional (3D) objects, and it is widely used in robotic vision and industrial inspection. Probably the single most important problem in single-camera, single-projection profilometry is the shadows and specular reflections generated by the 3D object under analysis. Here, a single camera along with N fringe projections is digitally and coherently demodulated in a single step, solving the shadow and specular reflection problem. Co-phased profilometry coherently phase-demodulates a whole set of N fringe-pattern perspectives in a single demodulation and unwrapping process. The mathematical theory behind digitally co-phasing N fringe patterns is similar to that behind co-phasing a segmented N-mirror telescope.
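
    The building block underlying coherent demodulation of several phase-shifted fringe patterns is the standard N-step phase-recovery formula, sketched below on a synthetic data set. This is not the authors' co-phasing algorithm (which combines N projection perspectives and handles shadows); it only illustrates how a wrapped phase map is recovered as the argument of a complex sum over N frames, with the fringe model and image size invented for the test.

```python
import numpy as np


def n_step_phase(frames):
    """Wrapped phase from N fringe images with phase shifts 2*pi*n/N.

    frames: array of shape (N, H, W); returns a phase map in (-pi, pi]."""
    N = frames.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
    return -np.arctan2(num, den)


# self-test on a synthetic parabolic phase "object" (all parameters invented)
H, W, N = 64, 64, 4
yy, xx = np.mgrid[0:H, 0:W]
true_phase = 0.002 * (xx - W / 2) ** 2
frames = np.stack([1.0 + 0.8 * np.cos(true_phase + 2 * np.pi * k / N)
                   for k in range(N)])
wrapped = n_step_phase(frames)
print("max wrapped-phase error:",
      np.abs(np.angle(np.exp(1j * (wrapped - true_phase)))).max())
```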

  13. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    Science.gov (United States)

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning.

  14. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    Science.gov (United States)

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  15. Spatial Carrier Bi-frequency Fourier Transform Profilometry for the 3-D Shape Measurement of Object with Discontinuous Height Steps

    Institute of Scientific and Technical Information of China (English)

    ZHONG Jingang; DI Hongwei; ZHANG Yonglin

    2000-01-01

    The combination of a shearing interferometer, Fourier-transform profilometry, and phase unwrapping by a lookup-table method has resulted in a new and more powerful method of measuring surface profiles. The technique permits the three-dimensional shape measurement of objects that have discontinuous height steps. Experimental results have demonstrated the validity of the principle.

  16. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    Science.gov (United States)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). Evidently they deformed under oriented stress, which earlier work attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object-based image analysis (OBIA) has previously been used successfully in remote sensing for 2D images; the present study represents the first time that the method has been used for the reconstruction of three-dimensional geological objects. First, a reference (gold standard) was created manually by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used, and a rule set was developed for the automated reconstruction. The strength of OBIA was to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM). The SVM simultaneously drastically reduced the number of halites: of 184 OBIA objects, 67 well-shaped ones remained, which comes close to the 52 manually pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM were compared to the reference, i.e. the gold standard, and state-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468 Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the Dead Sea

  17. Instantaneous 3D EEG Signal Analysis Based on Empirical Mode Decomposition and the Hilbert–Huang Transform Applied to Depth of Anaesthesia

    Directory of Open Access Journals (Sweden)

    Mu-Tzu Shih

    2015-02-01

    Full Text Available Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which the anaesthetic is administered during surgery. The DoA can be monitored by observing the patient's electroencephalography (EEG) signals during the surgical procedure. Typically, high-frequency EEG signals indicate that the patient is conscious, while low-frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient's EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of the EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs); the Hilbert spectral analysis method is then used to obtain instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time-series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective than the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
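
    The EMD-plus-Hilbert step described above can be prototyped in a few lines. The sketch below assumes the third-party PyEMD package ("EMD-signal" on PyPI) for the decomposition and uses scipy's analytic-signal routine for the Hilbert spectral step; the test signal is a synthetic two-tone trace standing in for an EEG channel, and the sampling rate is an arbitrary choice.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD              # assumed third-party dependency (EMD-signal)

fs = 256.0                                         # sampling rate [Hz] (assumed)
t = np.arange(0, 4, 1 / fs)
# synthetic stand-in for one EEG channel: 10 Hz "alpha" plus a 3 Hz "delta" tone
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

imfs = EMD().emd(eeg)                              # intrinsic mode functions

# Hilbert spectral analysis of the first (highest-frequency) IMF
analytic = hilbert(imfs[0])
inst_amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
print(f"median instantaneous frequency of IMF 1: {np.median(inst_freq):.1f} Hz")
```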

  18. An in-depth spectroscopic examination of molecular bands from 3D hydrodynamical model atmospheres I. Formation of the G-band in metal-poor dwarf stars

    CERN Document Server

    Gallagher, A J; Bonifacio, P; Ludwig, H -G; Steffen, M; Spite, M

    2016-01-01

    Recent developments in the three-dimensional (3D) spectral synthesis code Linfor3D have meant that, for the first time, large spectral wavelength regions, such as molecular bands, can be synthesised with it in a short amount of time. A detailed spectral analysis of the synthetic G-band for several dwarf turn-off-type 3D atmospheres (5850 <= T_eff [K] <= 6550, 4.0 <= log g <= 4.5, -3.0 <= [Fe/H] <= -1.0) was conducted, under the assumption of local thermodynamic equilibrium. We also examine carbon and oxygen molecule formation at various metallicity regimes and discuss the impact it has on the G-band. Using a qualitative approach, we describe the different behaviours between the 3D atmospheres and the traditional one-dimensional (1D) atmospheres and how the different physics involved inevitably leads to abundance corrections, which differ over varying metallicities. Spectra computed in 1D were fit to every 3D spectrum to determine the 3D abundance correction. Early analysis revealed that the ...

  19. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer. PMID:27264029

  20. 3D Multi-Object Segmentation of Cardiac MSCT Imaging by using a Multi-Agent Approach

    Science.gov (United States)

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernandez, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  1. Visual discrimination of rotated 3D objects in Malawi cichlids (Pseudotropheus sp.): a first indication for form constancy in fishes.

    Science.gov (United States)

    Schluessel, V; Kraniotakes, H; Bleckmann, H

    2014-03-01

    Fish move in a three-dimensional environment in which it is important to discriminate between stimuli varying in colour, size, and shape. It is also advantageous to be able to recognize the same structures or individuals when presented from different angles, such as back to front or front to side. This study assessed visual discrimination abilities of rotated three-dimensional objects in eight individuals of Pseudotropheus sp. using various plastic animal models. All models were displayed in two choice experiments. After successful training, fish were presented in a range of transfer tests with objects rotated in the same plane and in space by 45° and 90° to the side or to the front. In one experiment, models were additionally rotated by 180°, i.e., shown back to front. Fish showed quick associative learning and with only one exception successfully solved and finished all experimental tasks. These results provide first evidence for form constancy in this species and in fish in general. Furthermore, Pseudotropheus seemed to be able to categorize stimuli; a range of turtle and frog models were recognized independently of colour and minor shape variations. Form constancy and categorization abilities may be important for behaviours such as foraging, recognition of predators, and conspecifics as well as for orienting within habitats or territories. PMID:23982620

  2. An object-oriented 3D nodal finite element solver for neutron transport calculations in the Descartes project

    Energy Technology Data Exchange (ETDEWEB)

    Akherraz, B.; Lautard, J.J. [CEA Saclay, Dept. Modelisation de Systemes et Structures, Serv. d' Etudes des Reacteurs et de Modelisation Avancee (DMSS/SERMA), 91 - Gif sur Yvette (France); Erhard, P. [Electricite de France (EDF), Dir. de Recherche et Developpement, Dept. Sinetics, 92 - Clamart (France)

    2003-07-01

    In this paper we present two applications of the nodal finite elements developed by Hennart and del Valle, first to three-dimensional Cartesian meshes and then to two-dimensional hexagonal meshes. This work has been achieved within the framework of the DESCARTES project, a co-development effort by the 'Commissariat a l'Energie Atomique' (CEA) and 'Electricite de France' (EDF) to develop a toolbox for reactor core calculations based on object-oriented programming. The general structure of the project follows the object-oriented method. By using a mapping technique proposed in Schneider's thesis and by del Valle and Mund, we show how this structure allows an easy implementation of the hexagonal case starting from the Cartesian case. The main attraction of this methodology is the possibility of a pin-by-pin representation obtained by dividing each lozenge into smaller ones. Furthermore, we explore the use of unstructured quadrangles to treat the circular geometry within a hexagon. Nevertheless, in the hexagonal case, the acceleration of the internal iterations by DSA (Diffusion Synthetic Acceleration) or TSA remains to be implemented. (authors)

  3. Contextual effects of scene on the visual perception of object orientation in depth.

    Directory of Open Access Journals (Sweden)

    Ryosuke Niimi

    Full Text Available We investigated the effect of background scene on the human visual perception of the depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When the scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, the evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or object.

  4. 3D Spectroscopy of Local Luminous Compact Blue Galaxies: Kinematic Maps of a Sample of 22 Objects

    CERN Document Server

    Pérez-Gallego, J; Castillo-Morales, A; Gallego, J; Castander, F J; Garland, C A; Gruel, N; Pisano, D J; Zamorano, J

    2011-01-01

    We use three-dimensional optical spectroscopy observations of a sample of 22 local Luminous Compact Blue Galaxies (LCBGs) to create kinematic maps. By means of these, we classify the kinematics of these galaxies into three different classes: rotating disk (RD), perturbed rotation (PR), and complex kinematics (CK). We find 48% are RDs, 28% are PRs, and 24% are CKs. RDs show rotational velocities that range between ~50 and ~200 km s^-1, and dynamical masses that range between ~1×10^9 and ~3×10^10 M⊙. We also address the following two fundamental questions through the study of the kinematic maps: (i) What processes are triggering the current starburst in LCBGs? We search our maps of the galaxy velocity fields for signatures of recent interactions and close companions that may be responsible for the enhanced star formation in our sample. We find 5% of objects show evidence of a recent major merger, 10% of a minor merger, and 45% of a companion. This argues in favor...

  5. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    Science.gov (United States)

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-06-01

    Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue.

  6. How 3-D Movies Work

    Institute of Scientific and Technical Information of China (English)

    吕铁雄

    2011-01-01

    Most people see out of two eyes. This is a basic fact of humanity, but it's what makes possible the illusion of depth that 3-D movies create. Human eyes are spaced about two inches apart, meaning that each eye gives the brain a slightly different perspective on the same object. The brain then uses this variance to quickly determine an object's distance.

  7. Infant manual performance during reaching and grasping for objects moving in depth

    Directory of Open Access Journals (Sweden)

    Erik eDomellöf

    2015-08-01

    Full Text Available Few studies have investigated manual performance and asymmetries in infants when reaching and grasping for objects moving in directions other than across the fronto-parallel plane. The present preliminary study explored manual object-oriented behavioral strategies and hand-side preference in 8- and 10-month-old infants during reaching and grasping for objects approaching in depth from three positions (midline, and 27° diagonally from the left and right of midline). Effects of task constraints were further examined by using objects of three different types and two sizes. The study also involved measurements of hand opening prior to grasping. Additionally, general hand preference was assessed with a dedicated handedness test. Regardless of object starting position, the 8-month-old infants predominantly displayed right-handed reaches for objects approaching in depth. In contrast, the older infants showed more varied strategies and performed more ipsilateral reaches in correspondence with the side of the approaching object. Conversely, 10-month-old infants were more successful than the younger infants in grasping the objects, independent of object starting position. The findings support the possibility of a shared underlying mechanism, in that infant hand-use strategies when reaching and grasping for objects moving in depth are similar to those reported in earlier studies using objects moving along a horizontal path and vertically moving objects. Still, initiation times of reaching onset were generally long in the present study, indicating that the object motion paths seemingly affected how the infants perceived the intrinsic properties and spatial locations of the objects, possibly with an effect on motor planning. Findings are further discussed in relation to future investigations of infant reaching and grasping for objects approaching in depth.

  8. Depth

    NARCIS (Netherlands)

    Koenderink, J.J.; Van Doorn, A.J.; Wagemans, J.

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the f

  9. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    Science.gov (United States)

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitative and assessment process of the gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D-Gait Analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6 Minutes Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics on the sagittal plane revealed statistically significant increase in the range of motion during the hip flex-extension. Our data suggest that the combined program RAGT+PT induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D-GA ensures a fully objective rehabilitative program. PMID:26737310

  10. Joint spatial-depth feature pooling for RGB-D object classification

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    RGB-D cameras can provide effective support, through an additional depth cue, for many RGB-D perception tasks beyond what traditional RGB information allows. However, current feature representations based on RGB-D cameras utilize depth information only to extract local features, without considering it for improving the robustness and discriminability of the feature representation by merging depth cues into feature pooling. The spatial pyramid model (SPM) has become the standard protocol for splitting the 2D image plane into sub-regions for feature pooling in RGB-D object classification. We argue that SPM may...

  11. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
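
    For the stereo-vision route mentioned above, depth follows from triangulating matched pixels in a rectified camera pair: Z = f·B/d, with focal length f (in pixels), baseline B and disparity d. The sketch below is a minimal, hedged illustration of that relation with invented camera parameters; it does not implement the correspondence search itself.

```python
import numpy as np

f_px = 1200.0              # focal length in pixels (assumed)
baseline = 0.10            # camera separation in metres (assumed)

# disparities (in pixels) of four matched points from a rectified stereo pair
disparity = np.array([[40.0, 20.0],
                      [10.0,  5.0]])

depth = f_px * baseline / disparity        # Z = f * B / d, in metres
print(depth)                               # larger disparity -> closer point
```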

  12. Human Object Recognition Using Colour and Depth Information from an RGB-D Kinect Sensor

    OpenAIRE

    Southwell, Benjamin John; Fang, Gu

    2013-01-01

    Human object recognition and tracking is important in robotics and automation. The Kinect sensor and its SDK have provided a reliable human tracking solution where a constant line of sight is maintained. However, if the human object is lost from sight during the tracking, the existing method cannot recover and resume tracking the previous object correctly. In this paper, a human recognition method is developed based on colour and depth information that is provided from any RGB‐D sensor. In pa...

  13. 3D printing for dummies

    CERN Document Server

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  14. Method for the determination of the modulation transfer function (MTF) in 3D x-ray imaging systems with focus on correction for finite extent of test objects

    Science.gov (United States)

    Schäfer, Dirk; Wiegert, Jens; Bertram, Matthias

    2007-03-01

    It is well known that rotational C-arm systems are capable of providing 3D tomographic X-ray images with much higher spatial resolution than conventional CT systems. Using flat X-ray detectors, the pixel size of the detector typically is in the range of the size of the test objects. Therefore, the finite extent of the "point" source cannot be neglected for the determination of the MTF. A practical algorithm has been developed that includes bias estimation and subtraction, averaging in the spatial domain, and correction for the frequency content of the imaged bead or wire. Using this algorithm, the wire and the bead method are analyzed for flat detector based 3D X-ray systems with the use of standard CT performance phantoms. Results on both experimental and simulated data are presented. It is found that the approximation of applying the analysis of the wire method to a bead measurement is justified within 3% accuracy up to the first zero of the MTF.
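
    The correction for the finite extent of the test object can be sketched as a division in the frequency domain: the measured MTF is the product of the system MTF and the transform of the object's projected profile. The wire profile and the cut-off threshold below are assumptions for illustration, not the authors' exact algorithm.

        # Finite-size correction sketch: divide the measured MTF by the magnitude
        # of the Fourier transform of the test object's projected profile.
        import numpy as np

        def object_mtf(diameter_mm, freqs_per_mm, n=4096):
            """|FT| of the projected profile of a wire (chord length of a cylinder)."""
            x = np.linspace(-diameter_mm, diameter_mm, n)
            r = diameter_mm / 2.0
            profile = np.where(np.abs(x) < r, 2.0 * np.sqrt(np.maximum(r**2 - x**2, 0.0)), 0.0)
            profile /= profile.sum()
            dx = x[1] - x[0]
            spectrum = np.abs(np.fft.rfft(profile))
            f = np.fft.rfftfreq(n, d=dx)                     # cycles per mm
            return np.interp(freqs_per_mm, f, spectrum / spectrum[0])

        def correct_mtf(measured_mtf, freqs_per_mm, wire_diameter_mm):
            correction = object_mtf(wire_diameter_mm, freqs_per_mm)
            ratio = measured_mtf / np.maximum(correction, 1e-12)
            # Discard frequencies beyond the first zero of the object MTF,
            # where the correction becomes unreliable.
            return np.where(correction > 0.05, ratio, np.nan)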

  15. The practice of physics teaching to carry out the 3D object%物理教学落实三维目标的实践研究

    Institute of Scientific and Technical Information of China (English)

    魏淑芳

    2015-01-01

    With the reform of the new curriculum, physics education in China's higher vocational colleges has paid significantly more attention to the three-dimensional teaching objectives. However, scholars still hold many different views on how to understand and implement these objectives in higher vocational physics teaching: many higher vocational colleges continue to emphasize physical skills and explicit knowledge, and have not yet attached importance to cultivating students' emotional and scientific attitudes in physics teaching, which to a large extent hinders the development of students' scientific literacy. Based on the theoretical foundation of the three-dimensional objectives and the realities of higher vocational physics teaching, this paper uses actual cases to analyse how these objectives are being implemented, so as to promote the reform of the new curriculum.

  16. Depth position detection for fast moving objects in sealed microchannel utilizing chromatic aberration.

    Science.gov (United States)

    Lin, Che-Hsin; Su, Shin-Yu

    2016-01-01

    This research reports a novel method for depth position measurement of fast moving objects inside a microfluidic channel based on the chromatic aberration effect. Two band-pass filters and two avalanche photodiodes (APDs) are used to rapidly detect the scattered light from the passing object. Chromatic aberration causes light of different wavelengths to focus at different depth positions in a microchannel. The intensity ratio of two selected bands, 430 nm-470 nm (blue band) and 630 nm-670 nm (red band), scattered from the passing object becomes a significant index for the depth information of the passing object. Results show that microspheres with sizes of 20 μm and 2 μm can be resolved while using PMMA (Abbe number, V = 52) and BK7 (V = 64) as the chromatic aberration lens, respectively. The throughput of the developed system is greatly enhanced by the highly sensitive APDs used as optical detectors. Human erythrocytes are also successfully detected without fluorescence labeling at a high flow velocity of 2.8 mm/s. With this approach, quantitative measurement of the depth position of rapidly moving objects inside a sealed microfluidic channel can be achieved in a simple and low-cost way. PMID:26858810
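
    The ratio-based depth readout described above can be sketched as a simple calibration lookup; the calibration values, photon counts, and function name below are placeholders, not measured data.

        # Depth from the blue/red scattering ratio via a calibration curve.
        import numpy as np

        # Hypothetical calibration: depth in the channel (um) vs. blue/red intensity ratio.
        CAL_DEPTH_UM = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
        CAL_RATIO    = np.array([2.1, 1.6, 1.2, 0.9, 0.7, 0.55])   # e.g. from reference beads

        def depth_from_ratio(blue_counts, red_counts):
            """Estimate the depth position of one scattering event from APD counts."""
            ratio = blue_counts / max(red_counts, 1e-9)
            # The calibration ratio decreases monotonically with depth, so reverse
            # both arrays to give np.interp an increasing x-axis.
            return float(np.interp(ratio, CAL_RATIO[::-1], CAL_DEPTH_UM[::-1]))

        print(depth_from_ratio(blue_counts=1.4e4, red_counts=1.1e4))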

  17. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.

  18. ToF-SIMS depth profiling of cells: z-correction, 3D imaging, and sputter rate of individual NIH/3T3 fibroblasts.

    Science.gov (United States)

    Robinson, Michael A; Graham, Daniel J; Castner, David G

    2012-06-01

    Proper display of three-dimensional time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data of complex, nonflat samples requires a correction of the data in the z-direction. Inaccuracies in displaying three-dimensional ToF-SIMS data arise from projecting data from a nonflat surface onto a 2D image plane, as well as possible variations in the sputter rate of the sample being probed. The current study builds on previous studies by creating software written in Matlab, the ZCorrectorGUI (available at http://mvsa.nb.uw.edu/), to apply the z-correction to entire 3D data sets. Three-dimensional image data sets were acquired from NIH/3T3 fibroblasts by collecting ToF-SIMS images, using a dual beam approach (25 keV Bi3+ for analysis cycles and 20 keV C60^2+ for sputter cycles). The entire data cube was then corrected by using the new ZCorrectorGUI software, producing accurate chemical information from single cells in 3D. For the first time, a three-dimensional corrected view of a lipid-rich subcellular region, possibly the nuclear membrane, is presented. Additionally, the key assumption of a constant sputter rate throughout the data acquisition was tested by using ToF-SIMS and atomic force microscopy (AFM) analysis of the same cells. For the dried NIH/3T3 fibroblasts examined in this study, the sputter rate was found to not change appreciably in x, y, or z, and the cellular material was sputtered at a rate of approximately 10 nm per 1.25 × 10^13 C60^2+ ions/cm^2. PMID:22530745
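
    A z-correction of the kind described above can be sketched as a per-pixel shift of the data cube by the local surface height, with sputter cycles converted to depth through a constant sputter rate; the array shapes and shifting convention below are assumptions, not the ZCorrectorGUI code.

        # Shift each (x, y) column so depth is measured in absolute z rather than
        # from the (non-flat) sample surface, assuming a constant sputter rate.
        import numpy as np

        def z_correct(cube, surface_height_nm, nm_per_cycle):
            """cube: (n_cycles, ny, nx) ion image stack acquired over sputter cycles;
            surface_height_nm: (ny, nx) surface topography (e.g. from AFM)."""
            n_cycles, ny, nx = cube.shape
            # Taller regions keep zero shift; lower regions are pushed deeper in z.
            shift_cycles = np.round((surface_height_nm.max() - surface_height_nm)
                                    / nm_per_cycle).astype(int)
            depth_axis_nm = np.arange(n_cycles + shift_cycles.max()) * nm_per_cycle
            corrected = np.zeros((depth_axis_nm.size, ny, nx))
            for y in range(ny):
                for x in range(nx):
                    s = shift_cycles[y, x]
                    corrected[s:s + n_cycles, y, x] = cube[:, y, x]
            return depth_axis_nm, corrected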

  19. Combining depth analysis with surface morphology analysis to analyse the prehistoric painted pottery from Majiayao Culture by confocal 3D-XRF

    Science.gov (United States)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Lin, Xue; Chen, Man; Peng, Shiqi; Yang, Kui; Wang, Jinbang

    2016-04-01

    The Majiayao Culture (3300 BC-2900 BC) formed one of the three painted pottery centres of the Yellow River basin, China, in prehistoric times. Painted pottery from this period is famous for its exquisite workmanship and meticulous painting. Studying the layer structure and element distribution of the paint on the pottery is conducive to investigating its workmanship, which is important for archaeological research. However, the most common analysis methods are destructive. To investigate the layers of paint on the pottery nondestructively, a confocal three-dimensional micro-X-ray fluorescence set-up combined with two individual polycapillary lenses has been used to analyse two painted pottery fragments. Nondestructive elemental depth analyses and surface topographic analysis were performed. The elemental depth profiles of Mn, Fe and Ca obtained from these measurements were consistent with those obtained using an optical microscope. The depth profiles show that there are layer structures in both samples. The images show that the distribution of Ca is approximately homogeneous in both painted and unpainted regions. In contrast, Mn appeared only in the painted regions. Meanwhile, the distributions of Fe in the painted and unpainted regions were not the same. The surface topography shows that the pigment of the dark-brown regions was coated above that of the brown regions. These conclusions allowed the painting process to be inferred.

  20. An approach based on defense-in-depth and diversity (3D) for the reliability assessment of digital instrument and control systems of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Paulo Adriano da; Saldanha, Pedro L.C., E-mail: pasilva@cnen.gov.b, E-mail: Saldanha@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coord. Geral de Reatores Nucleares; Melo, Paulo F. Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao em Engenharia. Programa de Engenharia Nuclear; Araujo, Ademir L. de [Associacao Brasileira de Ensino Universitario (UNIABEU), Angra dos Reis, RJ (Brazil)

    2011-07-01

    The adoption of digital instrumentation and control (I and C) technology has been slower in nuclear power plants. The reason has been the unfruitful efforts to obtain evidence proving that digital I and C systems can be used in nuclear safety systems, for example the Reactor Protection System (RPS), while ensuring the proper operation of all their functions. This technology offers a potential improvement for safety and reliability. However, there is still no consensus about the model to be adopted for digital system software to be used in reliability studies. This paper presents the 3D methodology approach to assess digital I and C reliability. It is based on the study of operational events occurring in NPPs. It is easy to identify, in general, the level of I and C system reliability, showing its key vulnerabilities and enabling regulatory actions to be traced to minimize or avoid them. This approach makes it possible to identify the main types of digital I and C system failure, with the potential for common cause failures, as well as to evaluate the dominant failure modes. The MAFIC-D software was developed to assist the implementation of the relationships between the reliability criteria, the analysis of relationships and data collection. The results obtained through this tool proved to be satisfactory; they complement the regulatory decision-making process for licensing digital I and C in NPPs and can still be used to monitor the performance of digital I and C after licensing, during the lifetime of the system, providing the basis for the elaboration of checklists for regulatory inspections. (author)

  1. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation

    International Nuclear Information System (INIS)

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans become difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select, the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight into the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and
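
    The weighted-sum form of such a multi-objective beam-on time problem can be sketched as a non-negative least-squares fit; the matrix names, objective terms, and weights below are illustrative assumptions, not the authors' planning system.

        # Weighted-sum sketch: minimise
        #   w_t * ||D_target t - d_presc||^2 + w_o * ||D_oar t||^2   subject to t >= 0,
        # where t are beam-on times. Sweeping the weights traces Pareto-optimal
        # trade-offs between target coverage and organ-at-risk dose.
        import numpy as np
        from scipy.optimize import nnls

        def solve_beam_on_times(D_target, d_presc, D_oar, w_target, w_oar):
            """D_target: (n_target_voxels, n_beams) dose-influence matrix;
            d_presc: prescribed target dose; D_oar: OAR dose-influence matrix."""
            A = np.vstack([np.sqrt(w_target) * D_target,
                           np.sqrt(w_oar) * D_oar])
            b = np.concatenate([np.sqrt(w_target) * d_presc,
                                np.zeros(D_oar.shape[0])])
            t, _residual = nnls(A, b)          # non-negative beam-on times
            return t

        # One Pareto point per weight pair; the planner navigates among them, e.g.
        # for w in (0.1, 1.0, 10.0): plans.append(solve_beam_on_times(Dt, dp, Do, 1.0, w))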

  2. Fluid migration associated with allochthonous salt in the Northern Gulf of mexico: an analysis using 3D depth migrated seismic data

    Energy Technology Data Exchange (ETDEWEB)

    House, William H.; Pritchett, John A. [Amoco Production Co. (United States)

    1995-12-31

    The emplacement of allochthonous salt bodies in the Northern Gulf of Mexico, and their subsequent deformation to form secondary salt features, involves the upward movement of salt along discrete feeder conduits. The detachment of allochthonous salt from a deeper source results in the collapse of these conduits. Structural disruption associated with this collapse creates a permeability pathway that allows enhanced fluid migration from depth into the shallower section. Some of the high-pressure fluids migrating upward along these permeability conduits will impinge on a permeability barrier created by the horizontal to sub-horizontal base of allochthonous salt sheets. Additional high-pressure fluids associated with shale compaction and dewatering will also move upward to the base-of-salt permeability barrier. The constant influx of high-pressure fluids into the zone immediately below the salt prevents the shale in this zone from undergoing normal compaction, resulting in the formation of a lithologically distinct gumbo zone. This gumbo zone has been encountered in many of the subsalt wells drilled in the Gulf of Mexico. Abnormally high pore pressures are often associated with this gumbo zone beneath the salt sheets covering the southern shelf area, offshore Louisiana. Formation pressure gradients within this zone can be as much as 0.04 psi/ft (0.8 ppg) above the regional pressure gradient. (author). 4 refs., 1 fig

  3. CNS Orientations, Safety Objectives and Implementation of the Defence in Depth Concept

    International Nuclear Information System (INIS)

    Full text: The 6th Review Meeting of the Convention on Nuclear Safety (CNS) is convened in Vienna next year for two weeks, from Monday March 24th to Friday April 4th 2014. The consequences of and the lessons learnt from the accident that occurred at the Fukushima Daiichi nuclear power plant will be a major issue. The 2nd Extraordinary Meeting of the CNS in August 2012 was totally devoted to the Fukushima Daiichi accident. One of its main conclusions was Conclusion 17, included in the summary report, which says: ''Nuclear power plants should be designed, constructed and operated with the objectives of preventing accidents and, should an accident occur, mitigating its effects and avoiding off-site contamination. The Contracting Parties also noted that regulatory authorities should ensure that these objectives are applied in order to identify and implement appropriate safety improvements at existing plants''. The wording of the sentences of Conclusion 17 (the first dedicated to newly built reactors, the second to existing plants) can be improved and clarified. But obviously the issue of the off-site consequences of an accident is fundamental. So the in-depth question arises: what can and should be done to achieve these safety objectives? And in particular, how can the definition and then the implementation of the Defence in Depth concept be improved? From my point of view, this is clearly the main issue of this Conference. (author)

  4. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    Science.gov (United States)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free-floating experimental platform developed for the acquisition of long-duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh-generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver and a combined rotation-translation maneuver. The effectiveness of each maneuver at generating slosh is determined by the deviation of the experiment's trajectory as compared to a dry-mass trajectory. To fully capture the effect of liquid redistribution on the experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  5. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    Science.gov (United States)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow, 2000 and McNeill, 1992, this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students to construct the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that in the laboratory setting language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a brief short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  6. Radar Plant and Measurement Technique for Determination of the Orientation and the Depth of Buried Objects

    DEFF Research Database (Denmark)

    1999-01-01

    A plant for generation of information indicative of the depth and the orientation of an object positioned below the surface of the ground is adapted to use electromagnetic radiation emitted from and received by an antenna system associated with the plant. The plant has a transmitter and a receiver for generation of the electromagnetic radiation in cooperation with the antenna system mentioned and for reception of the electromagnetic radiation reflected by the object in cooperation with the antenna system, respectively. The antenna system includes a plurality of individual antenna elements such as dipole ... the antenna system and thus polarizing the electromagnetic field around or in relation to the geometric center of the antenna system.

  7. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  8. Tangible 3D modeling of coherent and themed structures

    DEFF Research Database (Denmark)

    Walther, Jeppe Ullè; Bærentzen, J. Andreas; Aanæs, Henrik

    2016-01-01

    We present CubeBuilder, a system for interactive, tangible 3D shape modeling. CubeBuilder allows the user to create a digital 3D model by placing physical, non-interlocking cubic blocks. These blocks may be placed in a completely arbitrary fashion and combined with other objects. In effect, this turns the task of 3D modeling into a playful activity that hardly requires any learning on the part of the user. The blocks are registered using a depth camera and entered into the cube graph, where each block is a node and adjacent blocks are connected by edges. From the cube graph, we transform...
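
    The cube graph mentioned above can be sketched as a simple adjacency structure over grid-snapped block positions; the function name, cube size, and grid-snapping step below are assumptions for illustration.

        # Build a cube graph: blocks registered by the depth camera are snapped to
        # an integer grid, each occupied cell becomes a node, and face-adjacent
        # cells are connected by edges.
        def build_cube_graph(block_positions, cube_size):
            """block_positions: iterable of (x, y, z) block centres in metres."""
            cells = {tuple(round(c / cube_size) for c in p) for p in block_positions}
            graph = {cell: set() for cell in cells}
            neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            for cell in cells:
                for d in neighbours:
                    n = (cell[0] + d[0], cell[1] + d[1], cell[2] + d[2])
                    if n in cells:
                        graph[cell].add(n)
            return graph

        # Example: three 4 cm blocks in a row form a path of three connected nodes.
        g = build_cube_graph([(0.0, 0.0, 0.0), (0.04, 0.0, 0.0), (0.08, 0.0, 0.0)], 0.04)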

  9. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  10. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  11. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    Science.gov (United States)

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D printer and used in chemistry education…

  12. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  13. Automatic detection of artifacts in converted S3D video

    Science.gov (United States)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
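
    An edge-sharpness comparison between views of the kind described above can be sketched as follows; the Sobel-based sharpness measure, the disparity-based warping, and the thresholds are assumptions rather than the authors' algorithm.

        # Flag edge pixels whose sharpness differs strongly between the two views.
        import cv2
        import numpy as np

        def edge_sharpness(img):
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
            return cv2.magnitude(gx, gy)

        def sharpness_mismatch(left_gray, right_gray, disparity, edge_thresh=60.0):
            """Return a boolean map of left-view edge pixels with mismatched sharpness."""
            sharp_l = edge_sharpness(left_gray)
            sharp_r = edge_sharpness(right_gray)
            h, w = left_gray.shape
            xs = np.arange(w, dtype=np.float32)[None, :].repeat(h, axis=0)
            ys = np.arange(h, dtype=np.float32)[:, None].repeat(w, axis=1)
            # Warp the right view's sharpness map into left-view coordinates.
            sharp_r_warped = cv2.remap(sharp_r, xs - disparity.astype(np.float32), ys,
                                       interpolation=cv2.INTER_LINEAR)
            edges = sharp_l > edge_thresh
            ratio = (sharp_l + 1e-3) / (sharp_r_warped + 1e-3)
            return edges & ((ratio > 2.0) | (ratio < 0.5))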

  14. Photopolymers in 3D printing applications

    OpenAIRE

    Pandey, Ramji

    2014-01-01

    3D printing is an emerging technology with applications in several areas. The flexibility of the 3D printing system to use a variety of materials and create any object makes it an attractive technology. Photopolymers are one of the materials used in 3D printing, with the potential to make products with better properties. Due to the numerous applications of photopolymers and 3D printing technologies, this thesis is written to provide information about the various 3D printing technologies with particul...

  15. Natural fibre composites for 3D Printing

    OpenAIRE

    Pandey, Kapil

    2015-01-01

    3D printing has been a common option for prototyping. Not all materials are suitable for 3D printing. Various studies have been done, and many are still ongoing, regarding the suitability of materials for 3D printing. This thesis work explores the possibility of 3D printing certain polymer composite materials. The main objective of this thesis work was to study the possibility of 3D printing a polymer composite material composed of natural fibre composite and various different ...

  16. Solid works 3D

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Yeong

    2004-02-15

    This book explains modeling of solid works 3D and application of 3D CAD/CAM. The contents of this book are outline of modeling such as CAD and 2D and 3D, solid works composition, method of sketch, writing measurement fixing, selecting projection, choosing condition of restriction, practice of sketch, making parts, reforming parts, modeling 3D, revising 3D modeling, using pattern function, modeling necessaries, assembling, floor plan, 3D modeling method, practice floor plans for industrial engineer data aided manufacturing, processing of CAD/CAM interface.

  17. Solid works 3D

    International Nuclear Information System (INIS)

    This book explains modeling of solid works 3D and application of 3D CAD/CAM. The contents of this book are outline of modeling such as CAD and 2D and 3D, solid works composition, method of sketch, writing measurement fixing, selecting projection, choosing condition of restriction, practice of sketch, making parts, reforming parts, modeling 3D, revising 3D modeling, using pattern function, modeling necessaries, assembling, floor plan, 3D modeling method, practice floor plans for industrial engineer data aided manufacturing, processing of CAD/CAM interface.

  18. 3D modelling for multipurpose cadastre

    NARCIS (Netherlands)

    Abduhl Rahman, A.; Van Oosterom, P.J.M.; Hua, T.C.; Sharkawi, K.H.; Duncan, E.E.; Azri, N.; Hassan, M.I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D

  19. Color 3D Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents a principle and a method of color 3D laser scanning measurement. Based on the fundamental monochrome 3D measurement study, color information capture, color texture mapping, coordinate computation and other techniques are performed to achieve color 3D measurement. The system is designed and composed of a line laser light emitter, one color CCD camera, a motor-driven rotary filter, a circuit card and a computer. There are two steps in capturing the object's images in the measurement process: Firs...

  20. Calibrating a depth camera but ignoring it for SLAM

    OpenAIRE

    Castro, Daniel Herrera

    2014-01-01

    Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Thus, accurate depth camera calibration is a very relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external ...

  1. Labeling 3D scenes for Personal Assistant Robots

    OpenAIRE

    Koppula, Hema Swetha; Anand, Abhishek; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a...
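
    Building a point cloud from a single RGB-D frame, as mentioned above, is commonly done by back-projecting each depth pixel through the pinhole intrinsics; the intrinsic values below are placeholders for a typical RGB-D camera.

        # Back-project an RGB-D frame into a coloured 3D point cloud.
        import numpy as np

        FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-like intrinsics

        def depth_to_point_cloud(depth_m, rgb):
            """depth_m: (H, W) depth in metres; rgb: (H, W, 3) colour image."""
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth_m
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
            colours = rgb.reshape(-1, 3)
            valid = points[:, 2] > 0                   # drop pixels with no depth reading
            return points[valid], colours[valid]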

  2. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... beamforming. This is achieved partly because synthetic aperture imaging removes the limitation of a fixed transmit focal depth and instead enables dynamic transmit focusing. Lately, the major ultrasound companies have produced ultrasound scanners using 2-D transducer arrays with enough transducer elements...

  3. 3d-3d correspondence revisited

    Science.gov (United States)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  4. IZDELAVA TISKALNIKA 3D

    OpenAIRE

    Brdnik, Lovro

    2015-01-01

    This diploma thesis analyses the current state of 3D printers on the market. The development and operating principles of 3D printers are presented, as are the types of 3D printers with their advantages and disadvantages. The structure and operation of stepper motors are described in more detail, and measurements of the stepper motors are carried out. The software used to operate 3D printers and the components needed for the build are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ...

  5. 3D annotation and manipulation of medical anatomical structures

    Science.gov (United States)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar setting of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or it computes the depth information out of a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
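
    The pose-estimation step described above (four non-coplanar LEDs with known positions tracked by the Wii camera) can be sketched with a standard PnP solver; the LED layout, camera matrix, and pixel detections below are placeholders, not the paper's values.

        # Estimate the controller pose from four known 3D LED positions and their
        # 2D image detections using OpenCV's PnP solver.
        import cv2
        import numpy as np

        # Known non-coplanar 3D positions of the four IR LEDs (metres, device frame).
        led_points_3d = np.array([[0.00, 0.00, 0.00],
                                  [0.10, 0.00, 0.00],
                                  [0.00, 0.08, 0.00],
                                  [0.05, 0.04, 0.03]], dtype=np.float64)

        # Pixel positions of the LEDs as seen by the tracking IR camera.
        led_points_2d = np.array([[512.0, 384.0],
                                  [620.0, 380.0],
                                  [515.0, 300.0],
                                  [565.0, 345.0]], dtype=np.float64)

        camera_matrix = np.array([[1380.0, 0.0, 512.0],
                                  [0.0, 1380.0, 384.0],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)                       # assume negligible distortion

        ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d,
                                      camera_matrix, dist_coeffs)
        rotation_matrix, _ = cv2.Rodrigues(rvec)        # device orientation; tvec is its position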

  6. Influence of object location in cone beam computed tomography (NewTom 5G and 3D Accuitomo 170) on gray value measurements at an implant site

    NARCIS (Netherlands)

    A. Parsa; N. Ibrahim; B. Hassan; P. van der Stelt; D. Wismeijer

    2014-01-01

    Objectives The aim of this study was to determine the gray value variation at an implant site with different object location within the selected field of view (FOV) in two cone beam computed tomography (CBCT) scanners. Methods A 1-cm-thick section from the edentulous region of a dry human mandible w

  7. Remote 3D Medical Consultation

    Science.gov (United States)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  8. Single-shot 3D motion picture camera with a dense point cloud

    CERN Document Server

    Willomitzer, Florian

    2016-01-01

    We introduce a method and a 3D camera for single-shot 3D shape measurement with unprecedented features: the 3D camera does not rely on pattern codification and acquires object surfaces at the theoretical limit of information efficiency: up to 30% of the available camera pixels display independent (not interpolated) 3D points. The 3D camera is based on triangulation with two properly positioned cameras and a projected multi-line pattern, in combination with algorithms that solve the ambiguity problem. The projected static line pattern enables 3D acquisition of fast processes and the capture of 3D motion pictures. The depth resolution is at its physical limit, defined by electronic noise and speckle noise. The requisite low-cost technology is simple.

  9. 3D Printing and Its Urologic Applications.

    Science.gov (United States)

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  10. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    Science.gov (United States)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convolved with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
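
    The PSF-matching step mentioned above can be sketched as a Gaussian smoothing of a binary anatomical mask; the FWHM, voxel size, and function name below are assumed values for illustration.

        # Convolve an extracted anatomical object with a Gaussian approximating the
        # point spread function of the functional modality.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_mask_to_functional_resolution(mask, fwhm_mm=6.0, voxel_mm=2.0):
            """mask: 3D binary array of an extracted anatomical object."""
            sigma_vox = (fwhm_mm / 2.355) / voxel_mm    # FWHM -> sigma, in voxels
            return gaussian_filter(mask.astype(np.float32), sigma=sigma_vox)

        # The smoothed mask can then weight functional voxels when quantifying
        # uptake within the region, compensating for the resolution difference.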

  11. 3D laptop for defense applications

    Science.gov (United States)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  12. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    Science.gov (United States)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object's departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours after the separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
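
    The per-frame computation described above can be sketched as a two-camera triangulation followed by finite differencing for velocity; the projection matrices, pixel tracks, and frame spacing below are placeholders, not mission data.

        # Triangulate the jettisoned object in each synchronised frame pair and
        # estimate its departure velocity by finite differences.
        import cv2
        import numpy as np

        def track_to_trajectory(P1, P2, pts_cam1, pts_cam2, frame_dt):
            """P1, P2: 3x4 camera projection matrices in the ISS reference frame;
            pts_cam1, pts_cam2: (N, 2) pixel tracks of the object in each camera."""
            pts1 = np.asarray(pts_cam1, dtype=np.float64).T      # 2xN layout for OpenCV
            pts2 = np.asarray(pts_cam2, dtype=np.float64).T
            hom = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous points
            xyz = (hom[:3] / hom[3]).T                           # N x 3 positions (metres)
            velocity = np.gradient(xyz, frame_dt, axis=0)        # metres per second
            return xyz, velocity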

  13. Crowded Field 3D Spectroscopy

    CERN Document Server

    Becker, T; Roth, M M; Becker, Thomas; Fabrika, Sergei; Roth, Martin M.

    2003-01-01

    The quantitative spectroscopy of stellar objects in complex environments is mainly limited by the ability of separating the object from the background. Standard slit spectroscopy, restricting the field of view to one dimension, is obviously not the proper technique in general. The emerging Integral Field (3D) technique with spatially resolved spectra of a two-dimensional field of view provides a great potential for applying advanced subtraction methods. In this paper an image reconstruction algorithm to separate point sources and a smooth background is applied to 3D data. Several performance tests demonstrate the photometric quality of the method. The algorithm is applied to real 3D observations of a sample Planetary Nebula in M31, whose spectrum is contaminated by the bright and complex galaxy background. The ability of separating sources is also studied in a crowded stellar field in M33.

  14. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    Science.gov (United States)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    Science.gov (United States)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  16. 3D virtuel udstilling

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the courtyard of the School of Architecture with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp.

  17. MPML3D: Scripting Agents for the 3D Internet.

    Science.gov (United States)

    Prendinger, Helmut; Ullrich, Sebastian; Nakasone, Arturo; Ishizuka, Mitsuru

    2011-05-01

    The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multiuser online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.

  18. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    Science.gov (United States)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  19. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    Science.gov (United States)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  20. Objects' 3D Modeling in Virtual Cockpit System

    Institute of Scientific and Technical Information of China (English)

    翟正军; 秦晓红; 李宗明

    2001-01-01

    According to the peculiarities of virtual models in the virtual cockpit system, this paper describes methods for the geometric modeling of regular and irregular objects and for model reduction. The modeling algorithms for regular and irregular models are studied in depth, and the generation and simplification of realistic 3D models are implemented.

  1. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    Science.gov (United States)

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Abstract Background/Aims: The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  2. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it's also aimed at the intermediate Blender users who simply want to go some steps further. It's taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, as well as knowledge of basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  3. FastScript3D - A Companion to Java 3D

    Science.gov (United States)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
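
    The command convention described above (the first word selects the command, the rest of the string supplies its arguments) can be illustrated with a minimal dispatcher. The sketch below is a generic Python illustration; the command names and handler functions are hypothetical and are not part of FastScript3D's actual command set.

        # Minimal sketch of a one-line text-string command dispatcher in the
        # style described above. Handlers and command names are hypothetical.

        def make_sphere(name, radius):
            print(f"created sphere '{name}' with radius {float(radius)}")

        def rotate(name, axis, degrees):
            print(f"rotated '{name}' {float(degrees)} deg about {axis}")

        HANDLERS = {"sphere": make_sphere, "rotate": rotate}

        def run_command(line):
            """First word selects the handler; the rest are its arguments."""
            tokens = line.split()
            if not tokens:
                return
            command, args = tokens[0], tokens[1:]
            HANDLERS[command](*args)

        for cmd in ["sphere ball 2.5", "rotate ball y 45"]:
            run_command(cmd)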

  4. Markerless 3D Face Tracking

    DEFF Research Database (Denmark)

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich;

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently combining local regressors using nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  5. Dimensional accuracy of 3D printed vertebra

    Science.gov (United States)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  6. User-centered 3D geovisualisation

    DEFF Research Database (Denmark)

    Nielsen, Anette Hougaard

    2004-01-01

    3D Geovisualisation is a multidisciplinary science mainly utilizing geographically related data, developing software systems for 3D visualisation and producing relevant models. In this paper the connection between geoinformation stored as 3D objects and the end user is of special interest. In a broader perspective, the overall aim is to develop a language in 3D Geovisualisation gained through usability projects and the development of a theoretical background. A conceptual level of user-centered 3D Geovisualisation is introduced by applying a categorisation originating from Virtual Reality.

  7. Three-dimensional focusing inversion of gravity gradient tensor data based on depth weighting

    Institute of Scientific and Technical Information of China (English)

    杨娇娇; 刘展; 陈晓红; 徐凯军

    2015-01-01

    A focusing inversion method for gravity gradient tensor data based on depth weighting is proposed to avoid the "upward drift" phenomenon in inversion results. Within the classic Tikhonov regularization framework, a minimum support functional is introduced to constrain the inversion model and avoid instability of the inverse-problem solution, and an exponential depth weighting function is added to the model objective function to overcome the skin effect, i.e. the accumulation of density at shallow depths. Single components of the gravity gradient tensor and their joint components were inverted for theoretical models using the regularized focusing inversion, which demonstrates the validity of the depth-weighted focusing inversion method. The proposed method was then applied to practical data from the Sebei 1 gas field, and the inversion results reflect the position of the gas field well.
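
    The depth weighting idea referred to above can be illustrated with a short sketch. A power-law weight of the form w(z) = (z + z0)^(-beta/2) is a common way of counteracting the decay of gravity-gradient sensitivity with depth; the exponent beta and offset z0 below are illustrative assumptions, not the values used in the paper.

        import numpy as np

        # Sketch of a power-law depth weighting applied to the model term of a
        # regularized objective. Reducing the penalty on deep cells counteracts
        # the tendency of the inversion to pile density up at shallow depth
        # (the skin effect). beta and z0 are illustrative assumptions.

        def depth_weights(z, z0=1.0, beta=3.0):
            """w(z) = (z + z0)**(-beta/2), one weight per model-cell depth."""
            return (np.asarray(z, dtype=float) + z0) ** (-beta / 2.0)

        cell_depths = np.array([5.0, 15.0, 30.0, 60.0, 120.0])   # metres
        densities = np.array([0.1, 0.1, 0.1, 0.1, 0.1])          # model parameters

        # Weighted model norm entering the objective in place of ||m||^2.
        w = depth_weights(cell_depths)
        weighted_model_norm = float(np.sum((w * densities) ** 2))
        print(w.round(4), weighted_model_norm)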

  8. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  9. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  10. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  11. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  12. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... to wonder what, if any, effect the technology has on your eyes. Is 3-D technology healthy ... 3-D, which may indicate that the viewer has a problem with focusing or depth perception. Also, ...

  13. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  14. From 3D view to 3D print

    Science.gov (United States)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has been getting more and more popular and used in many fields, going from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which allows a solid object to be obtained from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because 3D printing is obtained by superposing one layer on the others, it doesn't need any particular workflow and it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist based on the technology and material used for layer deposition. A common material used as the "toner" is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  15. Optical tissue clearing improves usability of optical coherence tomography (OCT) for high-throughput analysis of the internal structure and 3D morphology of small biological objects such as vertebrate embryos

    DEFF Research Database (Denmark)

    Thrane, Lars; Jørgensen, Thomas Martini; Männer, Jörg

    2014-01-01

    sections through small biological objects at high resolutions. However, due to light scattering within biological tissues, the quality of OCT images drops significantly with increasing penetration depth of the light beam. We show that optical clearing of fixed embryonic organs with methyl benzoate can...

  16. 3D modelling for multipurpose cadastre

    OpenAIRE

    Abduhl Rahman, A.; P. J. M. Van Oosterom; T. C. Hua; Sharkawi, K.H.; E. E. Duncan; Azri, N.; Hassan, M. I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D cadastre since more and more related agencies attempt to develop or embed 3D components into the MPC. We also intend to describe the initiative by Malaysian national mapping and cadastral agency (...

  17. 3D-PRINTING OF BUILD OBJECTS

    OpenAIRE

    M. V. Savytskyi; SHATOV S. V.; Ozhyshchenko, O. A.

    2016-01-01

    Raising of problem. Today, in all spheres of our life we can constate the permanent search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand more effective in terms of conservation of exhaustible resources of our planet, have minimal impact on the environment and on the other hand to ensure a higher quality of the final product. Construction is not exception. One of the new promising technology ...

  18. Radiochromic 3D Detectors

    Science.gov (United States)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  19. 3D Hilbert Space Filling Curves in 3D City Modeling for Faster Spatial Queries

    DEFF Research Database (Denmark)

    Ujang, Uznir; Antón Castro, Francesc/François; Azri, Suhaibah;

    2014-01-01

    objects. In this research, the authors propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, they extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested for single object, nearest neighbor and range search queries using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a sub-interval of the ([0,1]) interval to the corresponding...
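
    As a rough illustration of how a space-filling-curve key turns a 3D cell into a single sortable integer, the sketch below uses a 3D Morton (Z-order) code, a simpler stand-in for the Hilbert mapping discussed in the record; Hilbert curves preserve spatial locality better, but the indexing workflow is the same.

        def morton3d(x, y, z, bits=10):
            """Interleave the low `bits` bits of non-negative integer grid
            coordinates into a single Z-order (Morton) key."""
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (3 * i)
                key |= ((y >> i) & 1) << (3 * i + 1)
                key |= ((z >> i) & 1) << (3 * i + 2)
            return key

        # Sorting building-block cells by their key clusters spatial neighbours,
        # which is what speeds up adjacency and nearest-neighbour queries.
        cells = [(10, 2, 7), (3, 1, 0), (4, 1, 1), (3, 1, 1)]
        print(sorted(cells, key=lambda c: morton3d(*c)))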

  20. Herramientas SIG 3D

    Directory of Open Access Journals (Sweden)

    Francisco R. Feito Higueruela

    2010-04-01

    Full Text Available Applications of Geographical Information Systems in several archaeology fields have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for this system, the GIS Tetrahedron, in which we define the fundamental elements of GIS, in order to provide a better understanding of their capabilities. At the same time the basic 3D characteristics of some commercial and open-source software are described, as well as the application to some examples of archaeological research

  1. TOWARDS: 3D INTERNET

    OpenAIRE

    Ms. Swapnali R. Ghadge

    2013-01-01

    In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot o...

  2. Bootstrapping 3D fermions

    Science.gov (United States)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  3. Interaktiv 3D design

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the goal of creating a multitude of plan possibilities and a multitude of facade and room configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to combine and test the endless range of building types that the system was conceived and developed for.

  4. Simultaneous Estimation of Material Properties and Pose for Deformable Objects from Depth and Color Images

    DEFF Research Database (Denmark)

    Fugl, Andreas Rune; Jordt, Andreas; Petersen, Henrik Gordon;

    2012-01-01

    In this paper we consider the problem of estimating 6D pose and material properties of a deformable object grasped by a robot gripper. To estimate the parameters we minimize an error function incorporating visual and physical correctness. Through simulated and real-world experiments we demonstrate that we are able to find realistic 6D poses and elasticity parameters like Young’s modulus. This makes it possible to perform subsequent manipulation tasks, where accurate modelling of the elastic behaviour is important.

  5. Spatial data modelling for 3D GIS

    CERN Document Server

    Abdul-Rahman, Alias

    2007-01-01

    This book covers fundamental aspects of spatial data modelling, specifically the aspect of three-dimensional (3D) modelling and structuring. Realisation of a "true" 3D GIS spatial system needs a lot of effort, and the process is taking place in various research centres and universities in some countries. The development of spatial data modelling for 3D objects is the focus of this book.

  6. Tangible 3D Modelling

    DEFF Research Database (Denmark)

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through facil...

  7. 3D Harmonic Echocardiography:

    NARCIS (Netherlands)

    M.M. Voormolen

    2007-01-01

    Three dimensional (3D) echocardiography has recently developed from an experimental technique in the 1990s towards an imaging modality for daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique

  8. Virtual Realization using 3D Password

    Directory of Open Access Journals (Sweden)

    A.B.Gadicha

    2012-03-01

    Full Text Available Current authentication systems suffer from many weaknesses. Textual passwords are commonly used; however, users do not follow their requirements. Users tend to choose meaningful words from dictionaries, which make textual passwords easy to break and vulnerable to dictionary or brute force attacks. Many available graphical passwords have a password space that is less than or equal to the textual password space. Smart cards or tokens can be stolen. Many biometric authentications have been proposed; however, users tend to resist using biometrics because of their intrusiveness and the effect on their privacy. Moreover, biometrics cannot be revoked. In this paper, we present and evaluate our contribution, i.e., the 3D password. The 3D password is a multifactor authentication scheme. To be authenticated, we present a 3D virtual environment where the user navigates and interacts with various objects. The sequence of actions and interactions toward the objects inside the 3D environment constructs the user’s 3D password. The 3D password can combine most existing authentication schemes such as textual passwords, graphical passwords, and various types of biometrics into a 3D virtual environment. The design of the 3D virtual environment and the type of objects selected determine the 3D password key space.
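
    The core idea, deriving a credential from an ordered sequence of interactions inside the virtual environment, can be sketched as follows. The tuple format (object, action, grid cell) and the use of SHA-256 are illustrative assumptions, not details taken from the paper.

        import hashlib

        # Sketch of deriving a credential from an ordered sequence of
        # interactions inside a 3D virtual environment, in the spirit of the
        # 3D-password idea above. The tuple encoding is a hypothetical choice.

        def credential_from_actions(actions, salt=b"scene-01"):
            h = hashlib.sha256(salt)
            for obj, action, cell in actions:          # order matters
                h.update(f"{obj}|{action}|{cell}".encode())
            return h.hexdigest()

        session = [("door", "open", (4, 0, 7)),
                   ("keyboard", "type:AB12", (2, 1, 3)),
                   ("painting", "touch", (9, 5, 1))]
        print(credential_from_actions(session))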

  9. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors (binocular combination and binocular frequency integration) are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.
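
    To make the contrast concrete, the sketch below shows the kind of naive per-view PSNR average that tends to correlate poorly with subjective scores, next to a crude "combine first, then score" variant. The equal 0.5/0.5 fusion weights are an assumption; the paper's metrics model binocular combination and binocular frequency integration rather than a fixed average.

        import numpy as np

        def psnr(ref, test, peak=255.0):
            mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        # Naive baseline: score each view independently, then average.
        def stereo_psnr_per_view(ref_l, ref_r, test_l, test_r):
            return 0.5 * (psnr(ref_l, test_l) + psnr(ref_r, test_r))

        # Crude "binocular combination": fuse the views first, then score.
        def stereo_psnr_combined(ref_l, ref_r, test_l, test_r):
            fuse = lambda a, b: 0.5 * a.astype(float) + 0.5 * b.astype(float)
            return psnr(fuse(ref_l, ref_r), fuse(test_l, test_r))

        # Synthetic asymmetric distortion (right view degraded more).
        rng = np.random.default_rng(0)
        ref_l = rng.integers(0, 256, (64, 64))
        ref_r = rng.integers(0, 256, (64, 64))
        test_l = np.clip(ref_l + rng.normal(0, 5, (64, 64)), 0, 255)
        test_r = np.clip(ref_r + rng.normal(0, 12, (64, 64)), 0, 255)
        print(stereo_psnr_per_view(ref_l, ref_r, test_l, test_r),
              stereo_psnr_combined(ref_l, ref_r, test_l, test_r))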

  10. Using an electromagnetic induction sensor to estimate mass and depth of metal objects in a former battlefield

    Science.gov (United States)

    Smetryns, Marthe; Saey, Timothy; Note, Nicolas; Van Meirvenne, Marc

    2016-04-01

    Electromagnetic induction (EMI) sensors are used to perform a non-invasive geophysical survey of land, revealing electrical and magnetic properties of the soil. The technique is used for a variety of agricultural and archaeological purposes to map the soil and locate buried archaeological objects. Besides this, EMI sensors have proven effective to detect metal objects, like the metal remains of the First World War (WW1) in the Western part of Belgium. Most EMI sensors employed for metal detection rely on a single or multiple signal(s) coming from one receiver coil. In this research a multiple coil EMI sensor was used to survey several fields in the former war zone of WW1. This sensor, the DUALEM-21S sensor, consists of one transmitter and four receiver coils leading to four simultaneous measurements of the electric and magnetic properties of the soil. After mapping the fields, the possible metal objects were delineated based on a combination of all electrical measurements and safely excavated. By combining the signals from the different coil configurations, depth intervals for the buried metal objects were assigned to all selected anomalies. This way the metal objects could be located either within the plough layer (0 - 0.45 m), just underneath the plough layer (0.45 - 0.70 m) or deeper than 0.70 m under the surface. Finally, mass models were established within every depth interval to be able to predict the metal mass of every selected anomaly . This methodology was successfully validated in another field where several metal objects were buried. Finally, it was applied on several arable fields at a different location within the former WW1 front zone. Fields located in the centre of the former war zone contained more than 400 metal pieces per hectare, most of them just underneath the plough layer. Fields on the edge of the former war zone contained substantially less metal items per hectare. To conclude, the developed methodology can be employed to differentiate

  11. 3-D measurement of dynamic and isolated objects based on color-encoded sinusoidal fringe

    Institute of Scientific and Technical Information of China (English)

    王慧; 白乐源; 麻珂; 张启灿

    2014-01-01

    When the 3-D shapes of dynamic objects, especially those with isolated areas and discontinuous distributions, are measured with traditional fringe projection and phase analysis methods, it is difficult to obtain a reliable unwrapped phase. A technique based on color-encoded sinusoidal fringe projection is proposed to solve this problem. The projected sinusoidal fringes are marked with a two-level color code. After capturing the deformed fringe pattern, the fringe order is determined from the color sequence according to the coding characteristics, and the wrapped phase is unwrapped. Finally, the 3-D shape of the dynamic object with isolated areas is reconstructed. The results show that the decoding method is stable and reliable, and the 3-D shape of spatially isolated dynamic objects can be accurately reconstructed from only one shot.
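
    The unwrapping step that the color code enables reduces to adding 2π times the decoded fringe order to the wrapped phase. The sketch below assumes the per-pixel fringe order has already been decoded from the two-level color sequence; the decoding itself and the phase-to-height calibration are omitted, and the numbers are synthetic.

        import numpy as np

        # Fringe-order-guided unwrapping: phi = phi_wrapped + 2*pi*k, where k is
        # the integer fringe order recovered from the colour code.

        def unwrap_with_order(wrapped_phase, fringe_order):
            return np.asarray(wrapped_phase) + 2.0 * np.pi * np.asarray(fringe_order)

        wrapped = np.array([2.90, -2.98, -2.78, -2.38])   # values in (-pi, pi]
        order = np.array([0, 1, 1, 1])                    # decoded fringe orders
        print(unwrap_with_order(wrapped, order))          # ~[2.90, 3.30, 3.50, 3.90]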

  12. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical man

  13. Can 3D Printing change your business?

    OpenAIRE

    Unver, Ertu

    2013-01-01

    This presentation is given to businesses / companies with an interest in 3D Printing and Additive Manufacturing in West Yorkshire, UK, organised by the Calderdale and Kirklees Manufacturing Alliance. http://www.ckma.co.uk/ by Dr Ertu Unver Senior Lecturer / Product Design / MA 3D Digital Design / University of Huddersfield Location : 3M BIC, Date : 11th April, Time : 5.30 – 8pm Additive manufacturing or 3D printing is a process of making a three-dimensional (3D) object from...

  14. 3D Printed Robotic Hand

    Science.gov (United States)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drive up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings where installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  15. Forensic 3D Scene Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  16. Labeling 3D scenes for Personal Assistant Robots

    CERN Document Server

    Koppula, Hema Swetha; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. The model admits efficient approximate inference, and we train it using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views, having 2495 segments labeled with 27 object classes), we get a performance of 84.06% in labeling 17 object classes for offices, and 73.38% in labeling 17 object classe...

  17. FROM 3D MODEL DATA TO SEMANTICS

    Directory of Open Access Journals (Sweden)

    My Abdellah Kassimi

    2012-01-01

    Full Text Available The semantic-based 3D model retrieval systems have become necessary since the increase of 3D model databases. In this paper, we propose a new method for the mapping problem between 3D model data and semantic data involved in semantic-based retrieval for 3D models given by polygonal meshes. First, we focused on extracting invariant descriptors from the 3D models and analyzing them for efficient semantic annotation and to improve the retrieval accuracy. Selected shape descriptors provide a set of terms commonly used to describe a set of objects visually using linguistic terms and are used as semantic concepts to label 3D models. Second, spatial relationships representing directional, topological and distance relationships are used to derive other high-level semantic features and to avoid the problem of automatic 3D model annotation. Based on the resulting semantic annotation and spatial concepts, an ontology for 3D model retrieval is constructed and other concepts can be inferred. This ontology is used to find similar 3D models for a given query model. We adopted the query-by-semantic-example approach, in which the annotation is performed mostly automatically. The proposed method is implemented in our 3D search engine (SB3DMR), tested using the Princeton Shape Benchmark Database.

  18. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... viewer has a problem with focusing or depth perception. Also, the techniques used to create the 3- ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  19. E3D R-Tree: An Index Structure for Indexing the Histories in Moving Object Database

    Institute of Scientific and Technical Information of China (English)

    张文杰; 李建中; 张炜

    2005-01-01

    Historical queries are an important aspect of moving object database management. To improve the efficiency of historical queries, an optimized index structure, the E3D R-Tree, is implemented on the basis of the 3D R-Tree. In the E3D R-Tree, empty (dead) space is introduced as a new insertion-cost parameter, reflecting the characteristics of moving-object data; in addition, a minimum-cost-first search algorithm is used in the insertion procedure to determine the globally optimal insertion path, and the correctness of the algorithm is proved. Experimental results show that the query efficiency of the E3D R-Tree is higher than that of the 3D R-Tree.
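
    The insertion cost hinted at in the abstract can be sketched as the usual R-tree volume-enlargement cost plus a penalty on the empty ("dead") space left inside the enlarged bounding box. The concrete formula and the 0.5 weight below are assumptions for illustration, not the E3D R-Tree's actual cost function.

        # Sketch of a dead-space-aware insertion cost for a 3D (x, y, t) R-tree
        # node. Boxes are ((x0, y0, z0), (x1, y1, z1)) corner pairs.

        def volume(box):
            (x0, y0, z0), (x1, y1, z1) = box
            return max(x1 - x0, 0) * max(y1 - y0, 0) * max(z1 - z0, 0)

        def enlarge(box, entry):
            (a0, a1), (b0, b1) = box, entry
            lo = tuple(min(a, b) for a, b in zip(a0, b0))
            hi = tuple(max(a, b) for a, b in zip(a1, b1))
            return (lo, hi)

        def insertion_cost(node_box, entry_box, children_volume, dead_weight=0.5):
            grown = enlarge(node_box, entry_box)
            enlargement = volume(grown) - volume(node_box)
            dead_space = volume(grown) - children_volume - volume(entry_box)
            return enlargement + dead_weight * max(dead_space, 0.0)

        node = ((0, 0, 0), (4, 4, 2))      # bounding box of an existing node
        entry = ((3, 3, 2), (5, 5, 3))     # new trajectory segment
        print(insertion_cost(node, entry, children_volume=20.0))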

  20. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    Science.gov (United States)

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of the monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into the real and the imaginary distributions, and then double amplitude-only spatial light modulators (A-SLMs) are employed to reconstruct the 3D images in real-time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of the conventional stereoscopic see-through displays. The optical experiments further demonstrated that the proposed system has continuous and wide depth cues, which keeps the observer free of the eye fatigue problem. The dynamic display ability was also tested in the experiments, and the results showed the possibility of true 3D interactive display. PMID:27464184
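
    The decomposition step described, splitting a complex object wavefront into real and imaginary distributions that two amplitude-only SLMs can display, can be sketched as below. The bias-and-scale normalization into a non-negative range is an illustrative assumption, not the paper's exact encoding.

        import numpy as np

        # Sketch of splitting a complex wavefront U into two non-negative maps
        # suitable for amplitude-only SLMs; values are synthetic.

        rng = np.random.default_rng(0)
        U = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

        def to_amplitude_map(component):
            """Shift and scale a signed real-valued field into [0, 1]."""
            shifted = component - component.min()
            return shifted / shifted.max()

        hologram_re = to_amplitude_map(U.real)   # drives the first A-SLM
        hologram_im = to_amplitude_map(U.imag)   # drives the second A-SLM
        print(hologram_re.min(), hologram_re.max())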

  1. Density-Based 3D Shape Descriptors

    Directory of Open Access Journals (Sweden)

    Schmitt Francis

    2007-01-01

    Full Text Available We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdfs) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.
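
    A minimal sketch of the general recipe, computing one scalar feature per triangle, weighting it by triangle area, and estimating its probability density with Gaussian kernels on a fixed grid, is given below. The choice of feature (radial distance of the triangle centroid from the object's centre) and the bandwidth are assumptions for illustration, not the paper's descriptors.

        import numpy as np

        def triangle_areas(vertices, faces):
            a, b, c = (vertices[faces[:, i]] for i in range(3))
            return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

        def density_descriptor(vertices, faces, bins=32, bandwidth=0.05):
            areas = triangle_areas(vertices, faces)
            centroids = vertices[faces].mean(axis=1)
            # Assumed feature: radial distance of each centroid from the centre.
            feature = np.linalg.norm(centroids - vertices.mean(axis=0), axis=1)
            feature = feature / feature.max()                 # scale-normalize
            grid = np.linspace(0.0, 1.0, bins)
            diff = (grid[:, None] - feature[None, :]) / bandwidth
            kernels = np.exp(-0.5 * diff ** 2)                # Gaussian kernels
            pdf = (kernels * areas).sum(axis=1)               # area-weighted KDE
            return pdf / (pdf.sum() + 1e-12)

        # Example: descriptor of a unit tetrahedron (4 vertices, 4 faces).
        V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
        F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
        print(density_descriptor(V, F).shape)                 # (32,)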

  2. Massive 3D Supergravity

    CERN Document Server

    Andringa, Roel; de Roo, Mees; Hohm, Olaf; Sezgin, Ergin; Townsend, Paul K

    2009-01-01

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered `massive 3D gravity'. Another is a `new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  3. Massive 3D supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Andringa, Roel; Bergshoeff, Eric A; De Roo, Mees; Hohm, Olaf [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Sezgin, Ergin [George and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: O.Hohm@rug.n, E-mail: sezgin@tamu.ed, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2010-01-21

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered 'massive 3D gravity'. Another is a 'new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  4. 3D Digital Modelling

    DEFF Research Database (Denmark)

    Hundebøl, Jesper

    ABSTRACT: Lack of productivity in construction is a well-known issue. Despite the fact that the causes are multiple, the introduction of information technology is a frequently observed response to almost any challenge. ICT in construction is a thoroughly researched matter, however, the current... important to appreciate the analysis. Before turning to the presentation of preliminary findings and a discussion of 3D digital modelling, it begins, however, with an outline of industry-specific ICT strategic issues. Paper type. Multi-site field study

  5. Influence of hand position on the near-effect in 3D attention

    OpenAIRE

    Pollux, Petra; Bourke, Patrick

    2008-01-01

    Voluntary reorienting of attention in real depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously this effect (initially referred to as the ‘near-effect’) was attributed to access of a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response-hand, ...

  6. An Upright Orientation Detection Algorithm for 3D Man-Made Objects Based on Shape Properties

    Institute of Scientific and Technical Information of China (English)

    姜玻; 曾鸣; 刘新国

    2013-01-01

    Inferring the upright orientation of a 3D model from its geometry alone is a challenging problem. In this paper we propose a fully automatic upright-orientation detection algorithm for man-made objects. The proposed method exploits orientation clues from shape properties such as facet orientation, symmetry, and the facet orientations of the 3D convex hull. We first use these clues to extract candidate orientations and form frames from orientation triplets whose members are pairwise orthogonal. Next, each frame is rotated to the canonical coordinate system, and a vote based on facet normals and areas selects the single frame that best aligns the model with the canonical axes. Finally, criteria based on static stability and visibility are used to choose the correct upright orientation from the six axis-aligned candidates. Experiments on a 3D model database show that the proposed method handles the vast majority of models well, including models that the state-of-the-art unsupervised methods cannot handle.
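
    The area-based voting in the alignment stage can be sketched as follows, under a simplification: after the model has been aligned to the canonical axes, each of the six axis directions is scored by the total area of faces whose normals point (nearly) opposite to it, i.e. faces that could rest on the ground for that choice of up, and the best-supported direction wins. The paper's additional static-stability and visibility criteria are not reproduced here.

        import numpy as np

        def face_normals_and_areas(vertices, faces):
            a, b, c = (vertices[faces[:, i]] for i in range(3))
            n = np.cross(b - a, c - a)
            areas = 0.5 * np.linalg.norm(n, axis=1)
            normals = n / (2.0 * areas[:, None] + 1e-12)
            return normals, areas

        def vote_up_axis(vertices, faces, cos_thresh=0.95):
            normals, areas = face_normals_and_areas(vertices, faces)
            candidates = np.vstack([np.eye(3), -np.eye(3)])      # +-x, +-y, +-z
            scores = [areas[normals @ (-up) > cos_thresh].sum() for up in candidates]
            return candidates[int(np.argmax(scores))]

        # Toy mesh: a large horizontal quad whose normal points down, plus a
        # smaller vertical quad facing -y; the horizontal one wins, so up = +z.
        V = np.array([[0., 0., 0.], [2., 0., 0.], [2., 2., 0.], [0., 2., 0.],
                      [0., 0., 0.], [1., 0., 0.], [1., 0., 1.], [0., 0., 1.]])
        F = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7]])
        print(vote_up_axis(V, F))                                # -> [0. 0. 1.]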

  7. Automatic balancing of 3D models

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas

    2014-01-01

    3D printing technologies allow for more diverse shapes than are possible with molds and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent... In these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet...

  8. TOWARDS: 3D INTERNET

    Directory of Open Access Journals (Sweden)

    Ms. Swapnali R. Ghadge

    2013-08-01

    Full Text Available In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot of discussions. Basically, one can look into this matter from a few different perspectives: visualization and representation of information, and creation and transportation of information, among others. All of them still constitute research challenges, as no products or services are yet available or foreseen for the near future. Nevertheless, one can try to envisage the directions that can be taken towards achieving this goal. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State.

  9. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and
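
    The handbook's opening topic, stereo vision, rests on the triangulation relation Z = f·B/d for focal length f, baseline B and disparity d. A minimal sketch with placeholder values:

        import numpy as np

        # Depth from disparity: Z = f * B / d, with f in pixels, B in metres and
        # d in pixels. The focal length, baseline and disparities are placeholders.

        def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
            d = np.asarray(disparity_px, dtype=float)
            return np.where(d > 0, focal_px * baseline_m / d, np.inf)

        print(depth_from_disparity([70, 35, 7]))   # -> 1.2 m, 2.4 m, 12 m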

  10. 3D Gravity Inversion using Tikhonov Regularization

    Directory of Open Access Journals (Sweden)

    Toushmalani Reza

    2015-08-01

    Full Text Available Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined the 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from the Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of the salt dome in this region.
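
    The core Tikhonov-regularized least-squares step of such an inversion can be sketched as below for a generic linear forward operator G. The sensitivity matrix, data and regularization parameter are synthetic placeholders, and the Euler-deconvolution depth constraint described in the record is not reproduced.

        import numpy as np

        # Tikhonov-regularized least squares: min ||G m - d||^2 + lam * ||m||^2,
        # solved via the normal equations. G maps density cells to gravity data.

        def tikhonov_solve(G, d, lam):
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

        rng = np.random.default_rng(1)
        G = rng.standard_normal((40, 25))          # 40 stations, 25 density cells
        m_true = np.zeros(25); m_true[12] = 1.0    # one anomalous cell
        d = G @ m_true + 0.01 * rng.standard_normal(40)
        m_est = tikhonov_solve(G, d, lam=0.1)
        print(np.argmax(np.abs(m_est)))            # index of the strongest cell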

  11. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
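
    The cumulants that SOFI builds on are easy to sketch at second order: the auto-cumulant of a pixel is the temporal variance of its intensity trace, and the cross-cumulant of two co-registered focal planes is their temporal covariance. The synthetic blinking data below ignores the PSF and higher orders.

        import numpy as np

        def sofi2_auto(stack):
            """stack: (T, H, W) image sequence -> (H, W) second-order cumulant."""
            return np.var(stack, axis=0)

        def sofi2_cross(stack_a, stack_b):
            """Pixel-wise temporal covariance of two co-registered planes."""
            da = stack_a - stack_a.mean(axis=0)
            db = stack_b - stack_b.mean(axis=0)
            return (da * db).mean(axis=0)

        rng = np.random.default_rng(2)
        blinking = rng.random((200, 8, 8)) < 0.3          # on/off emitter states
        plane1 = blinking * 1.0 + 0.05 * rng.standard_normal((200, 8, 8))
        plane2 = blinking * 0.8 + 0.05 * rng.standard_normal((200, 8, 8))
        print(sofi2_auto(plane1).shape, sofi2_cross(plane1, plane2).shape)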

  12. Medical 3D Printing for the Radiologist.

    Science.gov (United States)

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article.

  13. 3D Video Compression and Transmission

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced as well...

  14. Cubical Cohomology Ring of 3D Photographs

    CERN Document Server

    Gonzalez-Diaz, Rocio; Medrano, Belen; 10.1002/ima.20271

    2011-01-01

    Cohomology and cohomology ring of three-dimensional (3D) objects are topological invariants that characterize holes and their relations. Cohomology ring has been traditionally computed on simplicial complexes. Nevertheless, cubical complexes deal directly with the voxels in 3D images, no additional triangulation is necessary, facilitating efficient algorithms for the computation of topological invariants in the image context. In this paper, we present formulas to directly compute the cohomology ring of 3D cubical complexes without making use of any additional triangulation. Starting from a cubical complex $Q$ that represents a 3D binary-valued digital picture whose foreground has one connected component, we compute first the cohomological information on the boundary of the object, $\partial Q$, by an incremental technique; then, using a face reduction algorithm, we compute it on the whole object; finally, applying the mentioned formulas, the cohomology ring is computed from such information.

  15. Shaping 3-D boxes

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data, making them different from typical, existing box shaping techniques. The basis of the proposed techniques is a new algorithm for constructing a full box from just three of its corners. The evaluation of the new techniques compares their precision and completion times in a 9 degree-of-freedom (DoF) docking experiment against an existing technique, which requires the user to perform the rotation and scaling of the box explicitly. The precision of the users' box construction is evaluated by a novel error metric measuring the difference between two boxes. The results of the experiment strongly indicate...

  16. Innovations in 3D printing: a 3D overview from optics to organs.

    Science.gov (United States)

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  17. RAG-3D: a search tool for RNA 3D substructures.

    Science.gov (United States)

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-10-30

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D, a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool, designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
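
    A minimal illustration of the graph-search idea described above, assuming the networkx library; this is not the RAG-3D implementation, and the two toy graphs (catalog_graph, query_graph) are hypothetical stand-ins for catalogued and query connectivity graphs.

```python
# Minimal sketch (not the RAG-3D implementation): represent the connectivity
# between secondary-structure elements of two RNAs as graphs and test whether
# the query graph appears as a subgraph of a catalogued structure.
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical catalogued structure: 5 helices/loops with this connectivity.
catalog_graph = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 5)])

# Hypothetical query substructure: a simple three-element chain.
query_graph = nx.Graph([("a", "b"), ("b", "c")])

# GraphMatcher(G1, G2) tests whether a subgraph of G1 is isomorphic to G2.
matcher = isomorphism.GraphMatcher(catalog_graph, query_graph)
print("query occurs as substructure:", matcher.subgraph_is_isomorphic())

# Enumerate the concrete matches (mapping of catalog nodes to query nodes).
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)
```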

  18. Direct 3D Painting with a Metaball-Based Paintbrush

    Institute of Scientific and Technical Information of China (English)

    WAN Huagen; JIN Xiaogang; BAO Hujun

    2000-01-01

    This paper presents a direct 3D painting algorithm for polygonal models in 3D object-space with a metaball-based paintbrush in a virtual environment. The user is allowed to directly manipulate the parameters used to shade the surface of the 3D shape by applying the pigment to its surface with direct 3D manipulation through a 3D flying mouse.

  19. 3D Visualization Development of SIUE Campus

    Science.gov (United States)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of 3D campus for the Southern Illinois University Edwardsville is demonstrated.

  20. Technical illustration based on 3D CSG models

    Institute of Scientific and Technical Information of China (English)

    GENG Wei-dong; DING Lei; YU Hong-feng; PAN Yun-he

    2005-01-01

    This paper presents an automatic non-photorealistic rendering approach to generating technical illustration from 3D models. It first decomposes the 3D object into a set of CSG primitives, and then performs the hidden surface removal based on the prioritized list, in which the rendition order of CSG primitives is sorted out by depth. Then, each primitive is illustrated by the pre-defined empirical lighting model, and the system mimics the stroke-drawing by user-specified style. In order to artistically and flexibly modulate the illumination, the empirical lighting model is defined by three major components: parameters of multi-level lighting intensities, parametric spatial occupations for each lighting level, and an interpolation method to calculate the lighting units into the spatial occupation of CSG primitives, instead of "pixel-by-pixel" painting. This region-by-region shading facilitates the simulation of illustration styles.

  1. Thin slice three dimensional (3D) reconstruction versus CT 3D reconstruction of human breast cancer

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2013-01-01

    Full Text Available Background & objectives: With improvement in the early diagnosis of breast cancer, breast conserving therapy (BCT) is being increasingly used. Precise preoperative evaluation of the incision margin is, therefore, very important. Utilizing three dimensional (3D) images in a preoperative evaluation for breast conserving surgery has considerable significance, but the 3D CT scan reconstruction currently in common use has problems in accurately displaying breast cancer. Thin slice 3D reconstruction is also widely used now to delineate organs and tissues of breast cancers. This study was aimed to compare 3D CT with thin slice 3D reconstruction in breast cancer patients to find a better technique for accurate evaluation of breast cancer. Methods: A total of 16-slice spiral CT scans and 3D reconstructions were performed on 15 breast cancer patients. All patients had been treated with modified radical mastectomy; 2D and 3D images of breast and tumours were obtained. The specimens were fixed and sliced at 2 mm thickness to obtain serial thin slice images, and reconstructed using 3D DOCTOR software to gain 3D images. Results: Compared with 2D CT images, thin slice images showed more clearly the morphological characteristics of tumour, breast tissues and the margins of different tissues in each slice. After 3D reconstruction, the tumour shapes obtained by the two reconstruction methods were basically the same, but the thin slice 3D reconstruction showed the tumour margins more clearly. Interpretation & conclusions: Compared with 3D CT reconstruction, thin slice 3D reconstruction of breast tumour gave clearer images, which could provide guidance for the observation and application of CT 3D reconstructed images and contribute to the accurate evaluation of tumours using CT imaging technology.

  2. Martian terrain - 3D

    Science.gov (United States)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail. The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  3. 3D monitor

    OpenAIRE

    Szkandera, Jan

    2009-01-01

    This bachelor's thesis deals with the design and implementation of a system that allows a scene displayed on a flat screen to be perceived spatially. Spatial perception of the 2D image information is enabled partly by stereo projection and partly by changing the image depending on the observer's position; this work deals mainly with the second of these problems.

  4. Matching Feature Points in 3D World

    OpenAIRE

    Avdiu, Blerta

    2012-01-01

    This thesis work deals with the most actual topic in Computer Vision field which is scene understanding and this using matching of 3D feature point images. The objective is to make use of Saab’s latest breakthrough in extraction of 3D feature points, to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description and matching. The work continues with a brief description of the simu...

  5. Computer Modelling of 3D Geological Surface

    CERN Document Server

    Kodge, B G

    2011-01-01

    The geological surveying presently uses methods and tools for the computer modeling of 3D-structures of the geographical subsurface and geotechnical characterization as well as the application of geoinformation systems for management and analysis of spatial data, and their cartographic presentation. The objectives of this paper are to present a 3D geological surface model of Latur district in Maharashtra state of India. This study is undertaken through the several processes which are discussed in this paper to generate and visualize the automated 3D geological surface model of a projected area.

  6. Creation of 3D models (Tvorba 3D modelů)

    OpenAIRE

    Musálek, Martin

    2014-01-01

    The thesis addresses 3D reconstruction of an object using a structured-light (pattern projection) method. A projector illuminates the measured object with a defined pattern and a pair of cameras captures points on it. The turntable with the object rotates, so the object is captured from several angles over multiple measurements. The points are identified in the captured images, transformed into 3D using stereo vision, merged into a 3D model and displayed.

  7. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Morales, Jose A.

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what the human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
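
    A rough sketch of the kind of normal-aware closeness measure described above, assuming numpy and scipy are available; it is not the authors' Metro-based implementation, and the weighting scheme (w_normal) and the toy sphere samples are illustrative choices only.

```python
# Sketch of a normal-aware closeness measure between two sampled surfaces,
# in the spirit described above (not the authors' implementation).
# Each model is given as sampled points with unit normals; correspondences
# are taken as nearest neighbours in 3D.
import numpy as np
from scipy.spatial import cKDTree

def visual_closeness(points_a, normals_a, points_b, normals_b, w_normal=0.5):
    """Return (max, mean) of a weighted point/normal distance from A to B."""
    tree = cKDTree(points_b)
    dist, idx = tree.query(points_a)                  # nearest-point distances
    # Normal difference in [0, 2]: 1 - cos(angle between corresponding normals)
    normal_diff = 1.0 - np.einsum("ij,ij->i", normals_a, normals_b[idx])
    combined = (1.0 - w_normal) * dist + w_normal * normal_diff
    return combined.max(), combined.mean()

# Toy usage with random unit-sphere samples (illustration only).
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(visual_closeness(pts, pts, pts + 0.01, pts))
```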

  8. Study on basic problems in real-time 3D holographic display

    Science.gov (United States)

    Jia, Jia; Liu, Juan; Wang, Yongtian; Pan, Yijie; Li, Xin

    2013-05-01

    In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for human eye's observation and perception, and it is believed to be the most promising technology for future 3D display. However, there are several unsolved basic problems for realizing large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLM) always lead to zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computational time; the size and the viewing zone of the reconstructed 3D optical image are limited by the space bandwidth product of the SLM; noise from the coherent light source as well as from the system severely degrades the quality of the 3D image; and so on. Our work is focused on these basic problems, and some initial results are presented, including a technique derived theoretically and verified experimentally to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer generated holograms (CGH) for the display.

  9. Improving depth maps with limited user input

    Science.gov (United States)

    Vandewalle, Patrick; Klein Gunnewiek, René; Varekamp, Chris

    2010-02-01

    A vastly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear assisted viewing in a well defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we will elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.

  10. 3D game environments create professional 3D game worlds

    CERN Document Server

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  11. X3D: Extensible 3D Graphics Standard

    OpenAIRE

    Daly, Leonard; Brutzman, Don

    2007-01-01

    The article of record as published may be located at http://dx.doi.org/10.1109/MSP.2007.905889 Extensible 3D (X3D) is the open standard for Web-delivered three-dimensional (3D) graphics. It specifies a declarative geometry definition language, a run-time engine, and an application program interface (API) that provide an interactive, animated, real-time environment for 3D graphics. The X3D specification documents are freely available, the standard can be used without paying any royalties,...

  12. 3D printing: technology and processing

    OpenAIRE

    Kurinov, Ilya

    2016-01-01

    The objective of the research was to improve the process of 3D printing on the laboratory machine. In the study processes of designing, printing and post-printing treatment were improved. The study was commissioned by Mikko Ruotsalainen, head of the laboratory. The data was collected during the test work. All the basic information about 3D printing was taken from the Internet or library. As the results of the project higher model accuracy, solutions for post-printing treatment, printin...

  13. The Idaho Virtualization Laboratory 3D Pipeline

    Directory of Open Access Journals (Sweden)

    Nicholas A. Holmer

    2014-05-01

    Full Text Available Three dimensional (3D) virtualization and visualization is an important component of industry, art, museum curation and cultural heritage, yet the step by step process of 3D virtualization has been little discussed. Here we review the Idaho Virtualization Laboratory's (IVL) process of virtualizing a cultural heritage item (artifact) from start to finish. Each step is thoroughly explained and illustrated including how the object and its metadata are digitally preserved and ultimately distributed to the world.

  14. 3D printed diffractive terahertz lenses.

    Science.gov (United States)

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  15. 3D Printing an Octohedron

    OpenAIRE

    Aboufadel, Edward F.

    2014-01-01

    The purpose of this short paper is to describe a project to manufacture a regular octohedron on a 3D printer. We assume that the reader is familiar with the basics of 3D printing. In the project, we use fundamental ideas to calculate the vertices and faces of an octohedron. Then, we utilize the OPENSCAD program to create a virtual 3D model and an STereoLithography (.stl) file that can be used by a 3D printer.
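
    A small sketch of the same construction in Python rather than OpenSCAD (the circumradius value and output file name are arbitrary choices): compute the six vertices and eight triangular faces of a regular octahedron and write them out as an ASCII STL file that a slicer can read.

```python
# Compute the six vertices and eight faces of a regular octahedron and write
# an ASCII STL file; a Python sketch of the idea, not the paper's OpenSCAD code.
import numpy as np

r = 10.0  # circumradius in millimetres (arbitrary choice for illustration)
vertices = np.array([[ r, 0, 0], [-r, 0, 0],
                     [ 0, r, 0], [ 0,-r, 0],
                     [ 0, 0, r], [ 0, 0,-r]], dtype=float)
# Each face joins one +/-x vertex, one +/-y vertex and one +/-z vertex,
# wound so that the facet normals point outward.
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

with open("octahedron.stl", "w") as f:
    f.write("solid octahedron\n")
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)
        n = n / np.linalg.norm(n)
        f.write(f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}\n")
        f.write("    outer loop\n")
        for v in (a, b, c):
            f.write(f"      vertex {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid octahedron\n")
```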

  16. Salient Local 3D Features for 3D Shape Retrieval

    CERN Document Server

    Godil, Afzal

    2011-01-01

    In this paper we describe a new formulation for the 3D salient local features based on the voxel grid inspired by the Scale Invariant Feature Transform (SIFT). We use it to identify the salient keypoints (invariant points) on a 3D voxelized model and calculate invariant 3D local feature descriptors at these keypoints. We then use the bag of words approach on the 3D local features to represent the 3D models for shape retrieval. The advantages of the method are that it can be applied to rigid as well as to articulated and deformable 3D models. Finally, this approach is applied for 3D Shape Retrieval on the McGill articulated shape benchmark and then the retrieval results are presented and compared to other methods.
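
    The bag-of-words step can be sketched as follows, assuming scikit-learn for the clustering; the voxel-based SIFT keypoints and descriptors from the paper are replaced here by random placeholder arrays, so this only illustrates the vocabulary-building and histogram-encoding stage.

```python
# Sketch of the bag-of-words step only (the voxel-based SIFT keypoints and
# descriptors from the paper are replaced here by random placeholders).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical local descriptors: one array of shape (n_keypoints, dim) per model.
model_descriptors = [rng.normal(size=(int(rng.integers(50, 120)), 32)) for _ in range(20)]

# 1) Learn a visual vocabulary by clustering all descriptors from all models.
kmeans = KMeans(n_clusters=64, n_init=10, random_state=0)
kmeans.fit(np.vstack(model_descriptors))

# 2) Represent each model as a normalized histogram of visual-word occurrences.
def bow_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

signatures = np.array([bow_histogram(d) for d in model_descriptors])

# 3) Retrieval: rank models by histogram similarity to a query (here, model 0).
scores = signatures @ signatures[0]
print("ranking:", np.argsort(-scores)[:5])
```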

  17. VIRTUAL 3D CITY MODELING: TECHNIQUES AND APPLICATIONS

    OpenAIRE

    S. P. Singh; K. Jain; V. R. Mandla

    2013-01-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to the urban area. There are various terms used for 3D city models such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically a computerized or digital model of a city that contains the graphic representation of buildings and other objects in 2.5 or 3D. Generally three main Geomatics approach ...

  18. X3d2pov. Traductor of X3D to POV-Ray

    Directory of Open Access Journals (Sweden)

    Andrea Castellanos Mendoza

    2011-01-01

    Full Text Available High-quality and low-quality interactive graphics represent two different approaches to computer graphics' 3D object representation. The former is mainly used to produce high computational cost movie animation. The latter is used for producing interactive scenes as part of virtual reality environments. Many file format specifications have appeared to satisfy underlying model needs; POV-Ray (persistence of vision) is an open source specification for rendering photorealistic images with the ray tracer algorithm and X3D (extensible 3D) as the VRML successor standard for producing web virtual-reality environments written in XML. X3D2POV has been introduced to render high-quality images from an X3D scene specification; it is a grammar translator tool from X3D code to POV-Ray code.

  19. 3D modelling and recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan; Alboul, Lyuba; Brink, Willie

    2006-01-01

    3D face recognition is an open field. In this paper we present a method for 3D facial recognition based on Principal Components Analysis. The method uses a relatively large number of facial measurements and ratios and yields reliable recognition. We also highlight our approach to sensor development for fast 3D model acquisition and automatic facial feature extraction.

  20. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume.

    Science.gov (United States)

    Pizlo, Zygmunt; Sawada, Tadamasa; Li, Yunfeng; Kropatsch, Walter G; Steinman, Robert M

    2010-01-01

    This paper reviews recent progress towards understanding 3D shape perception made possible by appreciating the significant role that veridicality and complexity play in the natural visual environment. The ability to see objects as they really are "out there" is derived from the complexity inherent in the 3D object's shape. The importance of both veridicality and complexity was ignored in most prior research. Appreciating their importance made it possible to devise a computational model that recovers the 3D shape of an object from only one of its 2D images. This model uses a simplicity principle consisting of only four a priori constraints representing properties of 3D shapes, primarily their symmetry and volume. The model recovers 3D shapes from a single 2D image as well as, and sometimes even better than, a human being. In the rare recoveries in which errors are observed, the errors made by the model and human subjects are very similar. The model makes no use of depth, surfaces or learning. Recent elaborations of this model include: (i) the recovery of the shapes of natural objects, including human and animal bodies with limbs in varying positions; (ii) providing the model with two input images that allowed it to achieve virtually perfect shape constancy from almost all viewing directions. The review concludes with a comparison of some of the highlights of our novel, successful approach to the recovery of 3D shape from a 2D image with prior, less successful approaches. PMID:19800910

  1. 3D Reconstruction by Kinect Sensor:A Brief Review

    Institute of Scientific and Technical Information of China (English)

    LI Shi-rui; TAO Ke-lu; WANG Si-yuan; LI Hai-yang; CAO Wei-guo; LI Hua

    2014-01-01

    While Kinect was originally designed as a motion sensing input device of the gaming console Microsoft Xbox 360 for gaming purposes, its ease of use, low cost, reliability, and the speed and relatively high quality of its depth measurement make it usable for 3D reconstruction. It could make 3D scanning technology more accessible to everyday users and turn 3D reconstruction models into a much more widely used asset for many applications. In this paper, we focus on Kinect 3D reconstruction.
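
    As an illustration of why the Kinect depth stream is convenient for reconstruction, the sketch below back-projects a depth map into a point cloud with the standard pinhole model; the intrinsic parameters are rough, commonly quoted values for the first-generation Kinect and should be replaced by a proper calibration.

```python
# Sketch: back-project a Kinect-style depth map into a 3D point cloud using the
# pinhole model. The intrinsics below are rough, commonly quoted values for the
# first-generation Kinect and should be replaced by a proper calibration.
import numpy as np

def depth_to_points(depth_m, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """depth_m: (H, W) array of depths in metres; returns (N, 3) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop invalid (zero-depth) pixels

# Toy usage with a synthetic flat wall 2 m away.
demo = np.full((480, 640), 2.0)
print(depth_to_points(demo).shape)
```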

  2. 3D Facial Depth Map Recognition in Different Poses with Surface Contour Feature

    Institute of Scientific and Technical Information of China (English)

    叶长明; 蒋建国; 詹曙; ANDO Shigeru

    2013-01-01

    Three-dimensional face recognition has drawn more and more attention, since it overcomes the shortcoming of two-dimensional face recognition of being susceptible to the influence of illumination, expression changes and pose variations. A face recognition method, Fourier descriptor and contour (FDAC), is proposed in this paper. It is based on the depth maps acquired by a three-dimensional facial imaging system in different poses. Firstly, the depth maps are corrected under the guidance of differential geometry theory, and the facial features are described by surface contours. Secondly, the Fourier descriptor is employed to extract the facial features. Finally, these extracted features are used in the face recognition process. Experimental results show that FDAC has good recognition accuracy and performs better in time cost compared with the Eigenface method.
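
    A generic Fourier-descriptor computation for a closed contour is sketched below with numpy; it is not the FDAC pipeline (no depth-map correction or iso-depth contour extraction), and the circle/ellipse contours are toy inputs.

```python
# Sketch of generic Fourier descriptors for a closed contour (illustrative only;
# the FDAC method extracts its contours from corrected 3D face depth maps,
# which is not reproduced here).
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16):
    """contour_xy: (N, 2) ordered boundary points; returns normalized |FFT| coefficients."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex contour signature
    spectrum = np.fft.fft(z)
    mags = np.abs(spectrum[1:n_coeffs + 1])        # drop DC term (translation invariance)
    return mags / mags[0]                          # divide by first harmonic (scale invariance)

# Toy usage: descriptors of a circle vs. an ellipse.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
print(np.round(fourier_descriptors(circle, 4), 3))
print(np.round(fourier_descriptors(ellipse, 4), 3))
```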

  3. 3D Human cartilage surface characterization by optical coherence tomography

    Science.gov (United States)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman’s rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D
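
    For illustration, a few ISO 25178-style height parameters (Sa, Sq, Sz) can be computed from a levelled height map as below; this is only a sketch with numpy and does not reproduce the study's OCT surface extraction, filtering, or its functional, hybrid and segmentation-related parameters.

```python
# Illustrative computation of a few ISO 25178-style height parameters (Sa, Sq,
# Sz) from a levelled surface-height map; the study's full pipeline is not shown.
import numpy as np

def height_parameters(height_map):
    z = height_map - height_map.mean()       # remove the mean plane offset
    sa = np.mean(np.abs(z))                  # Sa: arithmetic mean height
    sq = np.sqrt(np.mean(z ** 2))            # Sq: root-mean-square height
    sz = z.max() - z.min()                   # Sz: maximum height of the surface
    return {"Sa": sa, "Sq": sq, "Sz": sz}

# Toy usage: a gently waved surface with added roughness (units arbitrary).
y, x = np.mgrid[0:256, 0:256]
surface = 0.5 * np.sin(x / 20.0) + 0.05 * np.random.default_rng(1).normal(size=x.shape)
print(height_parameters(surface))
```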

  4. Measuring the thickness of protective coatings on historic metal objects using nanosecond and femtosecond laser induced breakdown spectroscopy depth profiling

    Science.gov (United States)

    Pouli, P.; Melessanaki, K.; Giakoumaki, A.; Argyropoulos, V.; Anglos, D.

    2005-08-01

    Depth profile analysis by means of laser induced breakdown spectroscopy (LIBS) was investigated with respect to its potential to measure the thickness of different types of thin organic films used as protective coatings on historical and archaeological metal objects. For the materials examined, acrylic varnish and microcrystalline wax, the output from a nanosecond ArF excimer laser at 193 nm was found appropriate for performing a reliable profiling of the coating films leading to accurate determination of the coating thickness on the basis of the number of laser pulses required to penetrate the coating and on the ablation etch rate of the corresponding coating material under the same irradiation conditions. Nanosecond pulses at 248 nm proved inadequate to profile the coatings because of their weak absorption at the laser wavelength. In contrast, femtosecond irradiation at 248 nm yielded well-resolved profiles as a result of efficient ablation achieved through the increased non-linear absorption induced by the high power density of the ultrashort pulses.

  5. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Directory of Open Access Journals (Sweden)

    Tsap Leonid V

    2006-01-01

    Full Text Available The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  6. Constraints on Moho Depth and Crustal Thickness in the Liguro-Provençal Basin from a 3D Gravity Inversion: Geodynamic Implications

    Directory of Open Access Journals (Sweden)

    Gaulier J. M.

    2006-12-01

    Full Text Available 3D gravity modelling is combined with seismic refraction and reflection data to constrain a new Moho depth map in the Liguro-Provençal Basin (Western Mediterranean Sea). At seismically controlled points, the misfit between the gravimetric solution and the seismic data is about 2 km for a range of Moho depth between 12 km (deep basin) and 30 km (mainlands). The oceanic crust thickness in the deep basin (5 km) is smaller than the average oceanic crust thickness reported in open oceans (7 km), pointing to a potential mantle temperature 30°C to 50°C below normal and/or a very slow oceanic spreading rate. Oceanic crust thickness is decreasing towards the Ligurian Sea and towards the continent-ocean boundary to values as small as 2 km. Poor magma supply is a result of low potential mantle temperature at depth, lateral thermal conduction towards the unextended continental margin, and decrease of the oceanic spreading rate close to the pole of opening in the Ligurian Sea. Re-examination of magnetic data (paleomagnetic data and magnetic lineations) indicates that opening of the Liguro-Provençal Basin may have ceased as late as the Late Burdigalian (16.5 Ma) or even later. The absence of a significant time gap between cessation of opening in the Liguro-Provençal Basin and rifting of the Tyrrhenian domain favours a continuous extension mechanism since the Upper Oligocene driven by the African trench retreat. This report presents joint work with the Laboratoire de géodynamique of the École normale supérieure (ENS). The work should be placed in its context: the regional study of the Gulf of Lion was made possible within the framework of the European Integrated Basin Studies project. The development of the 3D inversion code had been the subject of agreements with the ENS during the preceding years. The implementation of such an inversion is now possible at IFP. There is no interface for this solver. The help of colleagues from the ENS is desirable for the

  7. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Science.gov (United States)

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables obtaining the spectral information and 3D spatial information of an incoherently illuminated or self-luminous object simultaneously. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method regarding the 3D spatial resolution of digital holography. PMID:27139648

  8. Research on stereo video object segmentation and 3D reconstruction

    Institute of Scientific and Technical Information of China (English)

    高韬

    2011-01-01

    To analyse stereo video objects more effectively, this paper proposes a stereo video object segmentation method based on the discrete redundant wavelet transform. First, the method obtains a reliable disparity field by combining feature points extracted with the discrete redundant wavelet transform and a DT-mesh-based disparity estimation, and uses the disparity information to segment stationary objects in the stereo video. For moving objects in the stereo video sequence, the discrete redundant wavelet transform is used to extract the motion region. Experimental results show that the method achieves good segmentation of overlapping video objects and can segment stationary and moving objects at the same time with good accuracy and robustness. The segmented stereo video objects are then reconstructed in 3D using the depth information, representing the visible scene surface with a parametrically deformable, spatially adaptive wireframe model, with good 3D results.

  9. 3DSEM: A 3D microscopy dataset.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  10. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples.

  11. 3D-printed bioanalytical devices

    Science.gov (United States)

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  12. Real time 3D scanner: investigations and results

    Science.gov (United States)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept of reconstruction of 3-D objects using non-invasive and touchless techniques. The principle of this method is to display parallel interference optical fringes on an object and then to record the object under two angles of view. According to an appropriate treatment one reconstructs the 3-D object even when the object has no symmetry plane. The 3-D surface data is available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is dedicated to reconstructive/cosmetic surgery, CAD, animation and research purposes.

  13. Deep Learning Representation using Autoencoder for 3D Shape Retrieval

    OpenAIRE

    Zhu, Zhuotun; Wang, Xinggang; Bai, Song; Yao, Cong; Bai, Xiang

    2014-01-01

    We study the problem of how to build a deep learning representation for 3D shape. Deep learning has shown to be very effective in a variety of visual applications, such as image classification and object detection. However, it has not been successfully applied to 3D shape recognition. This is because 3D shape has complex structure in 3D space and there is a limited number of 3D shapes for feature learning. To address these problems, we project 3D shapes into 2D space and use autoencoder for feat...
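
    A minimal autoencoder over flattened 2D views can be sketched as follows, assuming PyTorch is available; the architecture, input size and training loop are illustrative choices and do not reproduce the paper's model or its projection pipeline.

```python
# Minimal autoencoder sketch (PyTorch assumed); it only illustrates learning a
# compact code from flattened 2D views of 3D shapes, not the paper's architecture.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, n_pixels=32 * 32, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Toy training loop on random "views"; real use would feed rendered projections.
model = ViewAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
views = torch.rand(128, 32 * 32)
for _ in range(50):
    recon, _ = model(views)
    loss = nn.functional.mse_loss(recon, views)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned codes can then serve as retrieval signatures for the shapes.
_, codes = model(views)
print(codes.shape)
```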

  14. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of an ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  15. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of an ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
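
    The two records above rely on manual radiometric editing of the images before photogrammetric processing; a common automatic alternative, shown purely as an illustration and not as the authors' workflow, is contrast-limited adaptive histogram equalization (CLAHE) on the lightness channel, assuming OpenCV is available and using hypothetical file names.

```python
# Illustrative automatic enhancement of an underwater image with CLAHE on the
# L channel (OpenCV assumed); not the manual radiometric editing used above.
import cv2

def enhance_underwater(path_in, path_out):
    bgr = cv2.imread(path_in)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                      # equalize only the lightness channel
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite(path_out, enhanced)

# Hypothetical file names, for illustration only:
# enhance_underwater("seafloor_raw.jpg", "seafloor_enhanced.jpg")
```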

  16. Charge collection characterization of a 3D silicon radiation detector by using 3D simulations

    CERN Document Server

    Kalliopuska, J; Orava, R

    2007-01-01

    In 3D detectors, the electrodes are processed within the bulk of the sensor material. Therefore, the signal charge is collected independently of the wafer thickness and the collection process is faster due to shorter distances between the charge collection electrodes as compared to a planar detector structure. In this paper, 3D simulations are used to assess the performance of a 3D detector structure in terms of charge sharing, efficiency and speed of charge collection, surface charge, location of the primary interaction and the bias voltage. The measured current pulse is proposed to be delayed due to the resistance–capacitance (RC) product induced by the variation of the serial resistance of the pixel electrode depending on the depth of the primary interaction. Extensive simulations are carried out to characterize the 3D detector structures and to verify the proposed explanation for the delay of the current pulse. A method for testing the hypothesis experimentally is suggested.

  17. Familiarization with 3D scanning (3D-skannaukseen perehtyminen)

    OpenAIRE

    Santaluoto, Olli

    2012-01-01

    This engineering thesis examines various 3D scanning techniques and methods. The work also uses examples to describe the applications of different 3D scanning techniques. 3D scanning is still fairly rare in Finland, which is why the different techniques and their possible uses are unknown to many. A 3D scanner is a device used to examine real-world objects or environments by collecting data about the shape of the target. 3D scanners are very similar to ordinary cameras. As with cameras, 3D...

  18. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    Science.gov (United States)

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  19. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Directory of Open Access Journals (Sweden)

    Cristina Losada

    2010-04-01

    Full Text Available This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  20. 3D Printing Functional Nanocomposites

    OpenAIRE

    Leong, Yew Juan

    2016-01-01

    3D printing presents the ability of rapid prototyping and rapid manufacturing. Techniques such as stereolithography (SLA) and fused deposition modeling (FDM) have been developed and utilized since the inception of 3D printing. In such techniques, polymers represent the most commonly used material for 3D printing due to material properties such as thermoplasticity as well as their ability to be polymerized from monomers. Polymer nanocomposites are polymers with nanomaterials composited into the ...

  1. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Science.gov (United States)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
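
    In the spirit of approach (2) above, a fixed depth-mapping step can be sketched as a linear mapping of scene depth into a bounded on-screen disparity budget; the function below is illustrative only and is not the authors' dynamic mapping algorithm (the disparity limits are arbitrary).

```python
# Sketch of a fixed depth-mapping step: scene depth is linearly mapped into a
# bounded on-screen disparity range so perceived depth stays within a budget.
# Illustrative only; not the authors' dynamic mapping algorithm.
import numpy as np

def depth_to_disparity(scene_depth, d_near, d_far, disp_min_px=-20.0, disp_max_px=20.0):
    """Map scene depth in [d_near, d_far] to screen disparity in pixels."""
    t = (np.clip(scene_depth, d_near, d_far) - d_near) / (d_far - d_near)
    # Nearest scene points get the most negative (in-front-of-screen) disparity.
    return disp_min_px + t * (disp_max_px - disp_min_px)

# Toy usage: depths from 1 m to 50 m mapped into a +/-20 pixel disparity budget.
depths = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
print(depth_to_disparity(depths, d_near=1.0, d_far=50.0))
```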

  2. Surface 3-D reflection seismics - implementation at the Olkiluoto site

    International Nuclear Information System (INIS)

    Posiva Oy takes care of the final disposal of spent nuclear fuel in Finland. In year 2001 Olkiluoto was selected as the site of final disposal. Construction of the underground research facility, ONKALO, is going on at the Olkiluoto site. The aim of this work was to study the possibilities for surface 3-D seismics and to review experiences for design before field work. The physical parameters and geometric properties of the site, as well as efficient survey layout and source arrangements, were considered in this work. Reflection seismics is the most used geophysical investigation method in oil exploration and earth studies in sedimentary environments. Recently the method has also been applied in crystalline bedrock for ore exploration and nuclear waste disposal site investigations. The advantage of the method is high accuracy combined with a large depth of investigation. The principles of seismic 2-D and 3-D soundings are well known and advanced. 3-D sounding is a straightforward expansion of 2-D line based surveying. In the investigation of crystalline bedrock, high frequency wave sources and receivers, their right use in measurements and a careful processing procedure (refraction static corrections in particular) are important. Using the site parameters in 2-D numerical modeling, two cases of a faulted thin layer at depths of 200, 400 and 600 meters were studied. The first case was a layer with a vertical dislocation (a ramp) and the other a layer having a dislocated part of limited width. Central frequencies were 100, 200, 400 and 700 Hz. Results indicate that a 10 - 20 m dislocation is recognizable, but for depths greater than 600 m, over 20 meters is required. The width of the dislocated part will affect the detectability of the vertical displacement. At depths of 200 m and 400 m, 10 - 50 m wide parts appear as point-like scatterers, while wider areas have more continuity. Dislocations larger than 20 m can be seen. From a depth of 600 m, parts over 100 m wide are discernible, narrower ones are visible

  3. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  4. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  5. Application Experience of 3D Animation Design in College Students' Practice Innovation Training Projects: Taking the 3D Animation Demonstration Project of the "Dafu" Membrane Wastewater Advanced Treatment Process as an Example

    Institute of Scientific and Technical Information of China (English)

    杨恒; 陈仲先

    2014-01-01

    The college students' innovation training program provides a platform for cultivating students' innovation ability and comprehensively improving their overall quality. Drawing on the author's practical experience as a 3D animation teacher supervising a student innovation project, and taking the 3D animation demonstration project of the "Dafu" membrane wastewater advanced treatment process as an example, this article presents the project's research objectives, research process, research results and lessons learned, as well as the project's innovations and characteristics, in the hope of providing useful help for the continuous improvement and smooth implementation of future student innovation projects.

  6. Interactive 3D multimedia content

    CERN Document Server

    Cellary, Wojciech

    2012-01-01

    The book describes recent research results in the areas of modelling, creation, management and presentation of interactive 3D multimedia content. The book describes the current state of the art in the field and identifies the most important research and design issues. Consecutive chapters address these issues. These are: database modelling of 3D content, security in 3D environments, describing interactivity of content, searching content, visualization of search results, modelling mixed reality content, and efficient creation of interactive 3D content. Each chapter is illustrated with example a

  7. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  8. 3-D printers for libraries

    CERN Document Server

    Griffey, Jason

    2014-01-01

    As the maker movement continues to grow and 3-D printers become more affordable, an expanding group of hobbyists is keen to explore this new technology. In the time-honored tradition of introducing new technologies, many libraries are considering purchasing a 3-D printer. Jason Griffey, an early enthusiast of 3-D printing, has researched the marketplace and seen several systems first hand at the Consumer Electronics Show. In this report he introduces readers to the 3-D printing marketplace, covering such topics as how fused deposition modeling (FDM) printing works and basic terminology such as build

  9. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  10. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen;

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined... with abstracted representations embedded in a 3D space. The interactions between the development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  11. Superplot3d: an open source GUI tool for 3d trajectory visualisation and elementary processing.

    Science.gov (United States)

    Whitehorn, Luke J; Hawkes, Frances M; Dublon, Ian An

    2013-09-30

    When acquiring simple three-dimensional (3d) trajectory data it is common to accumulate large coordinate data sets. In order to examine integrity and consistency of object tracking, it is often necessary to rapidly visualise these data. Ordinarily, to achieve this the user must either execute 3d plotting functions in a numerical computing environment or manually inspect data in two dimensions, plotting each individual axis. Superplot3d is an open source MATLAB script which takes tab delineated Cartesian data points in the form x, y, z and time and generates an instant visualization of the object's trajectory in free-rotational three dimensions. Whole trajectories may be instantly presented, allowing for rapid inspection. Executable from the MATLAB command line (or deployable as a compiled standalone application) superplot3d also provides simple GUI controls to obtain rudimentary trajectory information, allow specific visualization of trajectory sections and perform elementary processing. Superplot3d thus provides a framework for non-programmers and programmers alike, to recreate recently acquired 3d object trajectories in rotatable 3d space. It is intended, via the use of a preference driven menu, to be flexible and work with output from multiple tracking software systems. Source code and accompanying GUIDE .fig files are provided for deployment and further development.
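
    Superplot3d itself is a MATLAB script, but the core operation it automates (loading tab-delimited x, y, z, time columns and drawing a rotatable 3-D trajectory) is easy to sketch in Python; the snippet below is a rough analogue, not part of the tool, and the file name trajectory.tsv is purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Load tab-delimited columns x, y, z, time (file name is an assumption).
x, y, z, t = np.loadtxt("trajectory.tsv", delimiter="\t", unpack=True)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # rotatable 3-D axes (recent matplotlib)
ax.plot(x, y, z, lw=0.8)
ax.scatter(x[0], y[0], z[0], color="green", label="start")
ax.scatter(x[-1], y[-1], z[-1], color="red", label="end")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.legend()
plt.show()
```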

  12. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... be concerned that 3-D movies, TV or video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, which may indicate that the viewer has a problem with focusing or depth ...

  13. Are 3-D Movies Bad for Your Eyes?

    Medline Plus

    Full Text Available ... be concerned that 3-D movies, TV or video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, which may indicate that the viewer has a problem with focusing or depth perception. Also, the techniques ...

  14. Binary pattern analysis for 3D facial action unit detection

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied, a

  15. PRODUCTION WITH 3D PRINTERS IN TEXTILES [REVIEW

    OpenAIRE

    KESKIN Reyhan; GOCEK Ikilem

    2015-01-01

    3D printers are gaining more attention, finding different applications and 3D printing is being regarded as a ‘revolution’ of the 2010s for production. 3D printing is a production method that produces 3-dimensional objects by combining very thin layers over and over to form the object using 3D scanners or via softwares either private or open source. 3D printed materials find application in a large range of fields including aerospace, automotive, medicine and material science. There are severa...

  16. 3D Scanning With a Mobile Phone and Other Methods

    OpenAIRE

    Eklund, Andreas

    2016-01-01

    The aim of this thesis was to use a mobile phone for 3D scanning using an application called 123D Catch. Other 3D scanning methods were used to compare different types of 3D scanning. Common 3D scanning methods available and their uses are presented in this work. A professional 3D scanner was used to get precise scan data on an object which was then used as reference for the lower tech methods. Scanning with a mobile phone means taking 2D photographs of an object from different angles. T...

  17. Biologically Inspired Model for Inference of 3D Shape from Texture.

    Science.gov (United States)

    Gomez, Olman; Neumann, Heiko

    2016-01-01

    A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer.
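
    As a very loose, single-scale illustration of the texture-energy-gradient cue described above (not the authors' hierarchical cortical model), one can sum oriented Gabor filter energy over a few orientations and frequencies and take its spatial gradient; the filter frequencies, orientation count, and smoothing below are arbitrary assumptions.

```python
import numpy as np
from skimage.filters import gabor
from scipy.ndimage import gaussian_filter

def texture_energy_gradient(image, frequencies=(0.1, 0.2, 0.3), n_orient=6):
    """Crude stand-in for the texture-energy-gradient cue: sum normalized
    oriented-filter energy over orientations and frequencies, then take its
    spatial gradient as a coarse indicator of local surface slant."""
    energy = np.zeros_like(image, dtype=float)
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            energy += np.hypot(real, imag)          # oriented energy
    energy = gaussian_filter(energy / energy.max(), sigma=3)  # grouping stand-in
    gy, gx = np.gradient(energy)
    return energy, np.hypot(gx, gy)                 # texture-energy gradient magnitude
```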

  18. Biologically Inspired Model for Inference of 3D Shape from Texture.

    Science.gov (United States)

    Gomez, Olman; Neumann, Heiko

    2016-01-01

    A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387

  19. 3D acoustic imaging applied to the Baikal neutrino telescope

    International Nuclear Information System (INIS)

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.
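
    The geometry of such a localization can be sketched with a simple range-and-trilateration calculation; the sound speed below is an assumed nominal value for cold fresh water, and the matched-filter processing of the sweep-spread signals themselves is not shown.

```python
import numpy as np
from scipy.optimize import least_squares

C_WATER = 1430.0  # assumed nominal sound speed in cold fresh water, m/s

def echo_range(round_trip_time_s):
    """Range from a broad-band echo: half the round-trip travel time times sound speed."""
    return 0.5 * C_WATER * round_trip_time_s

def locate(projectors, ranges, x0=(0.0, 0.0, 1000.0)):
    """Least-squares 3-D position of a reflector from ranges seen by several projectors."""
    residuals = lambda p: np.linalg.norm(projectors - p, axis=1) - ranges
    return least_squares(residuals, x0).x

# Toy geometry: four projectors at the corners of a 50 m square near the surface,
# reflector (e.g. a string buoy) roughly 1100 m below.
projectors = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [50, 50, 0]], float)
target = np.array([20.0, 30.0, 1100.0])
round_trip = 2.0 * np.linalg.norm(projectors - target, axis=1) / C_WATER  # simulated echoes
print(locate(projectors, echo_range(round_trip)))  # ~[20, 30, 1100]
```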

  20. Electrospun 3D Fibrous Scaffolds for Chronic Wound Repair

    Directory of Open Access Journals (Sweden)

    Huizhi Chen

    2016-04-01

    Full Text Available Chronic wounds are difficult to heal spontaneously largely due to the corrupted extracellular matrix (ECM) where cell ingrowth is obstructed. Thus, the objective of this study was to develop a three-dimensional (3D) biodegradable scaffold mimicking native ECM to replace the missing or dysfunctional ECM, which may be an essential strategy for wound healing. The 3D fibrous scaffolds of poly(lactic acid-co-glycolic acid) (PLGA) were successfully fabricated by liquid-collecting electrospinning, with 5~20 µm interconnected pores. Surface modification with the native ECM component aims at providing biological recognition for cell growth. Human dermal fibroblasts (HDFs) successfully infiltrated into scaffolds at a depth of ~1400 µm after seven days of culturing, and showed significant progressive proliferation on scaffolds immobilized with collagen type I. In vivo models showed that chronic wounds treated with scaffolds had a faster healing rate. These results indicate that the 3D fibrous scaffolds may be a potential wound dressing for chronic wound repair.

  1. 3D acoustic imaging applied to the Baikal neutrino telescope

    Energy Technology Data Exchange (ETDEWEB)

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  2. Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    OpenAIRE

    Niclass, Cristiano; Rochas, Alexis; Besse, Pierre-André; Charbon, Edoardo

    2005-01-01

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment are needed. Millimetric depth accuracies can b...
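
    The pixel-level principle is the time-of-flight relation between round-trip travel time and distance; the tiny sketch below (numbers purely illustrative, not from the sensor described) shows why picosecond timing resolution translates into millimetric depth accuracy.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_depth(time_of_flight_s):
    """Convert the round-trip time of a reflected light pulse to target distance."""
    return 0.5 * C * time_of_flight_s

# A round-trip timing resolution of ~6.7 ps corresponds to roughly 1 mm of depth:
print(tof_to_depth(6.67e-12))  # ~0.001 m
```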

  3. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    Science.gov (United States)

    . K. Chow, J. C.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) were analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever-arm, and object-space feature parameters was developed and the effectiveness of this calibration approach was explored.
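
    For orientation, the generic triangulation relation behind such sensors links disparity, focal length and baseline to depth, and explains why the random depth error grows with range; the parameter values in the sketch are illustrative assumptions, not calibrated Kinect constants.

```python
def triangulation_depth(disparity_px, focal_px, baseline_m):
    """Generic triangulation: depth is inversely proportional to measured disparity."""
    return focal_px * baseline_m / disparity_px

def depth_precision(depth_m, focal_px, baseline_m, disparity_noise_px=0.1):
    """First-order error propagation: random depth error grows with the square of depth."""
    return depth_m ** 2 * disparity_noise_px / (focal_px * baseline_m)

# Illustrative numbers only:
z = triangulation_depth(disparity_px=40.0, focal_px=580.0, baseline_m=0.075)
print(z, depth_precision(z, focal_px=580.0, baseline_m=0.075))
```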

  4. Improvement of 3D Scanner

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The disadvantages remaining in current 3D scanning systems and their causes are discussed. A new host-and-slave architecture with a high-speed image acquisition and processing system is proposed to speed up image processing and improve the performance of the 3D scanning system.

  5. 3D Printing for Bricks

    OpenAIRE

    ECT Team, Purdue

    2015-01-01

    Building Bytes, by Brian Peters, is a project that uses desktop 3D printers to print bricks for architecture. Instead of an expensive custom-made printer, it uses a standard desktop 3D printer that is available to everyone, which makes the approach more accessible and fabrication easier.

  6. Magnetic Gradient Horizontal Operator (MHGO) useful for detecting objects buried at shallow depth: cultural heritage (Villa degli Antonini, Rota Rio)

    Science.gov (United States)

    Di Filippo, Michele; Di Nezza, Maria

    2016-04-01

    Several factors were taken into consideration in order to appropriately tailor the geophysical exploration of these cultural heritage sites. Given that each site had been neglected for a long time and in recent times used as an illegal dumping area, we thoroughly evaluated the advantages and limitations of each specific technique, as well as the general conditions and history of the site. We took into account the extension of the areas to be investigated and the need for rapid data acquisition and processing. Furthermore, the survey required instrumentation sensitive to small background contrasts and as little as possible affected by background noise sources. In order to ascertain the existence and location of buried walls, a magnetic gradiometer survey (MAG) was planned. The map of the magnetic anomalies was computed not with reduction to the pole (RTP) but with a magnetic horizontal gradient operator (MHGO). The MHGO generates, from a grid of vertical-gradient values, a grid of steepest slopes (i.e. the magnitude of the horizontal gradient) at any point on the surface. The MHGO is reported as a number (rise over run) rather than in degrees, and its direction is opposite to that of the slope. The MHGO is zero for a horizontal surface and approaches infinity as the slope approaches the vertical. The gradient data are especially useful for detecting objects buried at shallow depth. The map reveals some details of the anomalies of the geomagnetic field: magnetic anomalies due to walls are more evident than in the total-intensity map, whereas anomalies due to concentrations of debris are very weak. In this work we describe the results obtained with the magnetometric investigation of two archaeological sites: "Villa degli Antonini" (Genzano, Rome) and Rota Ria (Mugnano in Teverina, Viterbo). Since the main goal of the investigation was to understand the nature of magnetic anomalies with cost
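
    A minimal numerical sketch of such a horizontal-gradient operator is given below; it is applied here to an arbitrary gridded field, and the grid spacing and synthetic anomaly are assumptions for illustration, not the survey data.

```python
import numpy as np

def horizontal_gradient(grid, dx=1.0, dy=1.0):
    """Steepest slope (magnitude) and downhill direction of a gridded field,
    in the spirit of the horizontal-gradient operator described above."""
    gy, gx = np.gradient(grid, dy, dx)
    magnitude = np.hypot(gx, gy)                   # rise over run; zero for a flat surface
    direction = np.degrees(np.arctan2(-gy, -gx))   # opposite to the upslope direction
    return magnitude, direction

# Toy example on a synthetic anomaly grid (values purely illustrative).
x, y = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(-50, 50, 101))
anomaly = np.exp(-((x - 10) ** 2 + y ** 2) / 200.0)
mag, direc = horizontal_gradient(anomaly, dx=1.0, dy=1.0)
```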

  7. 3D strata objects registration for Malaysia within the LADM framework

    NARCIS (Netherlands)

    Zulkifli, N.A.; Abdul Rahman, A.; Van Oosterom, P.J.M.

    2011-01-01

    This paper discusses 3D objects registration and modelling for cadastral objects within the Land Administration Domain Model (LADM) framework. A conceptual model as well as the associated technical model for the 2D and 3D objects have been proposed and developed for Malaysia. For both private and pu

  8. 3D printing in dentistry.

    Science.gov (United States)

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  9. 3D printing in dentistry.

    Science.gov (United States)

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  10. PLOT3D user's manual

    Science.gov (United States)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  11. Infra Red 3D Computer Mouse

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour; Stoustrup, Jakob

    2000-01-01

    The infra red 3D mouse is a three dimensional input device to a computer. It works by determining the position of an arbitrary object (like a hand) by emitting infra red signals from a number of locations and measuring the reflected intensities. To maximize stability, robustness, and use...

  12. 3D Wire 2015 Gamification Report

    DEFF Research Database (Denmark)

    Moréton, Jordi; Escribano, F.; Farias, J. L.;

    This document is a general report on the implementation of gamification at the 3D Wire 2015 event. As the second gamification experience at this event, we have delved deeply into the previous objectives (attracting the public to exhibition areas less frequented in previous years and enhancing networking) and ha...

  13. PRODUCTION WITH 3D PRINTERS IN TEXTILES [REVIEW

    Directory of Open Access Journals (Sweden)

    KESKIN Reyhan

    2015-05-01

    Full Text Available 3D printers are gaining more attention, finding different applications, and 3D printing is being regarded as a ‘revolution’ of the 2010s for production. 3D printing is a production method that produces three-dimensional objects by combining very thin layers one over another to form the object, using 3D scanners or software, either proprietary or open source. 3D printed materials find application in a large range of fields including aerospace, automotive, medicine and material science. There are several 3D printing methods such as fused deposition modeling (FDM), stereolithographic apparatus (SLA), selective laser sintering (SLS), inkjet 3D printing and laminated object manufacturing (LOM). The 3D printing process involves three steps: production of the 3D model file, conversion of the 3D model file into G-code and printing the object. 3D printing finds a large variety of applications in many fields; however, textile applications of 3D printing remain rare. There are several textile-like 3D printed products, mostly for use in fashion design, for research purposes, for technical textile applications and for substituting traditional textiles such as weft-knitted structures and lace patterns. 3D printed textile-like structures are not yet strong enough for textile applications, as they tend to break easily, and although they have the drape of a textile material, they still lack the flexibility of textiles. 3D printing technology has to improve to produce materials equivalent to textiles, and has to become faster to compete with traditional textile production methods.

  14. Automatic structural matching of 3D image data

    Science.gov (United States)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  15. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used a laser scanner as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software to community images alone, without visiting the site.

  16. Fusion of multisensor passive and active 3D imagery

    Science.gov (United States)

    Fay, David A.; Verly, Jacques G.; Braun, Michael I.; Frost, Carl E.; Racamato, Joseph P.; Waxman, Allen M.

    2001-08-01

    We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.

  17. 3-D Video Processing for 3-D TV

    Science.gov (United States)

    Sohn, Kwanghoon; Kim, Hansung; Kim, Yongtae

    One of the most desirable ways of realizing high quality information and telecommunication services has been called "The Sensation of Reality," which can be achieved by visual communication based on 3-D (Three-dimensional) images. These kinds of 3-D imaging systems have revealed potential applications in the fields of education, entertainment, medical surgery, video conferencing, etc. Especially, three-dimensional television (3-D TV) is believed to be the next generation of TV technology. Figure 13.1 shows how TV's display technologies have evolved, and Fig. 13.2 details the evolution of TV broadcasting as forecasted by the ETRI (Electronics and Telecommunications Research Institute). It is clear that 3-D TV broadcasting will be the next development in this field, and realistic broadcasting will soon follow.

  18. Scientific Objectives of Small Carry-on Impactor (SCI) and Deployable Camera 3 Digital (DCAM3-D): Observation of an Ejecta Curtain and a Crater Formed on the Surface of Ryugu by an Artificial High-Velocity Impact

    Science.gov (United States)

    Arakawa, M.; Wada, K.; Saiki, T.; Kadono, T.; Takagi, Y.; Shirai, K.; Okamoto, C.; Yano, H.; Hayakawa, M.; Nakazawa, S.; Hirata, N.; Kobayashi, M.; Michel, P.; Jutzi, M.; Imamura, H.; Ogawa, K.; Sakatani, N.; Iijima, Y.; Honda, R.; Ishibashi, K.; Hayakawa, H.; Sawada, H.

    2016-10-01

    The Small Carry-on Impactor (SCI) equipped on Hayabusa2 was developed to produce an artificial impact crater on the primitive Near-Earth Asteroid (NEA) 162173 Ryugu (Ryugu) in order to explore the asteroid subsurface material unaffected by space weathering and thermal alteration by solar radiation. An exposed fresh surface by the impactor and/or the ejecta deposit excavated from the crater will be observed by remote sensing instruments, and a subsurface fresh sample of the asteroid will be collected there. The SCI impact experiment will be observed by a Deployable CAMera 3-D (DCAM3-D) at a distance of ˜1 km from the impact point, and the time evolution of the ejecta curtain will be observed by this camera to confirm the impact point on the asteroid surface. As a result of the observation of the ejecta curtain by DCAM3-D and the crater morphology by onboard cameras, the subsurface structure and the physical properties of the constituting materials will be derived from crater scaling laws. Moreover, the SCI experiment on Ryugu gives us a precious opportunity to clarify effects of microgravity on the cratering process and to validate numerical simulations and models of the cratering process.

  19. ADT-3D Tumor Detection Assistant in 3D

    Directory of Open Access Journals (Sweden)

    Jaime Lazcano Bello

    2008-12-01

    Full Text Available The present document describes ADT-3D (Three-Dimensional Tumor Detector Assistant), a prototype application developed to assist doctors in diagnosing, detecting and locating tumors in the brain by using CT scans. The reader may find in this document an introduction to tumor detection; the main goals of ADT-3D; development details; a description of the product; the motivation for its development; a study of the results; and areas of applicability.

  20. SURVEY AND ANALYSIS OF 3D STEGANOGRAPHY

    Directory of Open Access Journals (Sweden)

    K. LAKSHMI

    2011-01-01

    Full Text Available Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., images, audio, and video files. Thanks to the remarkable growth in computational power, current security approaches and techniques are often used together to ensure the security of the secret message. Steganography's ultimate objectives, which are capacity and invisibility, are the main factors that separate it from related techniques. In this paper we focus on steganography in 3D models and conclude with a review and analysis of high-capacity data hiding and low-distortion 3D models.
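
    As a toy illustration of the capacity-versus-invisibility trade-off mentioned above (a naive parity/LSB-style scheme of our own, not one of the 3D steganography methods surveyed in the paper), one bit per vertex can be hidden in the quantized coordinates of a mesh; the quantization step is an arbitrary assumption.

```python
import numpy as np

def embed_bits(vertices, bits, step=1e-4):
    """Naive illustration only: hide one bit per vertex in the parity of the
    quantized x coordinate. Real 3-D steganography schemes are far more
    sophisticated and far less detectable."""
    v = np.asarray(vertices, dtype=float).copy()
    q = np.round(v[:len(bits), 0] / step).astype(int)
    q += (q % 2) ^ np.asarray(bits, dtype=int)   # flip parity only where it disagrees with the bit
    v[:len(bits), 0] = q * step                  # distortion per vertex is at most one step
    return v

def extract_bits(vertices, n_bits, step=1e-4):
    q = np.round(np.asarray(vertices, dtype=float)[:n_bits, 0] / step).astype(int)
    return (q % 2).tolist()

# Round-trip check on random vertices:
verts = np.random.rand(10, 3)
msg = [1, 0, 1, 1]
print(extract_bits(embed_bits(verts, msg), len(msg)))  # [1, 0, 1, 1]
```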