WorldWideScience

Sample records for 3d object depth

  1. Combining depth and color data for 3D object recognition

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  2. 3D Object Recognition and Facial Identification Using Time-averaged Single-views from Time-of-flight 3D Depth-Camera

    Ding, Hui; Moutarde, Fabien; Shaiek, Ayet

    2010-01-01

    We report here on feasibility evaluation experiments for 3D object recognition and person facial identification from single-view on real depth images acquired with an “off-the-shelf” 3D time-of-flight depth camera. Our methodology is the following: for each person or object, we perform 2 independent recordings, one used for learning and the other one for test purposes. For each recorded frame, a 3D-mesh is computed by simple triangulation from the filtered depth imag...

  3. View-based 3-D object retrieval

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  4. Advanced 3D Object Identification System Project

    National Aeronautics and Space Administration — Optra will build an Advanced 3D Object Identification System utilizing three or more high resolution imagers spaced around a launch platform. Data from each imager...

  5. Lifting Object Detection Datasets into 3D.

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  6. 3D-PRINTING OF BUILD OBJECTS

    SAVYTSKYI M. V.

    2016-03-01

    Raising of the problem. Today, in all spheres of our life we can observe a permanent search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand, more effective in terms of conserving the exhaustible resources of our planet and have minimal impact on the environment, and on the other hand, ensure a higher quality of the final product. Construction is no exception. One promising new technology is the 3D printing of individual structures and of buildings in general. 3D printing is the process of recreating a real object from a 3D model. Unlike a conventional printer, which prints information on a sheet of paper, a 3D printer reproduces three-dimensional information, i.e. it creates physical objects. Currently, 3D printers find application in many areas of production: machine-building elements, a variety of mock-ups, interior elements, and various other items. But because this technology is fairly new, it requires the development of detailed and accurate process technologies, efficient equipment and materials, and a common vocabulary and regulatory framework in this field. Research Aim. The analysis of existing methods of creating physical objects using 3D printing and the improvement of technology and equipment for the printing of buildings and structures. Conclusion. Building 3D printers are a new generation of equipment for the construction of buildings, structures and structural elements. The variety of building printing techniques opens up a wide range of opportunities in the construction industry. At this stage, printer designs allow the creation of low-rise buildings of different configurations with different mortars. The scientific novelty of this work lies in developing proposals to improve the thermal insulation properties of 3D-printed building objects and the related technological equipment. The list of key terms and notions of construction

  7. 3D TV - looking forward in depth

    Direct viewing of remote handling tasks in decommissioning, operation, inspection and repair of nuclear facilities is constrained by the need to contain the workspace and to provide adequate shielding for operators and other staff. Improvements in camera design and display technology, and an understanding of radiation tolerance and human factors, have been brought together at AEA Technology to provide a range of stereoscopic or 3D TV viewing systems. These allow operators to assess conditions accurately in a remote environment, and can be used either to observe or inspect, and to help in completing complex manipulations and tool deployment. (author)

  8. Faint object 3D spectroscopy with PMAS

    Roth, Martin M.; Becker, Thomas; Kelz, Andreas; Bohm, Petra

    2004-09-01

    PMAS is a fiber-coupled lens array type of integral field spectrograph, which was commissioned at the Calar Alto 3.5m Telescope in May 2001. The optical layout of the instrument was chosen such as to provide a large wavelength coverage, and good transmission from 0.35 to 1 μm. One of the major objectives of the PMAS development has been to perform 3D spectrophotometry, taking advantage of the contiguous array of spatial elements over the 2-dimensional field-of-view of the integral field unit. With science results obtained during the first two years of operation, we illustrate that 3D spectroscopy is an ideal tool for faint object spectrophotometry.

  9. Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection

    This work proposes a novel approach to segmenting randomly stacked objects in unstructured 3D point clouds, which are acquired by a random-speckle 3D imaging system for the purpose of automated object detection and reconstruction. An innovative algorithm is proposed; it is based on a novel concept of 3D watershed segmentation and the strategies for resolving over-segmentation and under-segmentation problems. Acquired 3D point clouds are first transformed into a corresponding orthogonally projected depth map along the optical imaging axis of the 3D sensor. A 3D watershed algorithm based on the process of distance transformation is then performed to detect the boundary, called the edge dam, between stacked objects and thereby to segment point clouds individually belonging to two stacked objects. Most importantly, an object-matching algorithm is developed to solve the over- and under-segmentation problems that may arise during the watershed segmentation. The feasibility and effectiveness of the method are confirmed experimentally. The results reveal that the proposed method is a fast and effective scheme for the detection and reconstruction of a 3D object in a random stack of such objects. In the experiments, the precision of the segmentation exceeds 95% and the recall exceeds 80%. (paper)
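
    The record above outlines a concrete pipeline: project the point cloud to a depth map, run a distance transform, and flood the result with a watershed to find the "edge dams" between touching objects. The sketch below illustrates only that core watershed step with standard scipy/scikit-image calls; the marker spacing and the background-depth convention are illustrative assumptions, and the paper's over-/under-segmentation matching stage is not shown.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_stacked_objects(depth_map, background_depth):
        """Split touching objects in an orthographically projected depth map.

        depth_map: 2D array of depth along the sensor's optical axis
                   (assumed: values below background_depth belong to objects).
        Returns a label image; boundaries between labels approximate the
        'edge dams' described in the abstract.
        """
        # Foreground mask: pixels that belong to some object
        foreground = depth_map < background_depth

        # Distance transform: interior pixels of each object get large values
        distance = ndi.distance_transform_edt(foreground)

        # Seeds: local maxima of the distance map (spacing is an assumption)
        peaks = peak_local_max(distance, min_distance=10, exclude_border=False)
        markers = np.zeros(depth_map.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

        # Watershed on the inverted distance map floods each basin separately
        return watershed(-distance, markers, mask=foreground)
    ```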

  10. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We will report our recent developments in DFD (depth-fused 3D) display and arc 3D display, both of which have smooth movement parallax. Firstly, the fatigueless DFD display, composed of only two layered displays with a gap, has continuous perceived depth obtained by changing the luminance ratio between the two images. Two new methods, called "Edge-based DFD display" and "Deep DFD display", have been proposed in order to solve two severe problems of viewing angle and perceived depth limitations. Edge-based DFD display, layered by the original 2D image and its edge part with a gap, can expand the DFD viewing angle limitation both in 2D and 3D perception. Deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Secondly, the Arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved Arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
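
    The DFD principle summarized above fuses a front and a rear panel, with perceived depth controlled by the luminance ratio between them. As a rough illustration (assuming a simple linear mapping from normalized depth to luminance share, which the record does not specify), an image could be split into the two panel layers like this:

    ```python
    import numpy as np

    def dfd_layer_split(image, depth):
        """Split an image into front/rear layers for a two-panel DFD display.

        depth: array in [0, 1], 0 = at the front panel, 1 = at the rear panel.
        Perceived depth between the panels follows the luminance ratio; the
        linear weighting used here is an assumption for illustration only.
        """
        w_front = 1.0 - depth          # luminance share shown on the front panel
        front = image * w_front
        rear = image * (1.0 - w_front)
        return front, rear
    ```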

  11. Precise Depth Image Based Real-Time 3D Difference Detection

    Kahn, Svenja

    2014-01-01

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the ...

  12. PMAS - Faint Object 3D Spectrophotometry

    Roth, M. M.; Becker, T.; Kelz, A.

    2002-01-01

    I will describe PMAS (Potsdam Multiaperture Spectrophotometer), which was commissioned at the Calar Alto Observatory 3.5m Telescope on May 28-31, 2001. PMAS is a dedicated, highly efficient UV-visual integral field spectrograph which is optimized for the spectrophotometry of faint point sources, typically superimposed on a bright background. PMAS is ideally suited for the study of resolved stars in Local Group galaxies. I will present results of our preliminary work with MPFS at the Russian 6m Telescope in Selentchuk, involving the development of new 3D data reduction software, and observations of faint planetary nebulae in the bulge of M31 for the determination of individual chemical abundances of these objects. Using this data, it will be demonstrated that integral field spectroscopy provides superior techniques for background subtraction, avoiding the otherwise inevitable systematic errors of conventional slit spectroscopy. The results will be put in the perspective of the study of resolved stellar populations in nearby galaxies with a new generation of Extremely Large Telescopes.

  13. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  14. Combining different modalities for 3D imaging of biological objects

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 99mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper

  15. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{99m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  16. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  17. Object detection using categorised 3D edges

    Kiforenko, Lilita; Buch, Anders Glent; Bodenhagen, Leon; Krüger, Norbert

    2015-01-01

    In this paper we present an object detection method that uses edge categorisation in combination with a local multi-modal histogram descriptor, all based on RGB-D data. Our target application is robust detection and pose estimation of known objects. We propose to apply a recently introduced edge categorisation algorithm for describing objects in terms of their different edge types. Relying on edge information allows our system to deal with objects with little or no texture or surface variation. We show that edge categorisation improves matching performance due to the higher level of discrimination, which is made possible by the explicit use of edge categories in the feature descriptor. We quantitatively compare our approach with the state-of-the-art template-based Linemod method, which also provides an effective way of dealing with texture-less objects; tests were performed on our own object dataset...

  18. Advanced 3D Object Identification System Project

    National Aeronautics and Space Administration — During the Phase I effort, OPTRA developed object detection, tracking, and identification algorithms and successfully tested these algorithms on computer-generated...

  19. Depth enhancement of S3D content and the psychological effects

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies are now widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content in its use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction, or a forward or backward shift, of the range of maximum parallactic angles in the crossed and uncrossed directions (the depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of the comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation, and heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions toward the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the crossed/uncrossed directions.

  20. 3D hand tracking using Kalman filter in depth space

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
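
    The tracking stage described above is a textbook use of a Kalman filter on the 3D hand position reported by the depth sensor. A minimal constant-velocity sketch is given below; the state layout, frame rate and noise levels are illustrative assumptions, not the authors' settings, and the motion-cluster/wave-motion detection step is omitted.

    ```python
    import numpy as np

    class ConstantVelocityKalman3D:
        """Minimal constant-velocity Kalman filter for a 3D hand position.

        State x = [px, py, pz, vx, vy, vz]; the measurement is the 3D hand
        location from the depth sensor. Noise levels are illustrative only.
        """
        def __init__(self, dt=1 / 30, process_var=1e-2, meas_var=1e-3):
            self.F = np.eye(6)
            self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
            self.Q = process_var * np.eye(6)
            self.R = meas_var * np.eye(3)
            self.x = np.zeros(6)
            self.P = np.eye(6)

        def step(self, z):
            # Predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Update with measurement z = [px, py, pz]
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P
            return self.x[:3]
    ```

    Each frame's detected hand position is passed to step(), which returns the smoothed 3D location used for tracking.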

  1. 3D Image Synthesis for B—Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm based on the AFD-matrix concept is presented for converting an object in 3D space to a 3D image in 3D discrete space.

  2. Depth-based Multi-View 3D Video Coding

    Zamarin, Marco

    improved, both in terms of objective and visual evaluations. Depth coding based on standard H.264/AVC is explored for multi-view plus depth image coding. A single depth map is used to disparity-compensate multiple views and allow more efficient coding than H.264 MVC at low bit rates. Lossless coding of...... number of standard solutions for lossless coding. New approaches for distributed video-plus-depth coding are also presented in this thesis. Motion correlation between the two signals is exploited at the decoder side to improve the performance of the side information generation algorithm. In addition...... on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode...

  3. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through viewers available for free. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  4. Several Strategies on 3D Modeling of Manmade Objects

    SHAO Zhenfeng; LI Deren; CHENG Qimin

    2004-01-01

    Several different strategies of 3D modeling are adopted for different kinds of manmade objects. Firstly, for manmade objects with regular structure, if 2D information is available and elevation information can be obtained conveniently, then 3D modeling can be performed directly. Secondly, for manmade objects with comparatively complicated structure for which a related stereo image pair can be acquired, we complete 3D modeling, in the light of a topology-based 3D model, by integrating automatic and semi-automatic object extraction. Thirdly, for the most complicated objects, whose geometrical information cannot be obtained completely from a stereo image pair, we turn to a topological 3D model based on CAD.

  5. Automation of 3D micro object handling process

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due to the small absolute scale. In this article, the results of the pick-and-place operations of three different 3D micro objects were investigated. This study shows that depending on the correct gripping t...

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  7. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
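
    The approach above hinges on robustly fitting plane segments (e.g. floor and ceiling) to the reliable depth values before completing the map with an MRF solved by graph cuts. The sketch below shows only a generic RANSAC-style plane fit on back-projected 3D points, under an assumed fixed inlier threshold rather than the sensor noise profile used in the paper; the labeling/graph-cut stage is not reproduced.

    ```python
    import numpy as np

    def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
        """Fit a dominant plane n·p + d = 0 to 3D points with a basic RANSAC loop.

        points: (N, 3) array of back-projected depth measurements (meters).
        Returns the plane normal n, offset d, and a boolean inlier mask.
        """
        rng = np.random.default_rng() if rng is None else rng
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                       # degenerate (collinear) sample
            n = n / norm
            d = -n @ sample[0]
            dist = np.abs(points @ n + d)      # point-to-plane distances
            inliers = dist < inlier_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine with a least-squares plane through the inlier centroid
        inlier_pts = points[best_inliers]
        centroid = inlier_pts.mean(axis=0)
        _, _, vt = np.linalg.svd(inlier_pts - centroid)
        n = vt[-1]
        return n, -n @ centroid, best_inliers
    ```

    Once a plane (n, d) is accepted, a depth value for a pixel with an unknown measurement can be proposed by intersecting that pixel's viewing ray with the plane, which is the kind of candidate the MRF labeling then selects among.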

  8. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use information contained in the images, and active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces, molecular structures, etc., is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  9. Efficient and high speed depth-based 2D to 3D video conversion

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be applied to arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the quality of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, and hence directly gives the 3D video output without human intervention; with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
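
    The depth cue named in this record is the image row of each foreground object's bottom-most pixel: objects whose base sits lower in the frame are taken to be closer. A minimal sketch of that assignment is shown below; the foreground mask and the linear row-to-depth mapping are assumptions for illustration, and the paper's full conversion pipeline (including view synthesis) is not shown.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def depth_from_object_base(foreground_mask):
        """Assign a relative depth to each foreground object from the row of its
        bottom-most pixel (lower base = closer, a common monocular heuristic).

        foreground_mask: boolean H x W array separating foreground from background.
        Returns an H x W float map in [0, 1], where 1 means nearest.
        """
        h, _ = foreground_mask.shape
        labels, n = ndi.label(foreground_mask)
        depth = np.zeros(foreground_mask.shape, dtype=float)
        for obj in range(1, n + 1):
            rows = np.nonzero(labels == obj)[0]
            bottom_row = rows.max()                 # bottom-most pixel of the object
            depth[labels == obj] = bottom_row / (h - 1)
        return depth
    ```

    A stereo pair would then be synthesized by shifting pixels horizontally with a disparity proportional to this depth map.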

  10. Object Recognition Using a 3D RFID System

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done with vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object, as well as other tags, exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  11. Monocular model-based 3D tracking of rigid objects

    Lepetit, Vincent

    2014-01-01

    Many applications require tracking complex 3D objects. These include visual servoing of robotic arms on specific target objects, Augmented Reality systems that require real-time registration of the object to be augmented, and head tracking systems that sophisticated interfaces can use. Computer vision offers solutions that are cheap, practical and non-invasive. "Monocular Model-Based 3D Tracking of Rigid Objects" reviews the different techniques and approaches that have been developed by industry and research. First, important mathematical tools are introduced: camera representation, robust e

  12. Embedding objects during 3D printing to add new functionalities.

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  13. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    L. Gao

    2015-08-01

    With the development of the economy, the fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, spatial database updating, etc. 3D GIS has attracted more and more attention from academia, industry and government with the increasing requirements for interoperability and integration of different sources of data. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects created by integrating 2D Road Polygon data with LiDAR point clouds and other height information, such as Spot Height data, in Hong Kong Island. The quality of the created 3D road polygon data set is evaluated in terms of vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error and completeness error, and the final results are presented.

  14. A Primitive-Based 3D Object Recognition System

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge to match 3D object models to the image data through pre-defined primitives. The primitives we have selected, to begin with, are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based subsystems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  15. Semantic 3D object maps for everyday robot manipulation

    Rusu, Radu Bogdan

    2013-01-01

    The book written by Dr. Radu B. Rusu presents a detailed description of 3D Semantic Mapping in the context of mobile robot manipulation. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models that include the objects present in the world, together with their position, form, and other semantic aspects, as well as interpretations of these objects with respect to the robot tasks.   The book proposes novel 3D feature representations called Point Feature Histograms (PFH), as well as frameworks for the acquisition and processing of Semantic 3D Object Maps with contributions to robust registration, fast segmentation into regions, and reliable object detection, categorization, and reconstruction. These contributions have been fully implemented and empirically evaluated on different robotic systems, and have been the original kernel to the widely successful open-source project the Point Cloud Library (PCL) -- see http://poi...

  16. Automation of 3D micro object handling process

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due to ...

  17. Depth map coding using residual segmentation for 3D video system

    Lee, Cheon; Ho, Yo-Sung

    2013-06-01

    Advanced 3D video systems employ multi-view video-plus-depth data to support free-viewpoint navigation and comfortable 3D viewing; thus efficient depth map coding becomes an important issue. Unlike the color image, the depth map has the property that depth values in the inner part of an object are monotonic, while those at object boundaries change abruptly. Therefore, residual data generated by prediction errors around object boundaries consume many bits in depth map coding. Representing them with segment data can be better than using the conventional transform around the boundary regions. In this paper, we propose an efficient depth map coding method using residual segmentation instead of transformation. The proposed residual segmentation divides the residual data into two regions with a segment map and two mean values. If the encoder selects the proposed method in terms of rates, the two quantized mean values and an index of the segment map are transmitted. Simulation results show significant gains of up to 10 dB compared to state-of-the-art coders such as JPEG2000 and H.264/AVC.
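
    The residual segmentation idea above replaces the transform with a binary segment map plus two mean values per block. A toy version of that representation, assuming a simple mean-threshold split (the actual encoder also performs rate-based mode selection and codes a segment-map index, which is not shown), might look like this:

    ```python
    import numpy as np

    def two_mean_residual_segmentation(residual_block):
        """Represent a residual block by a binary segment map and two mean values.

        Returns the segment map (1 bit per pixel), the (low, high) means, and the
        reconstruction built from only those three pieces of information.
        """
        threshold = residual_block.mean()
        segment_map = residual_block >= threshold
        mean_high = residual_block[segment_map].mean() if segment_map.any() else 0.0
        mean_low = residual_block[~segment_map].mean() if (~segment_map).any() else 0.0
        # Decoder-side reconstruction uses only the two means and the binary map
        reconstruction = np.where(segment_map, mean_high, mean_low)
        return segment_map, (mean_low, mean_high), reconstruction
    ```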

  18. Modeling real conditions of 'Ukrytie' object in 3D measurement

    The article covers a technology for creating, on the basis of design software (AutoCAD) and computer graphics and animation packages (3D Studio, 3DS MAX), a 3D model of the geometrical parameters of the current condition of the building structures, technological equipment, fuel-containing materials, concrete and water of the ruined Unit 4, the 'Ukryttia' object, of the Chernobyl NPP. The model built using this technology will later be applied as a basis for automating the design and computer modeling of processes at the 'Ukryttia' object

  19. Algorithms for Haptic Rendering of 3D Objects

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  20. Tracking objects in 3D using Stereo Vision

    Endresen, Kai Hugo Hustoft

    2010-01-01

    This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real-time by matching regions in two images, and calculating the disparities between them.

  1. Radiographic Imagery of a Variable Density 3D Object

    Justin Stottlemyer

    2010-01-01

    The purpose of this project is to develop a mathematical model to study 4D (three spatial dimensions plus density) shapes using 3D projections. In the model, the projection is represented as a function that can be applied to data produced by a radiation detector. The projection is visualized as a three-dimensional graph where the x and y coordinates represent position and the z coordinate corresponds to the object's density and thickness. Contour plots of such 3D graphs can be used to construct traditional 2D radiographic images.

  2. 3-D Object Recognition from Point Cloud Data

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
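
    One step the record makes explicit is deriving the bare-earth DEM from the DSM and working with their difference to find above-ground objects such as buildings and trees. A minimal normalized-DSM sketch is given below; the height and size thresholds are illustrative assumptions, and the boundary tracing, regularization and roof-construction stages are not shown.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def above_ground_regions(dsm, dem, min_height=2.0, min_cells=25):
        """Group above-ground cells of a normalized DSM into candidate object regions.

        dsm, dem: 2D arrays of surface and bare-earth elevations on the same grid.
        Returns a label image of candidate building/tree regions and the nDSM.
        """
        ndsm = dsm - dem                      # normalized DSM: height above ground
        mask = ndsm >= min_height             # keep cells that rise above ground
        labels, n = ndi.label(mask)           # group into connected regions
        # Drop tiny regions that are likely noise
        sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, np.nonzero(sizes >= min_cells)[0] + 1)
        return np.where(keep, labels, 0), ndsm
    ```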

  3. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimeter precision. The generated intensity map contains texture data with considerable noise. We used the texture maps for extracting tiepoints and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single depth-map mosaic was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
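
    The second step above estimates a 3D similarity transformation between point clouds from matched 3D tiepoints. A standard closed-form (Umeyama-style) solution is sketched below as an illustration of that step; it is not the authors' implementation, and the tiepoint extraction and ray-traced mosaicking steps are omitted.

    ```python
    import numpy as np

    def similarity_transform_3d(src, dst):
        """Estimate scale s, rotation R and translation t so that dst ≈ s * R @ src + t.

        src, dst: (N, 3) arrays of corresponding 3D tiepoints.
        """
        mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_src, dst - mu_dst
        cov = dst_c.T @ src_c / len(src)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1                    # keep a proper rotation (det = +1)
        R = U @ S @ Vt
        var_src = (src_c ** 2).sum() / len(src)
        s = np.trace(np.diag(D) @ S) / var_src
        t = mu_dst - s * R @ mu_src
        return s, R, t
    ```

    Chaining the pairwise transforms then yields the global transformation of each cloud with respect to the reference one, as in the third step of the pipeline.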

  4. Human efficiency for recognizing 3-D objects in luminance noise.

    Tjan, B S; Braje, W L; Legge, G E; Kersten, D

    1995-11-01

    The purpose of this study was to establish how efficiently humans use visual information to recognize simple 3-D objects. The stimuli were computer-rendered images of four simple 3-D objects--wedge, cone, cylinder, and pyramid--each rendered from 8 randomly chosen viewing positions as shaded objects, line drawings, or silhouettes. The objects were presented in static, 2-D Gaussian luminance noise. The observer's task was to indicate which of the four objects had been presented. We obtained human contrast thresholds for recognition, and compared these to an ideal observer's thresholds to obtain efficiencies. In two auxiliary experiments, we measured efficiencies for object detection and letter recognition. Our results showed that human object-recognition efficiency is low (3-8%) when compared to efficiencies reported for some other visual-information processing tasks. The low efficiency means that human recognition performance is limited primarily by factors intrinsic to the observer rather than the information content of the stimuli. We found three factors that play a large role in accounting for low object-recognition efficiency: stimulus size, spatial uncertainty, and detection efficiency. Four other factors play a smaller role in limiting object-recognition efficiency: observers' internal noise, stimulus rendering condition, stimulus familiarity, and categorization across views. PMID:8533342

  5. Surface reconstruction of 3D objects in computerized tomography

    This paper deals with the problem of surface reconstruction of 3D objects from their boundaries in a family of slice images in computerized tomography (CT). Its mathematical formulation is first given, in which it is considered as a problem of functional minimization. Next, the corresponding Euler partial differential equation is derived and it is then solved by the finite difference method. Numerical solution can be found by using the iterative method

  6. Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge

    Ben Hmida, Helmi; Cruz, Christophe; Boochs, Frank; Nicolle, Christophe

    2013-01-01

    This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins, aiming at combining geometrical analysis of 3D point clouds with specialists' knowledge. Here, we share our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. Thi...

  7. Manipulating 3D Objects with Gaze and Hand Gestures

    Koskenranta, Olli

    2012-01-01

    Gesture-based interaction in consumer electronics is becoming more popular these days, for example, when playing games with Microsoft Kinect, PlayStation 3 Move and Nintendo Wii. The objective of this thesis was to find out how to use gaze and hand gestures for manipulating objects in a 3D space for the best user experience possible. This thesis was made at the University of Oulu, Center for Internet Excellence and was a part of the research project “Chiru”. The goal was to research and p...

  8. Response of 3D Free Rigid Objects under Seismic Excitations

    Yanheng, Li

    2008-01-01

    Previous studies of precariously balanced structures in seismically active regions, which provide important information for aseismatic engineering and theoretical seismology, are almost all founded on an oversimplified assumption: that any practical 3-dimensional structure with special symmetry can be regarded as a 2-dimensional finite object in light of the corresponding symmetry. Thus the complex and troublesome problem of 3D rotation can, mathematically, be reduced to a tractable one of 1D rotation, but at the cost of a distorted description of the real motion in physics. To obtain the actual evolution of precariously balanced structures bearing various levels of ground acceleration, we should address a full 3D calculation. In this study, the responses of a cylinder under a set of half- and full-sine-wave excitations with different frequencies related to seismic ground motion are investigated, drawing on established work from a number of mechanicians. A computer program is also developed possibly to study...

  9. Weighted Unsupervised Learning for 3D Object Detection

    Kamran Kowsari

    2016-01-01

    This paper introduces a novel weighted unsupervised learning method for object detection using an RGB-D camera. This technique is feasible for detecting moving objects in noisy environments captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object as a separate cluster using weighted clustering. In a preprocessing step, the algorithm calculates the 3D position (X, Y, Z) and RGB color of each data point, and then calculates each data point's normal vector using the point's neighbors. After preprocessing, the algorithm calculates k weights for each data point; each weight indicates cluster membership, resulting in the clustered objects of the scene.

  10. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.

  11. A new method to create depth information based on lighting analysis for 2D/3D conversion

    Han, Hyunho; Lee, Gangseong; Lee, Jongyong; Kim, Jinsoo; Lee, Sanghun

    2013-01-01

    A new method for creating depth information for 2D/3D conversion is proposed. The relative distance between objects is determined by the distances between the objects and the light source position, which is estimated by analysis of the image. The estimated lighting value is used to normalize the image. A threshold value is determined by a weighted operation between the original image and the normalized image. By applying the threshold value to the original image, the background area is removed. Depth information for the area of interest is then calculated from the lighting changes. The final 3D images converted with the proposed method are used to verify its effectiveness.

  12. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  13. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
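
    As a rough illustration of the depth-image-based rendering idea surveyed in this book, the sketch below forward-warps a color image into a horizontally shifted virtual view using a quantized depth map. The depth-to-metric conversion follows the common inverse-depth convention, the baseline and focal length are assumed inputs, and hole filling for disocclusions is left out.

    ```python
    import numpy as np

    def dibr_shift_view(color, depth, focal_px, baseline_m, z_near, z_far):
        """Render a horizontally shifted virtual view from color + 8-bit depth.

        depth: 8-bit map where 255 = nearest (z_near) and 0 = farthest (z_far).
        Disocclusions are left as zeros; real systems fill these holes afterwards.
        """
        h, w, _ = color.shape
        # Convert quantized depth to metric Z (inverse-depth convention)
        z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
        disparity = np.round(focal_px * baseline_m / z).astype(int)  # pixel shift

        virtual = np.zeros_like(color)
        z_buffer = np.full((h, w), np.inf)
        ys, xs = np.mgrid[0:h, 0:w]
        xs_new = xs - disparity                 # shift toward the virtual camera
        valid = (xs_new >= 0) & (xs_new < w)
        # Forward warping with a z-buffer (slow Python loop, kept for clarity)
        for y, x, xn in zip(ys[valid], xs[valid], xs_new[valid]):
            if z[y, x] < z_buffer[y, xn]:       # keep the nearest surface
                z_buffer[y, xn] = z[y, x]
                virtual[y, xn] = color[y, x]
        return virtual
    ```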

  14. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has great potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
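
    The two cues studied, overall volume attenuation and high-frequency loss, can be sketched as a distance-dependent gain plus a one-pole low-pass filter. This is only an illustrative rendering model with assumed constants, not the sound-design procedure used in the reported experiments.

```python
import numpy as np

def apply_depth_cues(signal, fs, distance_m, ref_distance_m=1.0):
    """Toy rendering of the two cues: overall volume attenuation
    (inverse-distance gain) and high-frequency loss (a one-pole low-pass
    whose cutoff falls as the source recedes). Constants are illustrative.
    """
    # Volume attenuation: 1/r law relative to a reference distance.
    gain = ref_distance_m / max(distance_m, ref_distance_m)

    # High-frequency loss: cutoff shrinks with distance (assumed mapping).
    cutoff_hz = 12000.0 / (1.0 + distance_m / 10.0)
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole coefficient

    out = np.empty_like(signal, dtype=float)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)        # first-order low-pass filter
        out[i] = gain * y
    return out
```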

  15. Performance Evaluating of some Methods in 3D Depth Reconstruction from a Single Image

    Wen, Wei

    2009-01-01

    We studied the problem of 3D reconstruction from a single image. 3D reconstruction is one of the basic problems in Computer Vision and is usually achieved using two or multiple images of a scene. However, recent research in the Computer Vision field has made it possible to recover 3D information even from a single image. The methods used in such reconstructions are based on depth information, projection geometry, image content, human psychology and so on. Each met...

  16. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high-performance computing systems, so a smart parallelization approach is essential for handling 3D data. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its memory/storage and I/O requirements. In the current research work, we target this area and develop a parallel algorithm for poststack and prestack 3D Kirchhoff depth migration using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the storage, I/O and inter-node communication requirements, making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a supercomputer of the PARAM series. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
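
    A single-node toy of the flexi-depth idea (not the authors' MPI+OpenMP implementation): a constant-velocity, zero-offset Kirchhoff summation in which the image is built one depth slab at a time, so the slab height can be sized to the memory available for traveltime arrays. The straight-ray traveltimes and all names are simplifying assumptions.

```python
import numpy as np

def kirchhoff_poststack_toy(data, dt, trace_x, x_axis, z_axis, v, depth_chunk):
    """data: (n_traces, n_samples) zero-offset section; v: constant velocity.
    trace_x, x_axis, z_axis are numpy arrays of surface and image coordinates.
    """
    n_t = data.shape[1]
    image = np.zeros((len(x_axis), len(z_axis)))

    # "Flexi-depth": migrate the image a slab of depths at a time, so the
    # traveltime arrays held in memory scale with the chunk, not the volume.
    for z0 in range(0, len(z_axis), depth_chunk):
        z_sl = z_axis[z0:z0 + depth_chunk]                      # (nz_chunk,)
        for itr, xs in enumerate(trace_x):
            # Two-way straight-ray traveltime from the surface point xs
            # to every image point in the slab (zero offset => factor 2).
            r = np.sqrt((x_axis[:, None] - xs) ** 2 + z_sl[None, :] ** 2)
            it = np.rint(2.0 * r / (v * dt)).astype(int)
            valid = it < n_t
            # Diffraction summation: spray the trace sample onto the slab.
            image[:, z0:z0 + depth_chunk][valid] += data[itr, it[valid]]
    return image
```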

  17. A Prototypical 3D Graphical Visualizer for Object-Oriented Systems

    1996-01-01

    This paper describes a framework for visualizing object-oriented systems within a 3D interactive environment. The 3D visualizer represents the structure of a program as a Cylinder Net that simultaneously depicts two relationships between objects within the 3D virtual space. Additionally, it represents further relationships on demand when objects are moved into local focus. The 3D visualizer is implemented using a 3D graphics toolkit, TOAST, which provides 3D widgets to ease the programming task of 3D visualization.

  18. Incipit 3D documentations projects: some examples and objectives

    Mañana-Borrazás, Patricia

    2013-01-01

    Presentation of the author and of Incipit and its orientation regarding the use of new technologies applied to 3D documentation of heritage, with special attention to the challenges posed by this type of technology, given at the "Virtual Heritage School on Digital Cultural Heritage 2013 (3D documentation, knowledge repositories and creative industries)", Nicosia, 30 May 2013.

  19. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D-object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  20. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by switching among intersection images that contain the internal structure. Through experiments with CT scan images, the proposed met...

  1. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  2. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  3. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  4. Depth-Based Object Tracking Using a Robust Gaussian Filter

    Issac, Jan; Wüthrich, Manuel; Cifuentes, Cristina Garcia; Bohg, Jeannette; Trimpe, Sebastian; Schaal, Stefan

    2016-01-01

    We consider the problem of model-based 3D-tracking of objects given dense depth images as input. Two difficulties preclude the application of a standard Gaussian filter to this problem. First of all, depth sensors are characterized by fat-tailed measurement noise. To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand. Thereby, we avoid using heuristic outlier detection methods that simply reject measurements i...

  5. Depth and Intensity Gabor Features Based 3D Face Recognition Using Symbolic LDA and AdaBoost

    P. S. Hiremath

    2013-11-01

    Full Text Available In this paper, the objective is to investigate what contributions depth and intensity information make to the solution of the face recognition problem when expression and pose variations are taken into account, and a novel system is proposed for combining depth and intensity information in order to improve face recognition performance. In the proposed approach, local features based on Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selection scheme embedded in symbolic linear discriminant analysis (Symbolic LDA) with AdaBoost learning is proposed to select the most effective and robust features and to construct a strong classifier. Experiments are performed on three datasets, namely the Texas 3D face database, the Bosphorus 3D face database and the CASIA 3D face database, which contain face images with complex variations, including expressions, poses and long time lapses between two scans. The experimental results demonstrate the enhanced effectiveness of the proposed method. Since most of the design processes are performed automatically, the proposed approach leads to a potential prototype design of an automatic face recognition system based on the combination of depth and intensity information in face images.

  6. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We address a problem of DFD displays: the maximum depth difference between the front and rear planes is limited because, beyond it, the front and rear images can no longer be fused into one 3-D image. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change in perceived depth was evaluated when the high-frequency components of the front and rear images were cut off using a Fourier transform, at front-to-rear plane distances of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off sufficiently at the 5 cm distance, perceived depth separated toward the front and rear planes. However, when the images were blurred enough by cutting the high-frequency components, perceived depth had a linear dependency on the luminance ratio. When the images were not blurred at the 10 cm distance, perceived depth separated to near the front plane at 0-30% of rear luminance, near the rear plane at 80-100% and near the midpoint at 40-70%. However, when the images were blurred enough, perceived depth again showed a linear dependency on the luminance ratio.
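
    The linear dependency on luminance ratio reported for sufficiently blurred images can be written as a simple interpolation model; the sketch below is an illustration of that relationship, not code from the study.

```python
def dfd_perceived_depth(front_depth_cm, rear_depth_cm, rear_luminance_ratio):
    """Perceived depth under the linear luminance-ratio model reported for
    sufficiently blurred images: 0% rear luminance -> front plane,
    100% -> rear plane, with linear interpolation in between.
    """
    r = min(max(rear_luminance_ratio, 0.0), 1.0)
    return front_depth_cm + r * (rear_depth_cm - front_depth_cm)

# Example: planes at 50 cm and 60 cm, rear image carrying 30% of luminance.
print(dfd_perceived_depth(50.0, 60.0, 0.3))  # -> 53.0 cm
```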

  7. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  8. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  9. A joint multi-view plus depth image coding scheme based on 3D-warping

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone;

    2011-01-01

    scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding...

  10. Learning the 3-D structure of objects from 2-D views depends on shape, not format.

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-05-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  11. Object-oriented urban 3D spatial data model organization method

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. It implements a city 3D model that quickly builds logical semantic expressions and models, and it solves the city 3D spatial information representation problems of the same location having multiple properties and the same property occurring at multiple locations. It designs the spatial object structures of point, line, polygon and body for a city 3D spatial database, providing a new approach to city 3D GIS modelling and organization management.

  12. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because the aberration severely affects the display performances of the auto-stereoscopic 3D display, the diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and the LCD with two lens-arrays are used to verify the conclusion.

  13. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.

  14. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    Full Text Available A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by switching among intersection images that contain the internal structure. Through experiments with CT scan images, the proposed method is validated. One other applicable area of the proposed method, the design of 3D patterns of Large Scale Integrated circuits (LSI), is also introduced. Layered patterns of an LSI can be displayed and switched using the eyes only. It is confirmed that the time required to display a layer pattern and switch to another layer using the eyes only is much shorter than doing so with hands and fingers.

  15. A Taxonomy of 3D Occluded Objects Recognition Techniques

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and increasing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion is still an unhandled issue that entangles the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users retrieve images while overcoming the problems caused by occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to weigh their pros and cons in solving the occlusion problem, considering the features extracted from an occluded object to distinguish it from co-existing objects and identifying techniques that can differentiate the occluded fragments and sections inside an image.

  16. AFFINE INVARIANT OF 3D OBJECTS USING STATISTICAL AND ALGEBRAIC COEFFICIENTS

    Lhachloufi Mostafa

    2011-03-01

    Full Text Available An increasing number of 3D objects are available on the Internet or in specialized databases, which requires the establishment of description and recognition techniques [1,2,3] to access their content intelligently. In this context, our work presents affine-invariant methods [4,5] for 3D objects. The proposed methods are based on the extraction of statistical and algebraic coefficients from the 3D object; these coefficients remain invariant under affine transformations of the object. In this work, the 3D objects considered are transformations of 3D objects by one element of the overall transformation group, taken here to be the general affine group. The similarity between two descriptor vectors is measured by a similarity function using the Euclidean distance.

  17. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach to using depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the proposed tools reduce the bit rate of the coded video data by 15% on average (delta bit rate reduction), which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency of about 20% total delta bit rate reduction for the coded MVD data over the reference MVC+D coding. Considering the significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  18. Estimation of foot pressure from human footprint depths using 3D scanner

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to studying foot morphology is analysis of footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the difference between the maximum and minimum z coordinates, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z coordinates were then sorted from highest to lowest in Microsoft Excel to display footprint depth in different colors. This is only a qualitative study because no foot pressure device was used as a comparator; the resulting maximum pressures are 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsals and hallux.
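
    The two quantities described, deepest footprint point and average pressure, reduce to simple formulas; the sketch below is a toy illustration with assumed inputs, not the measurement procedure of the study.

```python
import numpy as np

def footprint_metrics(z_coords_mm, grf_newton, contact_area_cm2):
    """Toy version of the two quantities described in the record.

    z_coords_mm      -- z coordinates of the scanned footprint surface
    grf_newton       -- ground reaction force (e.g., body weight) in N
    contact_area_cm2 -- foot-ground contact area in cm^2
    """
    deepest_point_mm = float(np.max(z_coords_mm) - np.min(z_coords_mm))
    avg_pressure_n_per_cm2 = grf_newton / contact_area_cm2
    return deepest_point_mm, avg_pressure_n_per_cm2

# Example: a 700 N subject with ~190 cm^2 of contact -> ~3.7 N/cm^2.
print(footprint_metrics(np.array([12.0, 9.5, 4.1, 0.0]), 700.0, 190.0))
```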

  19. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Dennis Edler

    Full Text Available Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  20. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information. PMID:25679208

  1. A robotic assembly procedure using 3D object reconstruction

    Chrysostomou, Dimitrios; Bitzidou, Malamati; Gasteratos, Antonios

    The use of robotic systems for rapid manufacturing and intelligent automation has attracted growing interest in recent years. Specifically, the generation and planning of an object assembly sequence is becoming crucial as it can reduce significantly the production costs and accelerate the full...... implemented by a 5 d.o.f. robot arm and a gripper. The final goal is to plan a path for the robot arm, consisting of predetermined paths and motions for the automatic assembly of ordinary objects....

  2. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    Shouyi Yin

    2015-06-01

    Full Text Available In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video.
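
    A minimal sketch of the described pipeline, combining a per-shot global depth gradient with a local, color-based refinement; the particular gradient and refinement rules below are illustrative stand-ins, not the paper's exact formulas.

```python
import numpy as np

def toy_depth_map(rgb_frame, shot_global_gradient=None, blend=0.7):
    """Blend a global depth gradient (reused for every frame of the same
    shot) with a local, color-based refinement to get a per-frame depth map.
    """
    h, w, _ = rgb_frame.shape

    if shot_global_gradient is None:
        # Global gradient: far at the top of the frame, near at the bottom,
        # an assumed default for landscape-type shots.
        shot_global_gradient = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

    # Local refinement from color: here simply normalized luminance,
    # nudging brighter pixels (often foreground) toward the viewer.
    lum = rgb_frame.astype(float).mean(axis=2)
    local = (lum - lum.min()) / (np.ptp(lum) + 1e-9)

    depth = blend * shot_global_gradient + (1.0 - blend) * local
    return depth, shot_global_gradient  # reuse the gradient within the shot
```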

  3. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps.

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674

  4. Azimuth–opening angle domain imaging in 3D Gaussian beam depth migration

    Common-image gathers indexed by opening angle and azimuth at imaging points in 3D situations are the key inputs for amplitude-variation-with-angle and velocity analysis by tomography. Gaussian beam depth migration, which propagates each ray as a Gaussian beam and sums the contributions from all the individual beams to produce the wavefield, can overcome the multipath problem, image steep reflectors and, even more importantly, provide a convenient and efficient strategy to extract azimuth–opening angle domain common-image gathers (ADCIGs) in 3D seismic imaging. We present a method for computing azimuth and opening angle at imaging points to output 3D ADCIGs by computing the source and receiver wavefield direction vectors, which are restricted to the effective region of the corresponding Gaussian beams. In this paper, the basic principle of Gaussian beam migration (GBM) is briefly introduced, and the technology and strategy to yield ADCIGs by GBM are analyzed. Numerical tests and a field data application demonstrate that the azimuth–opening angle domain imaging method in 3D Gaussian beam depth migration is effective.

  5. Retrieval of Arbitrary 3D Objects From Robot Observations

    Bore, Nils; Jensfelt, Patric; Folkesson, John

    2015-01-01

    We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top ranked ones can then be compared in a second phase us...

  6. RECONSTRUCCIÓN DE OBJETO 3D A PARTIR DE IMÁGENES CALIBRADAS 3D OBJECT RECONSTRUCTION WITH CALIBRATED IMAGES

    Natividad Grandón-Pastén

    2007-08-01

    Full Text Available This work presents the development of a system for 3D object reconstruction from a collection of views. The system is composed of two main modules. The first carries out the image processing, whose objective is to determine the depth map for a pair of views, where each pair of successive views follows a sequence of phases: interest-point detection, point correspondence and point reconstruction; in the reconstruction process, the parameters that describe the motion (rotation matrix R and translation vector T) between the two views are determined. This sequence of steps is repeated for all pairs of successive views in the set. The objective of the second module is to create the 3D model of the object, for which it must determine the total map of all the 3D points generated in each iteration of the previous module; once the total depth map is obtained, it generates the 3D mesh by applying the Delaunay triangulation method [28]. The results of the reconstruction process are modeled in a VRML virtual environment to obtain a more realistic visualization of the object.
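
    The final meshing step cited above can be sketched with an off-the-shelf Delaunay routine; here a 2.5D variant is assumed (triangulating the x-y projection and lifting z), which is one simple way to realize it, not necessarily the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_points(points_3d):
    """Triangulate reconstructed 3D points using a 2.5D Delaunay approach:
    the Delaunay triangulation is computed on the (x, y) projection and the
    z values are carried along as heights of the resulting mesh vertices.
    """
    points_3d = np.asarray(points_3d, dtype=float)
    tri = Delaunay(points_3d[:, :2])       # triangulate in the x-y plane
    return points_3d, tri.simplices        # vertices and triangle indices

# Example: a small grid of reconstructed points with varying depth.
xy = np.stack(np.meshgrid(np.arange(4.0), np.arange(4.0)), -1).reshape(-1, 2)
pts = np.column_stack([xy, np.sin(xy[:, 0]) + 0.1 * xy[:, 1]])
verts, faces = mesh_from_points(pts)
print(faces.shape)  # (n_triangles, 3)
```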

  7. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  8. Learning Spatial Relations between Objects From 3D Scenes

    Fichtl, Severin; Alexander, John; Guerin, Frank;

    2013-01-01

    Ongoing cognitive development during the first years of human life may be the result of a set of developmental mechanisms which are in continuous operation [1]. One such mechanism identified is the ability of the developing child to learn effective preconditions for their behaviours. It has been suggested [2] that through the application of behaviours involving more than one object, infants begin to learn about the relations between objects.

  9. Deeply Exploit Depth Information for Object Detection

    Hou, Saihui; Wang, Zilei; Wu, Feng

    2016-01-01

    This paper addresses the issue of how to coordinate depth with RGB more effectively, aiming at boosting the performance of RGB-D object detection. In particular, we investigate two primary ideas under the CNN model: property derivation and property fusion. Firstly, we propose that the depth can be utilized not only as a type of extra information besides RGB but also to derive more visual properties for comprehensively describing the objects of interest. So a two-stage learning framework con...

  10. Gravity data inversion as a probe for the 3D shape at depth of granitic bodies

    Granitic intrusions represent potential sites for waste disposal. A well constrained determination of their geometry at depth is of importance to evaluate possible leakage and seepage within the surroundings. Among geophysical techniques, gravity remains the best suited method to investigate the 3D shape of the granitic bodies at depth. During uranium exploration programmes, many plutons emplaced within different geochemical and tectonic environment have been surveyed. The quality of gravity surveying depends on the intrinsic accuracy of the measurements, and also on their density of coverage. A regularly spaced and dense coverage (about 1 point/km2) of measurements over the whole pluton and its nearby surroundings is needed to represent the gravity effect of density variations. This yields a lateral resolution of about 0.5 kilometer, or less depending on depth and roughness of the floor, for the interpretation of the Bouguer anomaly map. We recommend the use of a 3D iterative method of data inversion, simpler to run when the geometry and distribution of the sources are already constrained by surface data. This method must take into account the various density changes within the granite and its surroundings, as well as the regional effect of deep regional sources. A total error in the input data (measurements, densities, regional field) is estimated at 6%. We estimate that the total uncertainty on the calculated depth values does not exceed ± 15%. Because of good coverage of gravity measurements, the overall shape of the pluton is certainly better constrained than the depth values themselves. We present several examples of gravity data inversion over granitic intrusions displaying various 3D morphologies. At a smaller scale mineralizations are also observed above or close to the root zones. Those examples demonstrate the adequacy of joint studies in constraining the mode of magma emplacement before further studies focussing to environmental problems. 59 refs, 9

  11. 3D Spectroscopy of Herbig-Haro objects

    López, R.; Exter, K. M.; García-Lorenzo, B.; Gómez, G.; Riera, A.; Sánchez, S. F.

    2005-01-01

    HH 110 and HH 262 are two Herbig-Haro jets with rather peculiar, chaotic morphology. In both cases, no source suitable to power the jet has been detected along the outflow at optical or radio wavelengths. Both previous data and theoretical models suggest that these objects are tracing an early stage of an HH jet/dense cloud interaction. We present the first results of integral field spectroscopy observations made with the PMAS spectrophotometer (in the PPAK configuration) of these two turbulent jets. New data on the kinematics in several characteristic HH emission lines are shown. In addition, line-ratio maps have been made, suitable to explore the spatial excitation and density conditions of the jets as a function of their kinematics.

  12. A Normalization Method of Moment Invariants for 3D Objects on Different Manifolds

    HU Ping; XU Dong; LI Hua

    2014-01-01

    3D objects can be stored in a computer in different representations, such as point sets, polylines, polygonal surfaces and Euclidean distance maps. Moment invariants of different orders may have different magnitudes. A method for normalizing the moments of 3D objects is proposed, which sets the values of moments of different orders roughly in the same range and can be applied universally to different 3D data formats. Accurate computation of moments for several objects is then presented, and experiments show that this kind of normalization is very useful for moment invariants in 3D object analysis and recognition.
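
    One standard way to keep moments of different orders in a comparable range is scale normalization of the central moments; the sketch below shows that idea for a 3D point set and is an illustration, not the paper's exact scheme.

```python
import numpy as np

def normalized_central_moments(points, max_order=3):
    """Scale-normalized 3D central moments of a point set:
    eta_pqr = mu_pqr / mu_000 ** (1 + (p + q + r) / 3),
    which brings moments of different orders into a comparable range.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    mu000 = float(len(pts))  # zeroth moment of a point set = point count

    etas = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            for r in range(max_order + 1 - p - q):
                mu = np.sum(centered[:, 0] ** p *
                            centered[:, 1] ** q *
                            centered[:, 2] ** r)
                etas[(p, q, r)] = mu / mu000 ** (1.0 + (p + q + r) / 3.0)
    return etas

# Example: normalized second-order moment of a random blob of 3D points.
print(normalized_central_moments(np.random.rand(500, 3))[(2, 0, 0)])
```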

  13. Monocular display unit for 3D display with correct depth perception

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays come in two types of presentation method: one is a 3D display system using special glasses, and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area of the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen and its area cannot be enlarged, for example doubled. To enlarge the display area, the authors have developed a method of extending the display area using a mirror. This extension method lets observers see a virtual image plane and doubles the screen area. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  14. Rapid and Inexpensive Reconstruction of 3D Structures for Micro-Objects Using Common Optical Microscopy

    Berejnov, V V

    2009-01-01

    A simple method of constructing the 3D surface of non-transparent micro-objects by extending the depth of field over the whole attainable surface is presented. A series of images of a sample is recorded by sequentially moving the sample with respect to the microscope focus, so that different portions of the sample surface appear in focus in different images of the series. The indexed series of in-focus portions of the sample surface is combined into one sharp 2D image and interpolated into a 3D surface representing the surface of the original micro-object. For image acquisition and processing we use a conventional, manually operated upright stage microscope, the inexpensive Helicon Focus software, and the open-source MeshLab software. Three objects were tested: an inclined flat glass slide with an imprinted 10 um calibration grid, a regular metal 100x100 per inch mesh, and the highly irregular surface of a material known as a porous electrode used in polyelectrolyte fuel cells. The accuracy of...
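
    The depth-from-focus idea behind the method can be sketched with a per-pixel focus measure over the image stack; the code below is an illustrative stand-in (it is not the Helicon Focus algorithm), with assumed parameter names.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(stack, z_positions, window=9):
    """Depth from focus: for each pixel, pick the slice where a local focus
    measure (locally averaged squared Laplacian) is maximal, giving both an
    all-in-focus composite and a height map from the stage positions.

    stack       -- array of shape (n_slices, H, W), one image per focus step
    z_positions -- stage position of each slice, length n_slices
    """
    stack = np.asarray(stack, dtype=float)
    # Per-slice focus measure: locally averaged squared Laplacian response.
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, size=window)
                          for s in stack])
    best = np.argmax(sharpness, axis=0)                # (H, W) slice indices

    rows, cols = np.indices(best.shape)
    all_in_focus = stack[best, rows, cols]             # composite 2D image
    height_map = np.asarray(z_positions, dtype=float)[best]
    return all_in_focus, height_map
```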

  15. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128-pixel-resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
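
    The Hadamard sampling strategy mentioned in the record can be illustrated with a tiny intensity-only reconstruction (the pulsed, time-resolved depth channel is omitted); all names and sizes are assumptions for the demo.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_demo(n=16, seed=0):
    """Toy single-pixel reconstruction with a complete Hadamard basis.
    Each 'measurement' is the scene's inner product with one projected
    pattern; with the full orthogonal basis, the image is recovered as
    H^T y / N.
    """
    rng = np.random.default_rng(seed)
    scene = rng.random((n, n))                 # unknown image
    x = scene.ravel()

    H = hadamard(n * n).astype(float)          # N x N Hadamard patterns
    y = H @ x                                  # bucket-detector measurements
    x_rec = (H.T @ y) / (n * n)                # exact inverse for orthogonal H

    print("max reconstruction error:", np.abs(x_rec - x).max())
    return x_rec.reshape(n, n)

single_pixel_demo()
```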

  16. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  17. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL, in 2014. They are the most radiation-hard sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data from irradiated and non-irradiated devices bump-bonded to pixel readout electronics, and by simulations. Applications include high-luminosity tracking in the high-multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale.

  18. An object-oriented 3D integral data model for digital city and digital mine

    Wu, Lixin; Wang, Yanbing; Che, Defu; Xu, Lei; Chen, Xuexi; Jiang, Yun; Shi, Wenzhong

    2005-10-01

    With the rapid development of urban areas, city space has extended from the surface to the subsurface. As an important data source for the representation of city spatial information, 3D city spatial data have the characteristics of multiple objects, heterogeneity and multiple structures. Referring to the ground surface, they can be classified into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems is naturally divided into two branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes the 3D visualization of buildings and the city terrain, while the latter emphasizes the visualization of geological bodies and structures. For city planning and construction it is extremely important to integrate all the city spatial information, including above-surface, surface and subsurface objects, for integral analysis and spatial manipulation. However, neither 3D CGIS nor 3DGM can currently realize such integration, integral analysis and spatial manipulation. Considering 3D spatial modeling theory and methodologies, an object-oriented 3D integral spatial data model (OO3D-ISDM) is presented and implemented in software. The model integrates geographical objects, surface buildings and geological objects seamlessly, with a TIN as their coupling interface. This paper introduces the conceptual model of OO3D-ISDM, which is comprised of 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any of the represented objects, no matter whether surface buildings, terrain or subsurface objects, can be described with the basic geometry element, i.e. the triangle. So the 3D spatial objects, surface buildings, terrain and geological objects can be
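
    A hypothetical sketch of the element/primitive split described above, written as plain data classes; the field names are illustrative and not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class Segment:
    start: int          # indices into a shared vertex table
    end: int

@dataclass
class Triangle:
    vertices: Tuple[int, int, int]

@dataclass
class GTP:
    """Generalized tri-prism: an upper and a lower triangle, used for
    volumetric (geological) objects between two sub-surfaces."""
    top: Triangle
    bottom: Triangle

@dataclass
class Face:
    """A surface spatial element (terrain patch or building facade) built
    from triangles, which also serve as the TIN coupling interface."""
    object_id: str
    triangles: List[Triangle] = field(default_factory=list)

@dataclass
class Body:
    """A subsurface spatial element, e.g. a geological body built from GTPs."""
    object_id: str
    prisms: List[GTP] = field(default_factory=list)
```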

  19. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    Chien-Ho Ko

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the alg...
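
    A hedged sketch of the kind of hybrid the record names (simulated annealing combined with gradient descent) applied to range-based localization; the cost function, cooling schedule, and parameters are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def locate_tag(anchors, distances, iters=2000, step=0.01, t0=1.0, seed=0):
    """Estimate a tag position by minimising squared range residuals with
    gradient descent, plus annealed random proposals that are accepted only
    when they lower the cost (a greedy simplification of SA).
    """
    rng = np.random.default_rng(seed)
    anchors = np.asarray(anchors, float)
    distances = np.asarray(distances, float)

    def cost(p):
        return np.sum((np.linalg.norm(anchors - p, axis=1) - distances) ** 2)

    def grad(p):
        diff = p - anchors
        est = np.linalg.norm(diff, axis=1) + 1e-9
        return 2.0 * np.sum(((est - distances) / est)[:, None] * diff, axis=0)

    p = anchors.mean(axis=0)                       # start at the anchor centroid
    best_p, best_c = p.copy(), cost(p)
    for k in range(iters):
        p = p - step * grad(p)                     # gradient descent step
        temp = t0 * (1.0 - k / iters)              # linear cooling schedule
        cand = p + rng.normal(scale=temp, size=3)  # annealed random proposal
        if cost(cand) < cost(p):
            p = cand
        if cost(p) < best_c:
            best_p, best_c = p.copy(), cost(p)
    return best_p

# Example: four readers and noiseless distances to a tag at (2, 3, 1).
A = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
true = np.array([2.0, 3.0, 1.0])
print(locate_tag(A, np.linalg.norm(A - true, axis=1)))
```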

  20. Multi-layer 3D imaging using a few viewpoint images and depth map

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens stacked in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image generated from the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype of two stacked LCDs, we confirmed that three viewpoint images were enough to make multi-layer images that display a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so that we can generate motion parallax at the same time.
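
    The "Shift and Subtraction" iteration above is described only at a high level, so the following Python sketch illustrates one way an alternating layer update could look for a strongly simplified additive two-layer model; the additive assumption, the integer-pixel parallax shifts and every name in the code are illustrative choices of this sketch, not details taken from the cited paper.

    import numpy as np

    def two_layer_shift_subtract(views, shifts, n_iter=20):
        """Decompose viewpoint images into two additive layers.

        views  : list of 2-D arrays (one per viewpoint), values in [0, 1]
        shifts : integer horizontal parallax offsets (pixels) of the front
                 layer for each viewpoint; the rear layer is assumed fixed.
        Both the additive model and these parameters are assumptions made
        for illustration, not the method of the cited paper.
        """
        h, w = views[0].shape
        front = np.zeros((h, w))
        rear = np.mean(views, axis=0)   # rough initial solution (cf. the depth-map-based init)
        for _ in range(n_iter):
            # update the front layer from the residuals (view - rear), shifted back
            est = [np.roll(v - rear, -s, axis=1) for v, s in zip(views, shifts)]
            front = np.clip(np.mean(est, axis=0), 0.0, 1.0)
            # update the rear layer from the residuals (view - shifted front)
            est = [v - np.roll(front, s, axis=1) for v, s in zip(views, shifts)]
            rear = np.clip(np.mean(est, axis=0), 0.0, 1.0)
        return front, rear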

  1. Interlayer Simplified Depth Coding for Quality Scalability on 3D High Efficiency Video Coding

    Mengmeng Zhang

    2014-01-01

    A quality scalable extension design is proposed for the upcoming 3D video on the emerging standard for High Efficiency Video Coding (HEVC). A novel interlayer simplified depth coding (SDC) prediction tool is added to reduce the number of bits needed for depth map representation by exploiting the correlation between coding layers. To further improve coding performance, the coded prediction quadtree and texture data from corresponding SDC-coded blocks in the base layer can be used in interlayer simplified depth coding. In the proposed design, the multiloop decoder solution is also extended to the proposed scalable scenario for texture views and depth maps, and is achieved by the interlayer texture prediction method. The experimental results indicate that an average Bjøntegaard Delta bitrate decrease of 54.4% is gained by the interlayer simplified depth coding prediction tool with the multiloop decoder solution, compared with simulcast. Consequently, the significant rate savings confirm that the proposed method achieves better performance.

  2. An Overview of 3D Topology for LADM-Based Objects

    Zulkifli, N. A.; Rahman, A. A.; van Oosterom, P.

    2015-10-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological models are compared on several main aspects (e.g. space or plane partition, primitives used, constructive rules, orientation, and explicit or implicit relationships). The most suitable 3D topological model depends on the type of application it is used for. There is no single 3D topology model best suited to all types of applications. Therefore, it is very important to define the requirements of the 3D topology model. The context of this paper is a 3D topology for LADM-based objects.

  3. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.
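
    As a concrete illustration of the kind of depth-data fusion and road/obstacle labelling described above, the sketch below merges the two point sets into one cloud and labels each point by its height over a crudely fitted ground plane. The plane-fitting heuristic, the 0.15 m height threshold and all function names are assumptions of this sketch, not the feature-extraction and classification pipeline of the cited paper.

    import numpy as np

    def classify_road_obstacle(laser_pts, kinect_pts, height_thresh=0.15):
        """Fuse two (N, 3) point sets given in a common metric frame and label points.

        The ground is approximated by a least-squares plane fitted to the lowest
        30 % of points; anything more than `height_thresh` above it is labelled
        an obstacle.  This is a generic illustration of sensor fusion on depth
        data, not the exact algorithm of the cited paper.
        """
        pts = np.vstack([laser_pts, kinect_pts])
        low = pts[pts[:, 2] <= np.percentile(pts[:, 2], 30)]
        A = np.c_[low[:, 0], low[:, 1], np.ones(len(low))]      # fit z = a*x + b*y + c
        coef, *_ = np.linalg.lstsq(A, low[:, 2], rcond=None)
        ground_z = pts[:, 0] * coef[0] + pts[:, 1] * coef[1] + coef[2]
        labels = np.where(pts[:, 2] - ground_z > height_thresh, "obstacle", "road")
        return pts, labels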

  4. Constructing Isosurfaces from 3D Data Sets Taking Account of Depth Sorting of Polyhedra

    周勇; 唐泽圣

    1994-01-01

    Creating and rendering intermediate geometric primitives is one of the approaches to visualize data sets in 3D space. Some algorithms have been developed to construct isosurfaces from uniformly distributed 3D data sets. These algorithms assume that the function value varies linearly along the edges of each cell. But for irregular 3D data sets, this assumption is inapplicable. Moreover, the depth sorting of cells is more complicated for irregular data sets, and it is indispensable for generating isosurface images or semitransparent isosurface images if the Z-buffer method is not adopted. In this paper, isosurface models based on the assumption that the function value has a nonlinear distribution within a tetrahedron are proposed. A depth sorting algorithm and data structures are developed for irregular data sets in which cells may be subdivided into tetrahedra. The implementation issues of this algorithm are discussed and experimental results are shown to illustrate the potential of this technique.
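
    For readers unfamiliar with why cell depth sorting matters when the Z-buffer is not used, the sketch below shows the common painter-style approximation of ordering tetrahedra by the depth of their centroids along the viewing direction. This is only an approximation (it can fail for cyclic overlaps, which is part of why the paper develops a dedicated sorting algorithm and data structures), and the function and argument names are this sketch's own.

    import numpy as np

    def sort_tetrahedra_back_to_front(vertices, tets, view_dir):
        """Return tetrahedron indices ordered far-to-near along `view_dir`.

        vertices : (V, 3) array of vertex coordinates
        tets     : (T, 4) array of vertex indices per tetrahedron
        view_dir : unit vector pointing from the scene towards the viewer
        """
        centroids = vertices[tets].mean(axis=1)      # (T, 3) centroid per cell
        depth = centroids @ np.asarray(view_dir)     # signed distance along the view axis
        return np.argsort(depth)                     # smallest (farthest) first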

  5. IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER

    S. Kanai

    2015-05-01

    Effective and accurate localization in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and a few 3D MCL approaches have recently been proposed. However, their localization accuracy and efficiency still remain at an unsatisfactory level (a few hundred millimetres of error at up to a few FPS), or have not been fully verified against a precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used for creating a precise 3D mesh model as the environment map, and a professional-level depth camera is installed as the outer sensor. GPU scene simulation is also introduced to increase the speed of the prediction phase in MCL. Moreover, for further improvement, GPGPU programming is implemented to realize a further speed-up of the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on the observations from an inertia sensor. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertia sensor-based algorithm reduced the localization error to a median of 47 mm even with a smaller number of particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
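
    The predict/weight/resample cycle that the abstract builds on can be summarized in a few lines. The sketch below is a deliberately simplified Monte Carlo Localization step for position-only particles; the Gaussian range likelihood, the noise parameters and the `measure_fn` interface are assumptions of this sketch, whereas the cited system scores particles against GPU-rendered depth images of the laser-scanned mesh map.

    import numpy as np

    rng = np.random.default_rng(0)

    def mcl_step(particles, control, measure_fn, observation,
                 motion_sigma=0.02, meas_sigma=0.05):
        """One Monte Carlo Localization update for (N, 3) position particles.

        control     : commanded displacement, shape (3,)
        measure_fn  : maps a particle pose to a predicted measurement vector
                      (the cited system uses a GPU depth-image rendering here)
        observation : the actual measurement vector from the depth camera
        All distributions and parameter values are illustrative assumptions.
        """
        # 1. predict: apply the control input with additive motion noise
        particles = particles + control + rng.normal(0.0, motion_sigma, particles.shape)
        # 2. weight: Gaussian likelihood of the observation given each particle
        errors = np.array([np.linalg.norm(measure_fn(p) - observation) for p in particles])
        weights = np.exp(-0.5 * (errors / meas_sigma) ** 2) + 1e-300
        weights /= weights.sum()
        # 3. resample: draw particles in proportion to their weights
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]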

  6. Temporal-spatial modeling of fast-moving and deforming 3D objects

    Wu, Xiaoliang; Wei, Youzhi

    1998-09-01

    This paper gives a brief description of the method and techniques developed for the modeling and reconstruction of fast moving and deforming 3D objects. A new approach using close-range digital terrestrial photogrammetry in conjunction with high speed photography and videography is proposed. A sequential image matching method (SIM) has been developed to automatically process pairs of images taken continuously of any fast moving and deforming 3D objects. Using the SIM technique a temporal-spatial model (TSM) of any fast moving and deforming 3D objects can be developed. The TSM would include a series of reconstructed surface models of the fast moving and deforming 3D object in the form of 3D images. The TSM allows the 3D objects to be visualized and analyzed in sequence. The SIM method, specifically the left-right matching and forward-back matching techniques are presented in the paper. An example is given which deals with the monitoring of a typical blast rock bench in a major open pit mine in Australia. With the SIM approach and the TSM model it is possible to automatically and efficiently reconstruct the 3D images of the blasting process. This reconstruction would otherwise be impossible to achieve using a labor intensive manual processing approach based on 2D images taken from conventional high speed cameras. The case study demonstrates the potential of the SIM approach and the TSM for the automatic identification, tracking and reconstruction of any fast moving and deforming 3D targets.

  7. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    J. Javier Yebes; Bergasa, Luis M.; Miguel García-Garrido

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban sce...

  8. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention driven by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It shows high information transfer rates and takes only a few minutes for users to control the BCI system, while only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  9. Visualization of the ROOT 3D class objects with OpenInventor-like viewers

    The class library for converting ROOT 3D class objects to the .iv format for 3D image viewers is described in this paper. So far the library has been tested using the STAR and ATLAS detector geometries without any changes or revisions for a concrete detector

  10. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with hands on 3D meshes. Deformations are done using different modes of interaction that we detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of the work.

  11. Novel 3-D Object Recognition Methodology Employing a Curvature-Based Histogram

    Liang-Chia Chen

    2013-07-01

    In this paper, a new object recognition algorithm employing a curvature-based histogram is presented. Recognition of three-dimensional (3-D) objects from range images remains one of the most challenging problems in 3-D computer vision due to noisy and cluttered scene characteristics. The key breakthroughs for this problem mainly lie in defining unique features that discriminate among various 3-D objects. In our approach, an object detection scheme is developed to identify targets through an automated search of the range images, using an initial object segmentation process to subdivide all possible objects in the scene, followed by object recognition based on geometric constraints and a curvature-based histogram. The developed method has been verified through experimental tests confirming its feasibility.
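
    The descriptor idea behind the approach, estimating a curvature-like value per point, histogramming it, and comparing histograms between objects, can be sketched as follows. Here curvature is approximated by the PCA "surface variation" over the k nearest neighbours of each point, which is a stand-in assumption for the curvature definition actually used in the cited paper, and the chi-square comparison is likewise an illustrative choice.

    import numpy as np

    def curvature_histogram(points, k=20, bins=16):
        """Histogram of a per-point curvature proxy for an (N, 3) point cloud.

        Curvature is approximated by the PCA surface variation
        lambda_min / (lambda_1 + lambda_2 + lambda_3) over k nearest
        neighbours -- an assumption standing in for the paper's definition.
        """
        curv = np.empty(len(points))
        for i, p in enumerate(points):
            d = np.linalg.norm(points - p, axis=1)
            nbrs = points[np.argsort(d)[:k]]
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
            w = np.linalg.eigvalsh(cov)                  # ascending eigenvalues
            curv[i] = w[0] / max(w.sum(), 1e-12)
        hist, _ = np.histogram(curv, bins=bins, range=(0.0, 1.0 / 3.0))
        return hist / max(hist.sum(), 1e-12)

    def histogram_distance(h1, h2):
        """Chi-square distance between two normalised histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))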

  12. 3D reconstruction in PET cameras with irregular sampling and depth of interaction

    We present 3D reconstruction algorithms that address fully 3D tomographic reconstruction using septa-less, stationary, and rectangular cameras. The field of view (FOV) encompasses the entire volume enclosed by detector modules capable of measuring depth of interaction (DOI). The Filtered Backprojection based algorithms incorporate DOI, accommodate irregular sampling, and minimize interpolation in the data by defining lines of response between the measured interaction points. We use fixed-width, evenly spaced radial bins in order to use the FFT, but use irregular angular sampling to minimize the number of unnormalizable zero efficiency sinogram bins. To address persisting low efficiency bins, we perform 2D nearest neighbor radial smoothing, employ a semi-iterative procedure to estimate the unsampled data, and mash the ''in plane'' and the first oblique projections to reconstruct the 2D image in the 3DRP algorithm. We present artifact-free, essentially spatially isotropic images of Monte Carlo data with FWHM resolutions of 1.50 mm, 2.25 mm, and 3.00 mm at the center, in the bulk, and at the edges and corners of the FOV, respectively.

  13. Plasma penetration depth and mechanical properties of atmospheric plasma-treated 3D aramid woven composites

    Three-dimensional aramid woven fabrics were treated with atmospheric pressure plasmas, on one side or on both sides, to determine the plasma penetration depth in the 3D fabrics and its influence on the final composite mechanical properties. The properties of the fibers from different layers of the single-side-treated fabrics, including surface morphology, chemical composition, wettability and adhesion properties, were investigated using scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), contact angle measurement and microbond tests. Meanwhile, the flexural properties of the composites reinforced with fabrics untreated and treated on both sides were compared using three-point bending tests. The results showed that the fibers from the outermost surface layer of the fabric had a significant improvement in their surface roughness, chemical bonding, wettability and adhesion properties after plasma treatment; the treatment effect gradually diminished for the fibers in the inner layers. In the third layer, the fiber properties remained approximately the same as those of the control. In addition, three-point bending tests indicated that the 3D aramid composite had an increase of 11% in flexural strength and 12% in flexural modulus after the plasma treatment. These results indicate that composite mechanical properties can be improved by direct fabric treatment with plasmas instead of fiber treatment if the fabric is less than four layers thick.

  14. Im2Fit: Fast 3D Model Fitting and Anthropometrics using Single Consumer Depth Camera and Synthetic Data

    Wang, Qiaosong; Jagadeesh, Vignesh; Ressler, Bryan; Piramuthu, Robinson

    2014-01-01

    Recent advances in consumer depth sensors have created many opportunities for human body measurement and modeling. Estimation of 3D body shape is particularly useful for fashion e-commerce applications such as virtual try-on or fit personalization. In this paper, we propose a method for capturing accurate human body shape and anthropometrics from a single consumer grade depth sensor. We first generate a large dataset of synthetic 3D human body models using real-world body size distributions. ...

  15. Intuitiveness 3D objects Interaction in Augmented Reality Using S-PI Algorithm

    Ajune Wanis Ismail

    2013-07-01

    A number of researchers have developed interaction techniques for Augmented Reality (AR) applications. Some of them have proposed new techniques for user interaction with different types of interfaces, which hold great promise for naturally intuitive user interaction with 3D data. This paper explores 3D object manipulation performed with the single-point interaction (S-PI) technique in an AR environment. The new interaction algorithm, the S-PI technique, uses point-based intersection designed to detect interaction behaviors such as translate, rotate and clone for intuitive 3D object handling. The S-PI technique is proposed together with marker-based tracking in order to improve the trade-off between accuracy and speed in manipulating 3D objects in real time. The method is robust, ensuring that real and virtual elements can be combined relative to the user's viewpoint while reducing system lag.

  16. The role of the foreshortening cue in the perception of 3D object slant.

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (change in the ratio of width to length) can induce slant perception for single shapes for both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performance similar to that obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and the cortical organization used in 3D object perception. PMID:24216007

  17. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  18. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    ZHONG DengHua; LI MingChao; LIU Jie

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  19. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    Wang, Lei

    2014-01-01

    Conventional 3D printing is generally time-consuming, and printable metal inks are rather limited. As an alternative, we proposed liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning from one and two dimensions to three dimensions and more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in a conventional 3D printing, liquid-phase manufacturing offers a much higher cooling rate and thus significantly improves the speed of fabricating metal objects. This unique strategy also efficiently prevents the liquid metal inks from air oxidation, which is hard to avoid otherwise in an ordinary 3D printing. Several key physical factors (such as properties of the cooling fluid, injection speed and needle diameter, and the type and properties of the printing ink) were disclosed which evidently affect the printing quality. In addit...

  20. The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking

    Quesada, Luis

    2011-01-01

    3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on 3D motion tracking. 3D motion tracking systems that require no knowledge of the target object and run on a single low-budget camera require estimations of the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and we present a novel 3D motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self occluded, or motion blurred, given that it is opaque, evenly colored, and enough contrasting with t...

  1. 3D photography in the objective analysis of volume augmentation including fat augmentation and dermal fillers.

    Meier, Jason D; Glasgold, Robert A; Glasgold, Mark J

    2011-11-01

    The authors present quantitative and objective 3D data from their studies showing long-term results with facial volume augmentation. The first study analyzes fat grafting of the midface and the second study presents augmentation of the tear trough with hyaluronic filler. Surgeons using 3D quantitative analysis can learn the duration of results and the optimal amount to inject, as well as showing patients results that are not demonstrable with standard, 2D photography. PMID:22004863

  2. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    Wang, Lei; Jing LIU

    2014-01-01

    Conventional 3D printing is generally time-consuming, and printable metal inks are rather limited. As an alternative, we proposed liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning from one and two dimensions to three dimensions and more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in a conventional...

  3. Web based Interactive 3D Learning Objects for Learning Management Systems

    Stefan Hesse; Stefan Gumhold

    2012-01-01

    In this paper, we present an approach to create and integrate interactive 3D learning objects of high quality for higher education into a learning management system. The use of these resources allows the visualization of topics such as electro-technical and physical processes in the interior of complex devices. This paper addresses the challenge of combining rich interactivity and adequate realism with 3D exercise material for distance e-learning.

  4. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Koeva, M. N.

    2016-01-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interact...

  5. 3D Imaging of Dielectric Objects Buried under a Rough Surface by Using CSI

    Evrim Tetik

    2015-01-01

    A 3D scalar electromagnetic imaging of dielectric objects buried under a rough surface is presented. The problem has been treated as a 3D scalar problem for computational simplicity, as a first step towards the 3D vector problem. The complexity of the background in which the object is buried is simplified by obtaining Green's function of its background, which consists of two homogeneous half-spaces and a rough interface between them, by using the Buried Object Approach (BOA). Green's function of the two-part space with a planar interface is obtained to be used in the process. Reconstruction of the location, shape, and constitutive parameters of the objects is achieved by the Contrast Source Inversion (CSI) method with conjugate gradient. The scattered field data used in the inverse problem are obtained via both the Method of Moments (MoM) and the Comsol Multiphysics pressure acoustics model.

  6. 3D Projection on Physical Objects: Design Insights from Five Real Life Cases

    Dalsgaard, Peter; Halskov, Kim

    2011-01-01

    3D projection on physical objects is a particular kind of Augmented Reality that augments a physical object by projecting digital content directly onto it, rather than by using a mediating device, such as a mobile phone or a head-mounted display. In this paper, we present five cases in which we have developed installations that employ 3D projection on physical objects. The installations have been developed in collaboration with external partners and have been put into use in real-life settings such as museums, exhibitions and interaction design laboratories. On the basis of these cases, we...

  7. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, it has become necessary to develop systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wireframe model captured by a 3D digitizer are also presented.

  8. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe.

    Harris, EJ; Miller, NR; Bamber, JC; Symonds-Tayler, JR; Evans, PM

    2011-01-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogenous tissue for motion in the azimuthal and elevatio...

  9. Ultrasonic cleaning of 3D printed objects and Cleaning Challenge Devices

    Verhaagen, Bram; Zanderink, Thijs; Fernandez Rivas, David

    2016-01-01

    We report our experiences in the evaluation of ultrasonic cleaning processes of objects made with additive manufacturing techniques, specifically three-dimensional (3D) printers. These objects need to be cleaned of support material added during the printing process. The support material can be remov

  10. Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration

    Kriegel, Simon

    2015-01-01

    The thesis Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration presents an approach for efficient model generation of small-scale objects applying a robot-sensor system. Active scene exploration incorporates object recognition methods for analyzing a scene of partially known objects as well as exploration approaches for autonomous modeling of unknown parts. Here, recognition, exploration, and planning methods are extended and combined in a single scene exploration system, e...

  11. Accurate 3D shape measurement of multiple separate objects with stereo vision

    3D shape measurement has emerged as a very useful tool in numerous fields because of its wide and ever-increasing range of applications. In this paper, we present a passive, fast and accurate 3D shape measurement technique using a stereo vision approach. The technique first employs a scale-invariant feature transform algorithm to detect point matches at a number of discrete locations despite the discontinuities in the images. Then an automated image registration algorithm is applied to find full-field point matches with subpixel accuracy. After that, the 3D shapes of the objects can be reconstructed from the obtained point matches and the camera information. The proposed technique is capable of performing full-field 3D shape measurement with high accuracy even in the presence of discontinuities and multiple separate regions. The validity is verified by experiments.
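
    A compact version of the sparse part of such a pipeline, feature matching followed by triangulation with known camera matrices, can be written with OpenCV as below. The 3x4 projection matrices P1 and P2 are assumed to come from a prior calibration, and the dense, subpixel image-registration stage of the cited technique is intentionally omitted; this is a sketch of the general stereo approach, not the authors' implementation.

    import cv2
    import numpy as np

    def stereo_reconstruct(img_left, img_right, P1, P2, ratio=0.75):
        """Sparse 3D reconstruction from one grayscale stereo pair.

        P1, P2 : 3x4 projection matrices from a prior calibration (assumed given).
        Returns an (N, 3) array of triangulated points.
        """
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_left, None)
        kp2, des2 = sift.detectAndCompute(img_right, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < ratio * m[1].distance]

        pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2 x N
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T
        pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)          # 4 x N homogeneous
        return (pts4d[:3] / pts4d[3]).T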

  12. Generation of geometric representations of 3D objects in CAD/CAM by digital photogrammetry

    Li, Rongxing

    This paper presents a method for the generation of geometric representations of 3D objects by digital photogrammetry. In CAD/CAM systems geometric modelers are usually used to create three-dimensional (3D) geometric representations for design and manufacturing purposes. However, in cases where geometric information such as dimensions and shapes of objects are not available, measurements of physically existing objects become necessary. In this paper, geometric parameters of primitives of 3D geometric representations such as Boundary Representation (B-rep), Constructive Solid Geometry (CSG), and digital surface models are determined by digital image matching techniques. An algorithm for reconstruction of surfaces with discontinuities is developed. Interfaces between digital photogrammetric data and these geometric representations are realized. This method can be applied to design and manufacturing in mechanical engineering, automobile industry, robot technology, spatial information systems and others.

  13. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core–shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  14. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices. PMID:27454835

  15. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
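
    The combination of simulated annealing (for a coarse, stable global search) and gradient descent (for local error reduction) that the abstract describes can be sketched for range-based positioning as follows. The squared range-error cost, the cooling schedule and all step sizes are illustrative assumptions, not the tuning of the cited system, and `anchors` here stands for reference points with known coordinates.

    import numpy as np

    rng = np.random.default_rng(1)

    def locate_tag(anchors, ranges, n_sa=2000, n_gd=200, lr=0.05):
        """Estimate a 3D position from anchor coordinates (M, 3) and measured ranges (M,)."""
        def cost(p):
            return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

        # --- simulated annealing: random-walk proposals under a cooling temperature
        p = anchors.mean(axis=0)
        c = cost(p)
        best, best_c = p.copy(), c
        for i in range(n_sa):
            T = 1.0 * (0.995 ** i)
            q = p + rng.normal(0.0, 0.2, 3)
            cq = cost(q)
            if cq < c or rng.random() < np.exp((c - cq) / max(T, 1e-9)):
                p, c = q, cq
                if c < best_c:
                    best, best_c = p.copy(), c

        # --- gradient descent refinement of the squared range error
        p = best
        for _ in range(n_gd):
            d = np.linalg.norm(anchors - p, axis=1) + 1e-12
            grad = np.sum((2.0 * (d - ranges) / d)[:, None] * (p - anchors), axis=0)
            p = p - lr * grad
        return p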

  16. 3D-Web-GIS RFID location sensing system for construction objects.

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821

  17. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogenous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the sweep direction of the transducer on tracking accuracy. We imaged a homogenous ultrasound speckle phantom whilst moving the probe with linear motion at speeds of 0–35 mm s⁻¹. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed, and tracking failure was observed at speeds greater than 14 mm s⁻¹. Tracking failure was attributed to speckle de-correlation as a result of the decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, with the swept probe technology, speckle tracking accuracy is currently too poor to track homogenous tissue

  18. Digital Curvatures Applied to 3D Object Analysis and Recognition: A Case Study

    Chen, Li

    2009-01-01

    In this paper, we propose using curvatures in digital space for 3D object analysis and recognition. Since direct adjacency yields only six types of digital surface points in local configurations, it is easy to determine and classify the discrete curvatures for every point on the boundary of a 3D object. Unlike the boundary simplicial decomposition (triangulation), the curvature can take any real value. This sometimes makes it difficult to find the right threshold value. This paper focuses on the global properties of categorizing curvatures for small regions. We apply both digital Gaussian curvatures and digital mean curvatures to 3D shapes. This paper proposes a multi-scale method for 3D object analysis and a vector method for 3D similarity classification. We use these methods for face recognition and shape classification. We have found that the Gaussian curvatures mainly describe the global features and average characteristics, such as the five regions of a human face. However, mean curvatures can be used to find ...

  19. 3D high-efficiency video coding for multi-view video and depth data.

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter, and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  20. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  1. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  2. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from its arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed as the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. Affine invariant property of the convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
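
    The growing step described above, giving every skeleton point a radius equal to its distance-field value and filling the corresponding spheres, can be voxelised in a few lines. The grid resolution, the axis-aligned bounding box and the function name below are assumptions of this sketch rather than details of the cited method.

    import numpy as np

    def fill_skeleton_spheres(skeleton_pts, boundary_pts, grid_shape=(64, 64, 64)):
        """Voxelise a shape as the union of spheres grown from its curve-skeleton.

        Each skeleton point gets a radius equal to its smallest distance to the
        boundary points, mirroring the growing process in the abstract.
        """
        lo, hi = boundary_pts.min(axis=0), boundary_pts.max(axis=0)
        axes = [np.linspace(lo[d], hi[d], grid_shape[d]) for d in range(3)]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        volume = np.zeros(grid_shape, dtype=bool)
        for s in skeleton_pts:
            r = np.min(np.linalg.norm(boundary_pts - s, axis=1))   # distance-field value
            volume |= (X - s[0]) ** 2 + (Y - s[1]) ** 2 + (Z - s[2]) ** 2 <= r ** 2
        return volume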

  3. Fast error simulation of optical 3D measurements at translucent objects

    Lutzke, P.; Kühmstedt, P.; Notni, G.

    2012-09-01

    The scan results of optical 3D measurements of translucent objects deviate from the real object surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is concentrated predominantly in the specular direction and can only be observed from a point in that direction. Thus the separation either leads to measurement results that only create data for near-specular directions or provides data from areas that are not well separated. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to enhance the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended into an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte-Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt at in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.

  4. Printing of metallic 3D micro-objects by laser induced forward transfer.

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop on demand transfer of molten, femto-liter, metal droplets with a high jetting directionality. Such small volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high aspect ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed. PMID:26832524

  5. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development worldwide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still needs further research. Researc...

  6. Steady-state particle tracking in the object-oriented regional groundwater model ZOOMQ3D

    Jackson, C.R.

    2002-01-01

    This report describes the development of a steady-state particle tracking code for use in conjunction with the object-oriented regional groundwater flow model, ZOOMQ3D (Jackson, 2001). Like the flow model, the particle tracking software, ZOOPT, is written using an object-oriented approach to promote its extensibility and flexibility. ZOOPT enables the definition of steady-state pathlines in three dimensions. Particles can be tracked in both the forward and reverse directions en...

  7. Visual object tracking in 3D with color based particle filter

    Barrera González, Pablo; Matellán Olivera, Vicente; Cañas, José María

    2005-01-01

    This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, but not any precise motion model. The observation model we have developed avoids the color filtering of the entire image. That and the Monte Carlo techniques inside the part...
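
    The observation model alluded to above, scoring a particle by comparing a reference colour histogram against the histogram of a small window around the particle so that only a fraction of the frame is ever examined, can be sketched as below. The Bhattacharyya-coefficient likelihood and the window size are common choices and are assumptions with respect to the cited paper.

    import numpy as np

    def color_histogram(patch, bins=8):
        """Normalised joint RGB histogram of an (h, w, 3) uint8 image patch."""
        hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
        return hist.ravel() / max(hist.sum(), 1e-12)

    def weight_particles(image, particles, ref_hist, half=10):
        """Weight (N, 2) image-plane particles (x, y pixel coords) by colour similarity."""
        h, w = image.shape[:2]
        weights = np.zeros(len(particles))
        for i, (x, y) in enumerate(particles.astype(int)):
            x0, x1 = max(x - half, 0), min(x + half, w)
            y0, y1 = max(y - half, 0), min(y + half, h)
            if x1 <= x0 or y1 <= y0:
                continue                                    # particle fell outside the image
            hist = color_histogram(image[y0:y1, x0:x1])
            weights[i] = np.sum(np.sqrt(hist * ref_hist))   # Bhattacharyya coefficient
        s = weights.sum()
        return weights / s if s > 0 else np.full(len(particles), 1.0 / len(particles))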

  8. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Gangyi Jiang

    2014-04-01

    Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment (OSIQA) method, organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score), and the correlation coefficient and monotonicity exceed 0.92 under five types of distortion: Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  9. The potential of 3D techniques for cultural heritage object documentation

    Bitelli, Gabriele; Girelli, Valentina A.; Remondino, Fabio; Vittuari, Luca

    2007-01-01

    The generation of 3D models of objects has become an important research point in many fields of application, such as industrial inspection, robotics, navigation and body scanning. Recently, techniques for generating photo-textured 3D digital models have also attracted interest in the field of Cultural Heritage, due to their capability to combine high-precision metrical information with a qualitative and photographic description of the objects. In fact this kind of product is a fundamental support for documentation, study and restoration of works of art, up to the production of replicas by rapid prototyping techniques. Close-range photogrammetric techniques are nowadays more and more frequently used for the generation of precise 3D models. With the advent of automated procedures and fully digital products in the 1990s, photogrammetry has become easier to use and cheaper, and nowadays a wide range of commercial software is available to calibrate, orient and reconstruct objects from images. This paper presents the complete process for the derivation of a photorealistic 3D model of an important basalt stela (about 70 x 60 x 25 cm) discovered at the archaeological site of Tilmen Höyük, in Turkey, dating back to the 2nd millennium BC. We report the modeling performed using passive and active sensors and the comparison of the achieved results.

  10. Object-shape recognition and 3D reconstruction from tactile sensor images.

    Khasnobish, Anwesha; Singh, Garima; Jati, Arindam; Konar, Amit; Tibarewala, D N

    2014-04-01

    This article presents a novel approach to edged and edgeless object-shape recognition and 3D reconstruction from gradient-based analysis of tactile images. We recognize an object's shape by visualizing a surface topology in our mind while grasping the object in our palm, also drawing on our past experience of exploring similar kinds of objects. The proposed hybrid recognition strategy works in a similar way, in two stages. In the first stage, conventional object-shape recognition using a linear support vector machine classifier is performed, where regional descriptor features are extracted from the tactile image. A 3D shape reconstruction is also performed depending on whether edged or edgeless objects are classified from the tactile images. In the second stage, the hybrid recognition scheme utilizes a feature set comprising both the previously obtained regional descriptor features and some gradient-related information from the reconstructed object-shape image for the final recognition into the corresponding four classes of objects, viz. planar, one-edged, two-edged and cylindrical objects. The hybrid strategy achieves 97.62 % classification accuracy, while the conventional recognition scheme reaches only 92.60 %. Moreover, the proposed algorithm has been proved to be less noise prone and more statistically robust. PMID:24469960

  11. On 3D simulation of moving objects in a digital earth system

    2008-01-01

    "How do the rescue helicopters find out an optimized path to arrive at the site of a disaster as soon as possible?" or "How are the flight procedures over mountains and plateaus simulated?" and so on.In this paper a script language on spatial moving objects is presented by abstracting 3D spatial moving objects’ behavior when implementing moving objects simulation in 3D digital Earth scene,which is based on a platform of digital China named "ChinaStar".The definition of this script language,its morphology and syntax,its compiling and mediate language generating,and the behavior and state control of spatial moving objects are discussed emphatically.In addition,the language’s applications and implementation are also discussed.

  12. Full-viewpoint 3D Space Object Recognition Based on Kernel Locality Preserving Projections

    Meng Gang; Jiang Zhiguo; Liu Zhengyi; Zhang Haopeng; Zhao Danpei

    2010-01-01

    Space object recognition plays an important role in spatial exploitation and surveillance, and it faces two main problems: lack of data and drastic changes in viewpoint. In this article, we first build a three-dimensional (3D) satellite dataset named the BUAA Satellite Image Dataset (BUAA-SID 1.0) to supply data for 3D space object research. Then, based on this dataset, we propose to recognize full-viewpoint 3D space objects using kernel locality preserving projections (KLPP). To obtain a more accurate and separable description of the objects, we first build feature vectors employing moment invariants, Fourier descriptors, region covariance and histograms of oriented gradients. Then, we map the features into a kernel space, followed by dimensionality reduction using KLPP to obtain the submanifold of the features. Finally, k-nearest neighbor (kNN) classification is used to accomplish the recognition. Experimental results show that the proposed approach is well suited for space object recognition, mainly with respect to changes of viewpoint. Encouraging recognition rates are obtained on images in BUAA-SID 1.0, with the highest reaching 95.87%.
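
    A rough pipeline sketch of the feature-reduction-plus-kNN chain described above, under stated assumptions: scikit-learn has no KLPP implementation, so KernelPCA stands in for the kernel-space dimensionality reduction, and the moment/Fourier/covariance/HOG features are assumed to be precomputed and concatenated into X (n_samples x n_features).

    ```python
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def build_space_object_classifier(n_components=32, k=5):
        # KernelPCA is only a stand-in for KLPP; both reduce dimensionality in kernel space.
        return make_pipeline(
            KernelPCA(n_components=n_components, kernel="rbf"),
            KNeighborsClassifier(n_neighbors=k),
        )

    # Usage sketch: clf = build_space_object_classifier()
    # clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
    ```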

  13. From 2D Silhouettes to 3D Object Retrieval: Contributions and Benchmarking

    Napoléon Thibault

    2010-01-01

    Full Text Available 3D retrieval has recently emerged as an important boost for 2D search techniques. This is mainly due to its several complementary aspects, for instance, enriching views in 2D image datasets, overcoming occlusion and serving many real-world applications such as photography, art, archeology, and geolocalization. In this paper, we introduce a complete "2D photography to 3D object" retrieval framework. Given a (collection of) picture(s) or sketch(es) of the same scene or object, the method allows us to retrieve the underlying similar objects in a database of 3D models. The contributions of our method include (i) a generative approach for alignment able to find canonical views consistently across scenes/objects and (ii) the application of an efficient but effective matching method used for ranking. The results are reported on the Princeton Shape Benchmark and through the SHREC benchmarking consortium, evaluated/compared by a third party. In the two gallery sets, our framework achieves very encouraging performance and outperforms the other runs.

  14. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subject to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shape of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBCs surfaces due to adhesion on the glass-substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine volume, superficial area, sphericity index and RBCs refractive index for each osmotic condition.

  15. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Sorin M. Grigorescu

    2013-04-01

    Full Text Available In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch‐based tracking system and a 2D‐3D point features tracker. The real‐time visual information is composed of RGB‐D data streams acquired from state‐of‐the‐art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker‐based tracking system which delivers 3D information with respect to the position of the marker.

  16. Computation of Edge-Edge-Edge Events Based on Conicoid Theory for 3-D Object Recognition

    WU Chenye; MA Huimin

    2009-01-01

    The availability of a good viewpoint space partition is crucial in three dimensional (3-D) object recognition on the approach of aspect graph. There are two important events depicted by the aspect graph approach, edge-edge-edge (EEE) events and edge-vertex (EV) events. This paper presents an algorithm to compute EEE events by characteristic analysis based on conicoid theory, in contrast to current algorithms that focus too much on EV events and often overlook the importance of EEE events. Also, the paper provides a standard flowchart for the viewpoint space partitioning based on aspect graph theory that makes it suitable for perspective models. The partitioning result best demonstrates the algorithm's efficiency with more valuable viewpoints found with the help of EEE events, which can definitely help to achieve a high recognition rate for 3-D object recognition.

  17. Local shape feature fusion for improved matching, pose estimation and 3D object recognition

    Buch, Anders Glent; Petersen, Henrik Gordon; Krüger, Norbert

    2016-01-01

    We provide new insights to the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state of the art features to systematic evaluations based on multiple datasets from different sources in a uniform manner...... several feature matches with a limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, which is consistent over all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of....... We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results, namely that the performance of the recognition system does not correlate well with the performance of...

  18. Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia

    Bambang Eka Purnama; Lies Yulianto; Muga Linggar Famukhit; Maryono

    2013-01-01

    Pacitan has a wide range of tourism activities. One category of tourism in the Pacitan district is its historical attractions. These sites carry educational, historical and cultural values, which must be maintained and preserved as tourism assets of the Pacitan Regency. However, the historical sites are currently rarely visited, and some students also do not understand the history of each of these attractions. Hence, an information medium was made in the form of interactive 3D virtual applications P...

  19. Architectural Reconstruction of 3D Building Objects through Semantic Knowledge Management

    Yucong, Duan; Cruz, Christophe; Nicolle, Christophe

    2010-01-01

    International audience This paper presents ongoing research which aims at combining geometrical analysis of point clouds and semantic rules to detect 3D building objects. Firstly, by applying a previous semantic formalization investigation, we propose a classification of related knowledge into definitions, partial knowledge and ambiguous knowledge to facilitate understanding and design. Secondly, an empirical implementation is conducted on a simplified building prototype complying with t...

  20. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  1. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  2. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction technique (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performances using mean structural similarity (MSSIM) values.

  3. A methodology for 3D modeling and visualization of geological objects

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  4. A Global Hypothesis Verification Framework for 3D Object Recognition in Clutter.

    Aldoma, Aitor; Tombari, Federico; Stefano, Luigi Di; Vincze, Markus

    2016-07-01

    Pipelines to recognize 3D objects despite clutter and occlusions usually end up with a final verification stage whereby recognition hypotheses are validated or dismissed based on how well they explain sensor measurements. Unlike previous work, we propose a Global Hypothesis Verification (GHV) approach which regards all hypotheses jointly so as to account for mutual interactions. GHV provides a principled framework to tackle the complexity of our visual world by leveraging a plurality of recognition paradigms and cues. Accordingly, we present a 3D object recognition pipeline deploying both global and local 3D features as well as shape and color. Thereby, and facilitated by the robustness of the verification process, diverse object hypotheses can be gathered and weak hypotheses need not be suppressed too early to trade sensitivity for specificity. Experiments demonstrate the effectiveness of our proposal, which significantly improves over the state of the art and attains ideal performance (no false negatives, no false positives) on three out of the six most relevant and challenging benchmark datasets. PMID:26485476

  5. A methodology for 3D modeling and visualization of geological objects

    ZHANG LiQiang; TAN YuMin; KANG ZhiZhong; RUI XiaoPing; ZHAO YuanYuan; LIU Liu

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  6. Color and size interactions in a real 3D object similarity task.

    Ling, Yazhu; Hurlbert, Anya

    2004-08-31

    In the natural world, objects are characterized by a variety of attributes, including color and shape. The contributions of these two attributes to object recognition are typically studied independently of each other, yet they are likely to interact in natural tasks. Here we examine whether color and size (a component of shape) interact in a real three-dimensional (3D) object similarity task, using solid domelike objects whose distinct apparent surface colors are independently controlled via spatially restricted illumination from a data projector hidden to the observer. The novel experimental setup preserves natural cues to 3D shape from shading, binocular disparity, motion parallax, and surface texture cues, while also providing the flexibility and ease of computer control. Observers performed three distinct tasks: two unimodal discrimination tasks, and an object similarity task. Depending on the task, the observer was instructed to select the indicated alternative object which was "bigger than," "the same color as," or "most similar to" the designated reference object, all of which varied in both size and color between trials. For both unimodal discrimination tasks, discrimination thresholds for the tested attribute (e.g., color) were increased by differences in the secondary attribute (e.g., size), although this effect was more robust in the color task. For the unimodal size-discrimination task, the strongest effects of the secondary attribute (color) occurred as a perceptual bias, which we call the "saturation-size effect": Objects with more saturated colors appear larger than objects with less saturated colors. In the object similarity task, discrimination thresholds for color or size differences were significantly larger than in the unimodal discrimination tasks. We conclude that color and size interact in determining object similarity, and are effectively analyzed on a coarser scale, due to noise in the similarity estimates of the individual attributes

  7. Development of a system for 3D reconstruction of objects using passive computer vision methods

    Gec, Sandi

    2015-01-01

    The main goal of the master thesis is to develop a system for reconstruction of 3D objects from colour images. The main focus is on passive computer vision methods from which we select two, i.e., Stereo vision and Space carving. Both methods require information about camera poses. The camera pose for a given image is estimated from the information obtained by detecting a reference object, i.e., a standard A4 paper sheet. We develop an Android based mobile application to guide a user during im...

  8. 3D high- and super-resolution imaging using single-objective SPIM.

    Galland, Remi; Grenci, Gianluca; Aravind, Ajay; Viasnoff, Virgile; Studer, Vincent; Sibarita, Jean-Baptiste

    2015-07-01

    Single-objective selective-plane illumination microscopy (soSPIM) is achieved with micromirrored cavities combined with a laser beam-steering unit installed on a standard inverted microscope. The illumination and detection are done through the same objective. soSPIM can be used with standard sample preparations and features high background rejection and efficient photon collection, allowing for 3D single-molecule-based super-resolution imaging of whole cells or cell aggregates. Using larger mirrors enabled us to broaden the capabilities of our system to image Drosophila embryos. PMID:25961414

  9. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automati...

  10. Creating of 3D map of temperature fields OKR at depths of around 1000 m

    Kajzar, Vlastimil; Pavelek, Z.

    Vol. 5. Ostrava: Ústav geoniky AV ČR, 2014 - (Koníček, P.; Souček, K.; Heroldová, N.). s. 91-92 ISBN 978-80-86407-49-4. [5th International Colloquium on Geomechanics and Geophysics. 24.06.2014-27.06.2014, Ostravice, Karolínka] Institutional support: RVO:68145535 Keywords : temperature field * rock massif * OKR * exploration * 3D map Subject RIV: DH - Mining, incl. Coal Mining

  11. Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras

    M. Martínez-Zarzuela

    2014-02-01

    Full Text Available This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, thereby addressing the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which also becomes an issue for the TCP/IP communications of the distributed system. Since the traffic volume is too high, the 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress the color maps. Information from the different sources is collected on a central client computer, where the point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the skeletons detected locally by each Kinect, so that the monitoring of people is robust to self- and inter-user occlusions. Final skeletons are labeled, and the trajectories of every joint can be saved for event reconstruction or further analysis.
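
    The depth-to-RGB packing idea mentioned above can be illustrated with the simplified (and assumed, not the authors' exact) byte-split scheme below; note that a lossy video codec would corrupt such a naive packing, which is why a more codec-robust self-coding is needed in practice.

    ```python
    import numpy as np

    def depth_to_rgb(depth_u16):
        """Pack a 16-bit depth map into an 8-bit, 3-channel image (sketch)."""
        high = (depth_u16 >> 8).astype(np.uint8)    # most significant byte -> channel 0
        low = (depth_u16 & 0xFF).astype(np.uint8)   # least significant byte -> channel 1
        return np.dstack([high, low, np.zeros_like(high)])

    def rgb_to_depth(rgb):
        """Recover the 16-bit depth map from the packed image."""
        return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)
    ```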

  12. Fast and flexible 3D object recognition solutions for machine vision applications

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
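
    Since the recognizer above localizes objects by detecting geometric primitives in 3D data, a minimal RANSAC plane-fitting sketch is included here as an assumed illustration of primitive detection in a point cloud (not the authors' best-fit implementation); points is an N x 3 array.

    ```python
    import numpy as np

    def ransac_plane(points, iters=500, thresh=0.002, seed=0):
        """Return (normal, point_on_plane) of the dominant plane and its inlier count."""
        rng = np.random.default_rng(seed)
        best_inliers, best_model = 0, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                              # degenerate sample, skip
            n /= norm
            dist = np.abs((points - p0) @ n)          # point-to-plane distances
            inliers = int((dist < thresh).sum())
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (n, p0)
        return best_model, best_inliers
    ```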

  13. Volumetric Next-best-view Planning for 3D Object Reconstruction with Positioning Error

    J. Irving Vasquez-Gomez

    2014-10-01

    Full Text Available Three-dimensional (3D) object reconstruction is the process of building a 3D model of a real object. This task is performed by taking several scans of an object from different locations (views). Due to the limited field of view of the sensor and the object’s self-occlusions, it is a difficult problem to solve. In addition, sensor positioning by robots is not perfect, making the actual view different from the expected one. We propose a next best view (NBV) algorithm that determines each view to reconstruct an arbitrary object. Furthermore, we propose a method to deal with the uncertainty in sensor positioning. The algorithm fulfills all the constraints of a reconstruction process, such as new information, positioning constraints, sensing constraints and registration constraints. Moreover, it improves the scan’s quality and reduces the navigation distance. The algorithm is based on a search-based paradigm where a set of candidate views is generated and then each candidate view is evaluated to determine which one is the best. To deal with positioning uncertainty, we propose a second stage which re-evaluates the views according to their neighbours, such that the best view is that which is within a region of good views. The results of simulation and comparisons with previous approaches are presented.
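
    A minimal sketch of the search-based candidate evaluation described above, assuming hypothetical helpers information_gain(view, model), reachable(view) and travel_cost(view, pose) supplied by the reconstruction system (none of these names come from the paper).

    ```python
    def next_best_view(candidates, partial_model, current_pose,
                       information_gain, reachable, travel_cost, w_nav=0.1):
        """Score each candidate view and return the best feasible one."""
        best_view, best_score = None, float("-inf")
        for view in candidates:
            if not reachable(view):                          # positioning/sensing constraints
                continue
            gain = information_gain(view, partial_model)     # expected newly observed surface
            score = gain - w_nav * travel_cost(view, current_pose)
            if score > best_score:
                best_view, best_score = view, score
        return best_view
    ```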

  14. Recognition of 3-D objects based on Markov random field models

    HUANG Ying; DING Xiao-qing; WANG Sheng-jin

    2006-01-01

    The recognition of 3-D objects is quite a difficult task for computer vision systems. This paper presents a new object framework, which utilizes densely sampled grids with different resolutions to represent the local information of the input image. A Markov random field model is then created to model the geometric distribution of the object key nodes. Flexible matching, which aims to find the accurate correspondence map between the key points of two images, is performed by combining the local similarities and the geometric relations together using the highest confidence first method. Afterwards, a global similarity is calculated for object recognition. Experimental results on the Coil-100 object database, which consists of 7 200 images of 100 objects, are presented. When the numbers of templates vary from 4, 8, 18 to 36 for each object, and the remaining images compose the test sets, the object recognition rates are 95.75 %, 99.30 %, 100.0 % and 100.0 %, respectively. The excellent recognition performance is much better than those of the other cited references, which indicates that our approach is well-suited for appearance-based object recognition.

  15. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points at sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality, and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design a complete system taking care of all thinkable instances now and in the future within one constrained software design complex. On several occasions we have been advocating for a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, where the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focusing on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  16. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 - 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.

  17. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, and for photogrammetry, a digital still camera with a high pixel resolution is indispensable. In some 3D modeling tasks, the two methods are often integrated to get satisfactory results. Although many research works have been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Nowadays there are many consumer digital cameras, such as the Canon EOS 5D Mark II, which usually offer still photo recording at more than 10 megapixels and full 1080p HD movie recording, so an integrated scanning system can be designed using such a camera. A square plate glued with coded marks is used to place the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate make up a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes can be obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. The laser scanning results in a dense point cloud which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusion of the feature points, the rough volume and the dense point cloud. The design

  18. Polarizability of 2D and 3D conducting objects using method of moments

    Shahpari, Morteza; Lewis, Andrew

    2014-01-01

    Fundamental antenna limits of the gain-bandwidth product are derived from polarizability calculations. This electrostatic technique has significant value in many antenna evaluations. Polarizability is not available in closed form for most antenna shapes and no commercial electromagnetic packages have this facility. Numerical computation of the polarizability for arbitrary conducting bodies was undertaken using an unstructured triangular mesh over the surface of 2D and 3D objects. Numerical results compare favourably with analytical solutions and can be implemented efficiently for large structures of arbitrary shape.

  19. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    The visible light radiated by some high temperature objects (less than 1200 °C) almost lies in the red and infrared waves. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in this present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapping phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images filtered by a low-pass filter are used to calculate the fringe order. Consequently, the 3D shape of a high temperature object is obtained by the unwrapping phase and the calibration parameter matrixes of the DLP projector and 3-CCD camera. The experimental results show that the unwrapping phase is completely corrected with the filtering method by removing the high frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1 : 1000. (paper)
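
    As a hedged illustration of the phase-shifting step described above (assuming at least three frames with phase shifts of 2*pi*k/N and that the blue fringes end up in channel index 2 of each captured image; this is not the authors' code), the wrapped phase can be recovered from the blue channel as follows:

    ```python
    import numpy as np

    def wrapped_phase_from_blue(images, blue_index=2):
        """N-step phase-shifting recovery of the wrapped phase from the blue channel."""
        n = len(images)                                   # number of phase-shifted frames (>= 3)
        num, den = 0.0, 0.0
        for k, img in enumerate(images):
            b = img[..., blue_index].astype(np.float64)   # keep only the blue channel
            num += b * np.sin(2 * np.pi * k / n)
            den += b * np.cos(2 * np.pi * k / n)
        return -np.arctan2(num, den)                      # wrapped phase in (-pi, pi]
    ```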

  20. Calibration target reconstruction for 3-D vision inspection system of large-scale engineering objects

    Yin, Yongkai; Peng, Xiang; Guan, Yingjian; Liu, Xiaoli; Li, Ameng

    2010-11-01

    It is usually difficult to calibrate the 3-D vision inspection system that may be employed to measure the large-scale engineering objects. One of the challenges is how to in-situ build-up a large and precise calibration target. In this paper, we present a calibration target reconstruction strategy to solve such a problem. First, we choose one of the engineering objects to be inspected as a calibration target, on which we paste coded marks on the object surface. Next, we locate and decode marks to get homologous points. From multiple camera images, the fundamental matrix between adjacent images can be estimated, and then the essential matrix can be derived with priori known camera intrinsic parameters and decomposed to obtain camera extrinsic parameters. Finally, we are able to obtain the initial 3D coordinates with binocular stereo vision reconstruction, and then optimize them with the bundle adjustment by considering the lens distortions, leading to a high-precision calibration target. This reconstruction strategy has been applied to the inspection of an industrial project, from which the proposed method is successfully validated.
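
    The pose-recovery chain described above (fundamental matrix from homologous coded marks, essential matrix from the known intrinsics, decomposition, then triangulation before bundle adjustment) can be sketched with standard OpenCV calls; pts1 and pts2 are assumed to be N x 2 float arrays of matched mark centers and K the intrinsic matrix, and the recovered translation is only defined up to scale.

    ```python
    import cv2
    import numpy as np

    def relative_pose_and_points(pts1, pts2, K):
        """Estimate the relative pose between two views and triangulate initial 3D points."""
        F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
        E = K.T @ F @ K                                       # essential matrix from F and K
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)        # t is up to scale

        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at the origin
        P2 = K @ np.hstack([R, t])                            # second camera pose
        X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4 x N points
        return R, t, (X_h[:3] / X_h[3]).T                     # initial 3D coordinates
    ```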

  1. 3D Objects Localization Using Fuzzy Approach and Hierarchical Belief Propagation: Application at Level Crossings

    Dufaux A

    2011-01-01

    Full Text Available Technological solutions for obstacle-detection systems have been proposed to prevent accidents in safety-transport applications. In order to avoid the limits of these proposed technologies, an obstacle-detection system utilizing stereo cameras is proposed to detect and localize multiple objects at level crossings. Background subtraction is first performed using the color independent component analysis technique, which has proved its performance against other well-known object-detection methods. The main contribution is the development of a robust stereo-matching algorithm which reliably localizes in 3D each segmented object. A standard stereo dataset and real-world images are used to test and evaluate the performances of the proposed algorithm to prove the efficiency and the robustness of the proposed video-surveillance system.

  2. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene that can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited range of depth of field (DOF). In this Letter, we experimentally demonstrate a 3D integral-imaging endoscopy with tunable DOF by using a single large-aperture focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in synthetic-aperture integral imaging 3D endoscope. In our experiments, the image plane in the integral imaging pickup process can be tuned from 18 to 38 mm continuously using a large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report on synthetic aperture integral imaging 3D endoscopy with a large-aperture LC lens that can provide high spatial resolution 3D imaging with an extend DOF. PMID:26258358

  3. 3D Skeleton model derived from Kinect Depth Sensor Camera and its application to walking style quality evaluations

    Kohei Arai

    2013-07-01

    Full Text Available Feature extraction for gait recognition has been studied widely. Approaches to this task are divided into two categories: model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters via modeling or tracking body components such as limbs, legs, arms and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes, and their advantage is a low computational cost compared to model-based approaches. However, they are usually not robust to viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors are expensive and can only be afforded by big animation studios. Fortunately, the Kinect camera equipped with a depth sensor is now available on the market at a very low price compared to any mocap device. Of course its accuracy is not as good as that of the expensive devices, but with some preprocessing we can remove the jitter and noise in the 3D skeleton points. Our proposed method is a model-based feature extraction approach, which we call the 3D skeleton model. Using a 3D skeleton model for extracting gait is itself new, considering that all previous models used 2D skeletons; its advantage is obtaining accurate 3D coordinates for each skeleton point rather than only 2D points. We use the Kinect to get the depth data and the Ipisoft mocap software to extract the 3D skeleton model from the Kinect video. The experimental results show 86.36% correctly classified instances using SVM.
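
    A rough sketch of the final classification step, under the assumption that X holds per-sequence gait feature vectors derived from the 3D skeleton joints and y the subject labels (scikit-learn is used here purely for illustration; the record does not state which SVM implementation was used):

    ```python
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def evaluate_gait_svm(X, y, folds=10):
        """Mean cross-validated accuracy of an RBF-kernel SVM on gait features."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        return cross_val_score(clf, X, y, cv=folds).mean()
    ```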

  4. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used

  5. Lapse-time dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-07-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: firstly, we evaluate the contribution of surface and body wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Secondly, we compare the lapse-time behavior in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.
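
    The linear-combination model of depth sensitivity described above can be written in the following notation sketch (the symbols are assumed here, not copied from the paper): the total coda sensitivity kernel at lapse time t mixes a surface-wave kernel and a body-wave kernel with a lapse-time dependent partition coefficient.

    ```latex
    K(\mathbf{r}, t) \;=\; \alpha(t)\, K_{\mathrm{surf}}(\mathbf{r}) \;+\; \bigl(1 - \alpha(t)\bigr)\, K_{\mathrm{body}}(\mathbf{r})
    ```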

  6. Thickness and clearance visualization based on distance field of 3D objects

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
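
    An illustrative, CPU-side sketch of a distance-field-based thickness estimate (this uses SciPy's Euclidean distance transform on a voxelized solid as a stand-in for the paper's GPU distance-field construction; twice the distance to the nearest surface is only a rough local-thickness proxy for visualization):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def thickness_field(solid_voxels, voxel_size=1.0):
        """Rough local-thickness proxy on a boolean voxel grid (True = inside the solid)."""
        dist_inside = distance_transform_edt(solid_voxels) * voxel_size  # distance to the surface
        return 2.0 * dist_inside
    ```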

  7. Fabrication of 3D Templates Using a Large Depth of Focus Femtosecond Laser

    Li, Xiao-Fan; Winfield, Richard; O'Brien, Shane; Chen, Liang-Yao

    2009-09-01

    We report the use of a large depth of focus Bessel beam in the fabrication of cell structures. Two axicon lenses are investigated in the formation of high aspect ratio line structures. A sol-gel resin, with good mechanical strength, is polymerised in a modified two-photon polymerisation system. Examples of different two-dimensional grids are presented to show that the lateral resolution can be maintained even in the rapid fabrication of high-sided structures.

  8. Fabrication of 3D Templates Using a Large Depth of Focus Femtosecond Laser

    We report the use of a large depth of focus Bessel beam in the fabrication of cell structures. Two axicon lenses are investigated in the formation of high aspect ratio line structures. A sol-gel resin, with good mechanical strength, is polymerised in a modified two-photon polymerisation system. Examples of different two-dimensional grids are presented to show that the lateral resolution can be maintained even in the rapid fabrication of high-sided structures

  9. Active learning in the lecture theatre using 3D printed objects.

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  10. Active learning in the lecture theatre using 3D printed objects [version 2; referees: 2 approved

    David P. Smith

    2016-06-01

    Full Text Available The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  11. Fully integrated system-on-chip for pixel-based 3D depth and scene mapping

    Popp, Martin; De Coi, Beat; Thalmann, Markus; Gancarz, Radoslav; Ferrat, Pascal; Dürmüller, Martin; Britt, Florian; Annese, Marco; Ledergerber, Markus; Catregn, Gion-Pol

    2012-03-01

    We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS. This integration makes it possible to achieve a Quantum Efficiency (QE) of >80% in the full wavelength band from 520nm up to 900nm as well as very high timing precision in the sub-ns range which is needed for exact detection of the phase delay. The SoC features 8x8 pixels and includes all necessary sub-components such as ToF pixel array, voltage generation and regulation, non-volatile memory for configuration, LED driver for active illumination, digital SPI interface for easy communication, column based 12bit ADC converters, PLL and digital data processing with temporary data storage. The system can be operated at up to 100 frames per second.
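
    As a hedged illustration of the ToF phase-to-distance relation usually associated with this measurement principle (the formula and variable names are assumptions, not taken from the epc600 documentation), four correlation samples c0..c3 per pixel and the modulation frequency give:

    ```python
    import numpy as np

    C_LIGHT = 299_792_458.0  # speed of light in m/s

    def tof_distance(c0, c1, c2, c3, f_mod):
        """Distance from four-bucket ToF correlation samples, within the ambiguity range."""
        phase = np.arctan2(c3 - c1, c0 - c2)       # wrapped phase offset of the return signal
        phase = np.mod(phase, 2.0 * np.pi)         # map to [0, 2*pi)
        return C_LIGHT * phase / (4.0 * np.pi * f_mod)
    ```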

  12. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Feng Xu

    Full Text Available Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies.

  13. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  14. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when it is created, or existing content can be re-formatted without much difficulty.

  15. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  16. Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia

    Bambang Eka Purnama

    2013-04-01

    Full Text Available Pacitan has a wide range of tourism assets. One group of attractions in Pacitan district is its historical sites. These sites have educational, historical and cultural value, and must be maintained and preserved as a tourism asset of Kabupaten Pacitan. However, the historical sites are rarely visited, and many students do not understand the history behind each of them. An interactive 3D virtual media application of Pacitan's historical tours was therefore produced in the form of an interactive CD. The purpose of the interactive application is to introduce Pacitan's historical tours to students and the community, and to create interactive information media that give an overview of the history of the existing tourist sites in Pacitan. The benefit of this research is that students and the public will get to know the history of Pacitan's historical attractions; the application also serves as a medium for introducing the sites and as an information medium that helps preserve them. The methods used in producing the 3D virtual interactive multimedia application of Pacitan's historical attractions were literature study, observation and interviews. The design was produced using 3ds Max 2010, Adobe Director 11.5, Adobe Photoshop CS3 and Corel Draw. The result of this research is interactive information media that provide knowledge about the history of Pacitan.

  17. Electromagnetic 3D subsurface imaging with source sparsity for a synthetic object

    Pursiainen, Sampsa

    2016-01-01

    This paper concerns electromagnetic 3D subsurface imaging in connection with sparsity of signal sources. We explored an imaging approach that can be implemented in situations that allow obtaining a large amount of data over a surface or a set of orbits but at the same time require sparsity of the signal sources. Characteristic to such a tomography scenario is that it necessitates the inversion technique to be genuinely three-dimensional: For example, slicing is not possible due to the low number of sources. Here, we primarily focused on astrophysical subsurface exploration purposes. As an example target of our numerical experiments we used a synthetic small planetary object containing three inclusions, e.g. voids, of the size of the wavelength. A tetrahedral arrangement of source positions was used, it being the simplest symmetric point configuration in 3D. Our results suggest that somewhat reliable inversion results can be produced within the present a priori assumptions, if the data can be recorded at a spe...

  18. An alternative 3D inversion method for magnetic anomalies with depth resolution

    M. Chiappini

    2006-06-01

    Full Text Available This paper presents a new method to invert magnetic anomaly data in a variety of non-complex contexts when a priori information about the sources is not available. The region containing magnetic sources is discretized into a set of homogeneously magnetized rectangular prisms, polarized along a common direction. The magnetization distribution is calculated by solving an underdetermined linear system, through the simultaneous minimization of the norm of the solution and the misfit between the observed and the calculated field. Our algorithm makes use of a dipolar approximation to compute the magnetic field of the rectangular blocks. We show how this approximation, in conjunction with other correction factors, presents numerous advantages in terms of computing speed and depth resolution, and does not significantly affect the success of the inversion. The algorithm is tested on both synthetic and real magnetic datasets.
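
    The simultaneous minimization of solution norm and data misfit described above is, in generic form, a damped (Tikhonov-regularized) least-squares problem. The sketch below shows only that generic formulation; the forward operator G, the damping weight and the test data are hypothetical stand-ins, not the authors' dipolar prism operator or correction factors.

    ```python
    import numpy as np

    def damped_least_squares(G, d, lam):
        """Minimise ||G m - d||^2 + lam * ||m||^2 for an underdetermined G.

        G   : (n_data, n_cells) forward operator (e.g. prism/dipole responses)
        d   : (n_data,) observed anomaly values
        lam : damping weight trading data misfit against model norm
        """
        n = G.shape[1]
        # Normal equations of the regularised problem: (G^T G + lam I) m = G^T d
        A = G.T @ G + lam * np.eye(n)
        return np.linalg.solve(A, G.T @ d)

    # Tiny synthetic test: 5 observations, 20 unknown cell magnetisations.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(5, 20))
    m_true = np.zeros(20)
    m_true[7] = 1.0
    d = G @ m_true
    m_est = damped_least_squares(G, d, lam=1e-2)
    print(np.round(m_est, 2))  # minimum-norm estimate that fits the data
    ```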

  19. An overview of 3D topology for LADM-based objects

    Zulkifli, N.A.; Rahman, A.A.; Van Oosterom, P.J.M.

    2015-01-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological mod

  20. WAVES GENERATED BY A 3D MOVING BODY IN A TWO-LAYER FLUID OF FINITE DEPTH

    ZHU Wei; YOU Yun-xiang; MIAO Guo-ping; ZHAO Feng; ZHANG Jun

    2005-01-01

    This paper is concerned with the waves generated by a 3-D body advancing beneath the free surface with constant speed in a two-layer fluid of finite depth. By applying Green's theorem, a layered integral equation system based on the Rankine source for the perturbed velocity potential generated by the moving body was derived within potential flow theory. A four-node isoparametric element method was used to solve the layered integral equation system. The surface and interface waves generated by a moving ball were calculated numerically. The results were compared with the analytical results for a moving source with constant velocity.
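
    The Rankine source at the heart of the integral equation is simply the free-space point-source potential. As a minimal illustration only (not the authors' panel method or their two-layer treatment), the perturbation potential of a set of discrete Rankine sources can be evaluated as follows; the source positions and strengths are arbitrary placeholders, and the sign convention varies in the literature.

    ```python
    import numpy as np

    def rankine_potential(field_points, source_points, strengths):
        """Velocity potential induced by discrete Rankine (point) sources:
        phi(x) = -sum_j sigma_j / (4*pi*|x - x_j|)  (free-space convention)."""
        phi = np.zeros(len(field_points))
        for xj, sigma in zip(source_points, strengths):
            r = np.linalg.norm(field_points - xj, axis=1)
            phi += -sigma / (4.0 * np.pi * r)
        return phi

    # Hypothetical example: two sources below the free surface, one field point.
    sources = np.array([[0.0, 0.0, -1.0], [0.5, 0.0, -1.0]])
    sigma = np.array([1.0, -1.0])
    print(rankine_potential(np.array([[2.0, 0.0, 0.0]]), sources, sigma))
    ```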

  1. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate th...

  2. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation between the observed image and the bank of filters using a combination of data and task parallelism, taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that best matches the current view of the target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.
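
    The record does not give the filter design, but the matching step it describes, correlating the scene against a bank of templates and keeping the strongest peak, can be sketched with plain FFT-based cross-correlation. This is a simplified CPU stand-in for the adaptive, space-variant filters and the GPU implementation; the scene and filter bank below are random placeholders.

    ```python
    import numpy as np

    def correlate_fft(scene, template):
        """Cross-correlate a 2D scene with a (zero-mean) template via the FFT.
        Returns a correlation surface the same size as the scene."""
        H, W = scene.shape
        S = np.fft.fft2(scene)
        T = np.fft.fft2(template - template.mean(), s=(H, W))
        return np.real(np.fft.ifft2(S * np.conj(T)))

    def best_pose(scene, filter_bank):
        """Pick the filter (e.g. one per candidate orientation) whose correlation
        peak is highest; return (filter index, peak row, peak column)."""
        best_score, best_pose_out = -np.inf, None
        for idx, templ in enumerate(filter_bank):
            c = correlate_fft(scene, templ)
            peak = c.max()
            if peak > best_score:
                r, col = np.unravel_index(c.argmax(), c.shape)
                best_score, best_pose_out = peak, (idx, r, col)
        return best_pose_out

    # Hypothetical usage: a 64x64 scene and 8 "rotated view" 16x16 templates.
    rng = np.random.default_rng(1)
    scene = rng.normal(size=(64, 64))
    bank = [rng.normal(size=(16, 16)) for _ in range(8)]
    print(best_pose(scene, bank))
    ```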

  3. Ball-scale based hierarchical multi-object recognition in 3D medical images

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  4. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    Mul, Frits F M de; George, Nibu A; Qiu Qingjun; Rakhorst, Gerhard; Schutte, Harm K [Department of Biomedical Engineering BMSA, Faculty of Medicine, University Medical Center Groningen UMCG, University of Groningen, PO Box 196, 9700 AD Groningen (Netherlands)], E-mail: ffm@demul.net

    2009-07-07

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents, for the first time, a direct means of validation of these calculations. This may start a new field in vocal folds research.

  5. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents, for the first time, a direct means of validation of these calculations. This may start a new field in vocal folds research.

  6. Development of confocal 3D X-ray fluorescence instrument and its applications to micro depth profiling

    We have developed a confocal micro X-ray fluorescence instrument. Two independent X-ray tubes with Mo and Cr targets were installed in this instrument. Two polycapillary full X-ray lenses were attached to the two X-ray tubes, and a polycapillary half X-ray lens was attached to the X-ray detector (a silicon drift detector, SDD). Finally, the focal spots of the three lenses were adjusted to a common position. Using this confocal micro X-ray fluorescence instrument, depth profiling of layered samples was performed. It was found that the depth resolution depended on the energy of the measured X-ray fluorescence. In addition, X-ray elemental maps were determined at different depths for an agar sample containing metal fragments of Cu, Ti and Au. The elemental maps showed the actual distributions of the metal fragments in the agar, indicating that confocal micro X-ray fluorescence is a feasible technique for non-destructive depth analysis and 3D X-ray fluorescence analysis. (author)

  7. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    Shouyi Yin; Hao Dong; Guangli Jiang; Leibo Liu; Shaojun Wei

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor....

  8. 3D Micro-PIXE at atmospheric pressure: A new tool for the investigation of art and archaeological objects

    The paper describes a novel experiment characterized by the development of a confocal geometry in an external Micro-PIXE set-up. Positioning an X-ray optic in front of the X-ray detector, properly aligned with respect to the proton micro-beam focus, provided the possibility of carrying out 3D Micro-PIXE analysis. As a first application, depth intensity profiles of the major elements that compose the patina layer of a quaternary bronze alloy were measured. A simulation-based treatment of the 3D Micro-PIXE data yielded elemental concentration profiles in rather good agreement with corresponding results obtained by electron probe micro-analysis of a cross-sectioned patina sample. With its non-destructive and depth-resolving properties, as well as its feasibility at atmospheric pressure, 3D Micro-PIXE seems especially suited for investigations in the field of cultural heritage.

  9. Software for Building Models of 3D Objects via the Internet

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  10. mEdgeBoxes: objectness estimation for depth image

    Fang, Zhiwen; Cao, Zhiguo; Xiao, Yang; Zhu, Lei; Lu, Hao

    2015-12-01

    Object detection is one of the most important research topics in computer vision. Recently, category-independent objectness in RGB images has become a hot field because of its generalization ability and its efficiency as a pre-filtering step for object detection. Many traditional applications have been transferred from RGB images to depth images since economical depth sensors, such as the Kinect, were popularized. Depth data represent distance information. Because of this special characteristic, objectness evaluation methods designed for RGB images are often invalid on depth images. In this study, we propose mEdgeBoxes to evaluate objectness in depth images. Aside from detecting edges from the raw depth information, we extract another edge map from the orientation information based on the normal vectors. The two kinds of edge maps are integrated and fed to EdgeBoxes [1] in order to produce the object proposals. The experimental results on two challenging datasets demonstrate that the detection rate of the proposed objectness estimation method can achieve over 90% with 1000 windows. It is worth noting that our approach generally outperforms the state-of-the-art methods in terms of detection rate.
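
    A minimal sketch of the underlying idea of deriving an edge map from surface-normal orientation, assuming an orthographic depth grid and ignoring camera intrinsics; the angle threshold and the toy scene are hypothetical, and this is not the authors' pipeline (which feeds the resulting maps into EdgeBoxes).

    ```python
    import numpy as np

    def normals_from_depth(depth):
        """Per-pixel surface normals from a depth image, treating the depth map
        as a height field z = depth[y, x] (camera intrinsics ignored here)."""
        dz_dy, dz_dx = np.gradient(depth.astype(float))
        # Normal of the surface z - f(x, y) = 0 is (-df/dx, -df/dy, 1), normalised.
        n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        return n

    def normal_edge_map(depth, angle_thresh_deg=20.0):
        """Mark pixels whose normal deviates strongly from that of the
        right/bottom neighbour; these orientation changes act as edges."""
        n = normals_from_depth(depth)
        dot_x = np.clip((n[:, :-1] * n[:, 1:]).sum(axis=2), -1.0, 1.0)
        dot_y = np.clip((n[:-1, :] * n[1:, :]).sum(axis=2), -1.0, 1.0)
        edges = np.zeros(depth.shape, dtype=bool)
        thr = np.cos(np.deg2rad(angle_thresh_deg))
        edges[:, :-1] |= dot_x < thr
        edges[:-1, :] |= dot_y < thr
        return edges

    # Hypothetical example: a box standing out of a flat background.
    depth = np.full((100, 100), 2.0)
    depth[30:70, 30:70] = 1.0            # a box 1 m closer to the sensor
    print(normal_edge_map(depth).sum())  # number of edge pixels around the box
    ```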

  11. Nanometer depth resolution in 3D topographic analysis of drug-loaded nanofibrous mats without sample preparation.

    Paaver, Urve; Heinämäki, Jyrki; Kassamakov, Ivan; Hæggström, Edward; Ylitalo, Tuomo; Nolvi, Anton; Kozlova, Jekaterina; Laidmäe, Ivo; Kogermann, Karin; Veski, Peep

    2014-02-28

    We showed that scanning white light interferometry (SWLI) can provide nanometer depth resolution in 3D topographic analysis of electrospun drug-loaded nanofibrous mats without sample preparation. The method permits rapidly investigating geometric properties (e.g. fiber diameter, orientation and morphology) and surface topography of drug-loaded nanofibers and nanomats. Electrospun nanofibers of a model drug, piroxicam (PRX), and hydroxypropyl methylcellulose (HPMC) were imaged. Scanning electron microscopy (SEM) served as a reference method. SWLI 3D images featuring 29 nm by 29 nm active pixel size were obtained of a 55 μm × 40 μm area. The thickness of the drug-loaded non-woven nanomats was uniform, ranging from 2.0 μm to 3.0 μm (SWLI), and independent of the ratio between HPMC and PRX. The average diameters (n=100, SEM) for drug-loaded nanofibers were 387 ± 125 nm (HPMC and PRX 1:1), 407 ± 144 nm (HPMC and PRX 1:2), and 290 ± 100 nm (HPMC and PRX 1:4). We found advantages and limitations in both techniques. SWLI permits rapid non-contacting and non-destructive characterization of layer orientation, layer thickness, porosity, and surface morphology of electrospun drug-loaded nanofibers and nanomats. Such analysis is important because the surface topography affects the performance of nanomats in pharmaceutical and biomedical applications. PMID:24378328

  12. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been a topic of research in many areas of science for many years. This development is stimulated by new technologies and tools which have appeared recently, such as digital photography, laser scanners, increases in equipment efficiency and the Internet. The objective of this paper is to present results of automatic modeling of selected close range objects, with the use of digital photographs acquired by the Hasselblad H4D50 camera. The author's software tool was utilized for calculations; it performs the successive stages of 3D model creation. The modeling process is presented as a complete process which starts from the acquisition of images and is completed by the creation of a photorealistic 3D model in the same software environment. Experiments were performed for selected close range objects, with appropriately arranged image geometry, forming a ring around the measured object. The Area Based Matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were utilized for automatic matching of points detected with the SUSAN algorithm. Reconstruction of the model surface is one of the important stages of 3D modeling. Reconstruction of precise surfaces from a non-organized cloud of points, acquired from automatic processing of digital images, is a difficult task which has not been finally solved. Creation of polygonal models which may meet high requirements concerning modeling and visualization is required in many applications. The polynomial method is usually the best way to precisely represent measurement results and, at the same time, to achieve an optimum description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the Ball pivoting method. These methods are mostly applied to modeling of uniform grids of points. Results of experiments proved that incorrect

  13. 3D Visualization System for Tracking and Identification of Objects Project

    National Aeronautics and Space Administration — Photon-X has developed a proprietary EO spatial phase technology that can passively collect 3-D images in real-time using a single camera-based system. This...

  14. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-01

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. PMID:25966320

  15. Design of an object oriented and modular architecture for a naval tactical simulator using Delta3D's game manager

    Toledo-Ramirez, Rommel

    2006-01-01

    The author proposes an architecture based on the Dynamic Actor Layer and the Game Manager in Delta3D to create a Networked Virtual Environment which could be used to train Navy Officers in tactics, allowing team training and doctrine rehearsal. The developed architecture is based on Object Oriented and Modular Design principles, while it explores the flexibility and strength of the Game Manager features in Delta3D game engine. The implementation of the proposed architecture is planned to be...

  16. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor v...

  17. Accurate object tracking system by integrating texture and depth cues

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement the texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  18. Digitising 3D surfaces of museum objects using photometric stereo-device

    Valach, Jaroslav; Vrba, David; Fíla, Tomáš; Bryscejn, Jan; Vavřík, Daniel

    Vol. 1. Dortmund: The LWL Industrial Museum Zeche Zollern, 2014 - (Bentkowska-Kafel, A.; Murphy, O.) ISSN 2409-9503. [From low-cost to high-tech. 3D-documentation in archaeology and monument preservation. Dortmund (DE), 16.10.2013-18.10.2013] R&D Projects: GA MK(CZ) DF11P01OVV001 Keywords : cultural heritage * 3D modelling * photometric stereo * surface topography documentation Subject RIV: AL - Art, Architecture, Cultural Heritage http://cosch.info/documents/10179/108557/2013_Denkmaeler+3D_Valach_Vrba_Fila+et+al.pdf/d7cf0a61-ddf4-41f4-a6d7-24fa172529c5

  19. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
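
    The DTW step mentioned above is a standard dynamic-programming alignment between a query trajectory and stored gesture templates. A generic sketch, not the authors' implementation; the toy 1D "swipe" and "circle" templates are hypothetical.

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Dynamic time warping distance between two gesture trajectories.
        seq_a, seq_b : arrays of shape (length, dim), e.g. 3D hand positions."""
        a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify_gesture(query, templates):
        """Nearest-template classification: the label whose template has the
        smallest DTW distance to the query trajectory."""
        return min(templates, key=lambda label: dtw_distance(query, templates[label]))

    # Hypothetical example with 1D trajectories (shape (length, 1)).
    templates = {
        "swipe": np.array([[0], [1], [2], [3]], float),
        "circle": np.array([[0], [1], [0], [-1], [0]], float),
    }
    query = np.array([[0], [0.9], [2.1], [3.2]])
    print(classify_gesture(query, templates))  # -> "swipe"
    ```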

  20. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  1. Learning to Grasp Unknown Objects Based on 3D Edge Information

    Bodenhagen, Leon; Kraft, Dirk; Popovic, Mila;

    2010-01-01

    In this work we refine an initial grasping behavior based on 3D edge information by learning. Based on a set of autonomously generated evaluated grasps and relations between the semi-global 3D edges, a prediction function is learned that computes a likelihood for the success of a grasp using either...... an offline or an online learning scheme. Both methods are implemented using a hybrid artificial neural network containing standard nodes with a sigmoid activation function and nodes with a radial basis function. We show that a significant performance improvement can be achieved....

  2. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning, nor depth perception, was required. The effectiveness of the maximum compactness and the minimum surface constraints were measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases, in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410
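
    The record does not define the compactness constraint explicitly; one common scale-invariant choice is C = V^2 / S^3, which is maximal for a sphere. A small numerical illustration under that assumption (not necessarily the exact formulation used by the authors):

    ```python
    import numpy as np

    def compactness(volume, surface_area):
        """Scale-invariant 3D compactness C = V**2 / S**3; the sphere attains the
        maximum C = 1/(36*pi), while elongated shapes score lower."""
        return volume ** 2 / surface_area ** 3

    def box_metrics(lx, ly, lz):
        """Volume and surface area of an axis-aligned box."""
        return lx * ly * lz, 2.0 * (lx * ly + ly * lz + lz * lx)

    # Sphere of radius 1: the reference maximum, C = 1/(36*pi) ~ 0.00884.
    print(compactness(4.0 / 3.0 * np.pi, 4.0 * np.pi))

    # Unit cube vs. an elongated 4 x 0.5 x 0.5 box of the same volume: the cube
    # is the more "compact" interpretation, which is what such a constraint favours.
    print(compactness(*box_metrics(1.0, 1.0, 1.0)))   # ~0.00463
    print(compactness(*box_metrics(4.0, 0.5, 0.5)))   # ~0.00163
    ```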

  3. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    Javier Contreras

    2015-11-01

    Full Text Available A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
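
    Sheet-of-light triangulation ultimately reduces to intersecting the camera ray through a detected laser pixel with the known laser plane. A minimal sketch of that geometry with a hypothetical pinhole camera and laser placement (not the simulated PSD-array read-out itself):

    ```python
    import numpy as np

    def triangulate_laser_pixel(u, v, K, plane_point, plane_normal):
        """3D point where the camera ray through pixel (u, v) meets the laser sheet.

        K            : 3x3 pinhole intrinsic matrix (camera at origin, +z forward)
        plane_point  : a 3D point on the laser plane (e.g. the laser origin)
        plane_normal : normal vector of the laser plane
        World units follow plane_point (here: millimetres)."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])        # ray direction
        # Solve n . (t*ray - p0) = 0 for the ray parameter t, then scale the ray.
        t = np.dot(plane_normal, plane_point) / np.dot(plane_normal, ray)
        return t * ray

    # Hypothetical setup: f = 800 px camera, laser 100 mm to the right of it,
    # laser sheet containing the vertical (y) direction and angled back towards
    # the optical axis so that sheet and viewing rays intersect in front of the camera.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    laser_origin = np.array([100.0, 0.0, 0.0])                # mm
    a = np.deg2rad(30.0)
    sheet_normal = np.array([np.cos(a), 0.0, np.sin(a)])
    print(triangulate_laser_pixel(400.0, 240.0, K, laser_origin, sheet_normal))
    # -> approximately [14.8, 0.0, 147.6] mm
    ```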

  4. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    Feng Xu; Lukas Helfen; Heikki Suhonen; Dan Elgrabli; Sam Bayat; Péter Reischig; Tilo Baumbach; Peter Cloetens

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome...

  5. Hands-free Evolution of 3D-printable Objects via Eye Tracking

    Cheney, Nick; Clune, Jeff; Yosinski, Jason; Lipson, Hod

    2013-01-01

    Interactive evolution has shown the potential to create amazing and complex forms in both 2-D and 3-D settings. However, the algorithm is slow and users quickly become fatigued. We propose that the use of eye tracking for interactive evolution systems will both reduce user fatigue and improve evolutionary success. We describe a systematic method for testing the hypothesis that eye tracking driven interactive evolution will be a more successful and easier-to-use design method than traditional ...

  6. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Peter; Baumbach, Tilo; Cloetens, Peter

    2014-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome...

  7. Real time moving object detection using motor signal and depth map for robot car

    Wu, Hao; Siu, Wan-Chi

    2013-12-01

    Moving object detection from a moving camera is a fundamental task in many applications. For moving robot car vision, the background motion is inherently a 3D motion structure. In this situation, conventional moving object detection algorithms cannot handle the 3D background modeling effectively and efficiently. In this paper, a novel scheme is proposed that utilizes the motor control signal and the depth map obtained from a stereo camera to model the perspective transform matrix between different frames under a moving camera. In our approach, the coordinate relationship between frames during camera motion is modeled by a perspective transform matrix obtained from the current motor control signals and the pixel depth values. Hence, the displacement of a static background pixel caused by the camera motion can be predicted by the perspective matrix, and pixels that deviate from this prediction are classified as moving foreground. To enhance the robustness of the classification, we allow a tolerance range during the perspective transform matrix prediction and use multiple reference frames to classify the pixels of the current frame. The proposed scheme has been found to detect moving objects for our moving robot car efficiently. Different from conventional approaches, our method can model the moving background as a 3D structure, without online model training. More importantly, the computational complexity and memory requirements are low, making it possible to implement this scheme in real time, which is particularly valuable for a robot vision system.
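
    The warp described above, predicting where a static background pixel should reappear given its depth and the camera motion derived from the motor signals, can be sketched for a calibrated pinhole camera and a known rigid motion (R, t). The intrinsics, motion and depth value below are hypothetical.

    ```python
    import numpy as np

    def predict_pixel(u, v, depth, K, R, t):
        """Predict where the pixel (u, v) with depth `depth` (metres) in frame k
        appears in frame k+1, given the camera motion (R, t) from frame k to k+1."""
        # Back-project to a 3D point in the frame-k camera coordinates.
        X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
        # Express the point in the frame-(k+1) camera coordinates and re-project.
        X2 = R @ X + t
        p = K @ X2
        return p[:2] / p[2]

    # Hypothetical example: the robot drives 0.1 m straight forward.
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, -0.1])   # the scene moves 0.1 m towards the camera
    print(predict_pixel(400.0, 240.0, 2.0, K, R, t))
    # -> approximately [404.2, 240.0]; a pixel landing far from this prediction
    #    would be classified as moving foreground.
    ```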

  8. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  9. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Maurizio Muzzupappa

    2013-08-01

    Full Text Available In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  10. Object data mining and analysis on 3D images of high precision industrial CT

    There are areas of interest in 3D images from high precision industrial CT, such as defects caused during the production process. In order to analyze these areas closely, the image processing software Amira was used on the data of a particular work piece sample to perform defect segmentation and display, defect measurement, evaluation and documentation. A data set obtained by scanning a vise sample with the lab CT system was analyzed, and the results turned out to be fairly good. (authors)

  11. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in the last years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions like object movement or a complex shaping, most methods become unsatisfying. Thus, the 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation" which enables a motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. Concluding, an overview of current and future fields of investigation is given.

  12. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed and must be compensated for in high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), adding new experimental investigations on the effect of length measurement errors.
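
    The eccentricity effect can be reproduced numerically by projecting a tilted circular target with a pinhole model, fitting a conic to the projected contour, and comparing its centre with the projection of the true circle centre. A sketch under hypothetical target and camera parameters (not the paper's simulation set-up):

    ```python
    import numpy as np

    def project(points, f):
        """Pinhole projection with focal length f (camera at origin, +z forward)."""
        return f * points[:, :2] / points[:, 2:3]

    def conic_center(xy):
        """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + g = 0
        through the points; its centre solves [[2a, b], [b, 2c]] [xc, yc] = [-d, -e]."""
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(A)           # null-space vector = conic coefficients
        a, b, c, d, e, _ = Vt[-1]
        return np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]), np.array([-d, -e]))

    # Hypothetical target: a 10 mm radius circle, tilted 45 deg about the x-axis,
    # centred 200 mm in front of an f = 50 mm camera and 30 mm off the optical axis.
    R_t, f = 10.0, 50.0
    centre3d = np.array([30.0, 0.0, 200.0])
    tilt = np.deg2rad(45.0)
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.column_stack([R_t * np.cos(theta),
                              R_t * np.sin(theta) * np.cos(tilt),
                              R_t * np.sin(theta) * np.sin(tilt)]) + centre3d

    ellipse_centre = conic_center(project(circle, f))
    true_centre = project(centre3d[None, :], f)[0]
    print(ellipse_centre - true_centre)   # small but systematic offset: the eccentricity
    ```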

  13. 3D phase micro-object studies by means of digital holographic tomography supported by algebraic reconstruction technique

    Bilski, B. J.; Jozwicka, A.; Kujawinska, M.

    2007-09-01

    Constant development of microelement technology requires the creation of new instruments to determine their basic physical parameters in 3D. The most efficient non-destructive method providing 3D information is tomography. In this paper we present Digital Holographic Tomography (DHT), in which the input data are provided by means of Digital Holography (DH). The main advantage of DH is the capability to capture several projections with a single hologram [1]. However, these projections have an uneven angular distribution and their number is significantly limited. Therefore the Algebraic Reconstruction Technique (ART), in which a few phase projections may be sufficient for proper 3D phase reconstruction, is implemented. The error analysis of the method and its additional limitations due to the shape and dimensions of the investigated object are presented. Finally, results of applying ART within the DHT method are presented on data reconstructed from a numerically generated hologram of a multimode fibre.
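
    ART itself is a Kaczmarz-type row-action scheme: the current estimate is repeatedly projected onto the hyperplane defined by each measured projection equation. A generic sketch on a small linear system W f = p (not the authors' holographic-tomography code; the projection matrix here is a random placeholder):

    ```python
    import numpy as np

    def art(W, p, n_sweeps=50, relax=0.5):
        """Algebraic Reconstruction Technique (Kaczmarz iterations) for W f = p.

        W : (n_rays, n_voxels) projection matrix (ray/voxel intersection weights)
        p : (n_rays,) measured projections (e.g. integrated phase along each ray)
        """
        f = np.zeros(W.shape[1])
        for _ in range(n_sweeps):
            for i in range(W.shape[0]):
                wi = W[i]
                denom = wi @ wi
                if denom == 0.0:
                    continue
                # Project the current estimate onto the hyperplane of equation i.
                f += relax * (p[i] - wi @ f) / denom * wi
        return f

    # Tiny synthetic example: 6 rays through a 4-voxel "object".
    rng = np.random.default_rng(2)
    W = rng.uniform(0.0, 1.0, size=(6, 4))
    f_true = np.array([0.0, 1.0, 0.5, 0.2])
    p = W @ f_true
    print(np.round(art(W, p), 3))   # close to f_true even with few projections
    ```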

  14. A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision

    Zwinderman, Matthijs; Rybski, Paul E.; Kootstra, Gert

    2010-01-01

    In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it using a laser pointer. The robot recognizes the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D position of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views...

  15. Technology of 3D map creation for 'Ukrytie' object internal premises

    The results of creating the main elements of an information technology for mapping the internal rooms of the 'Ukryttia' object are presented, based on digital stereo-photogrammetric processing of the imaging results. It is shown that sufficiently high accuracy of the mutual orientation of the images and of the reconstruction of individual features of the 'Ukryttia' object rooms is achieved. The mean relative error in determining the spatial dimensions of objects was 6%. The possibility, in principle, of using the proposed technology in practical mapping of the 'Ukryttia' object rooms is demonstrated. The maps created with the proposed technology can be presented as three-dimensional models in the AutoCad system for subsequent use

  16. Model-based recognition of 3-D objects by geometric hashing technique

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using the rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame models of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
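
    Geometric hashing stores model features in basis-invariant coordinates and recognizes by voting in a hash table. The sketch below uses 2D point features and a similarity-invariant basis for brevity, whereas the paper works with line features extracted from 2D views; the model shapes are hypothetical.

    ```python
    import itertools
    from collections import defaultdict
    import numpy as np

    def to_basis(p, b0, b1):
        """Coordinates of point p in the frame defined by the ordered basis (b0, b1):
        origin at b0, x-axis along (b1 - b0), unit length |b1 - b0|."""
        ex = b1 - b0
        ey = np.array([-ex[1], ex[0]])              # 90-degree rotation of ex
        d = p - b0
        return np.array([d @ ex, d @ ey]) / (ex @ ex)

    def quantise(coords, cell=0.1):
        return tuple(np.round(coords / cell).astype(int))

    def build_table(models, cell=0.1):
        """Hash table: quantised invariant coordinates -> list of (model, basis)."""
        table = defaultdict(list)
        for name, pts in models.items():
            for i, j in itertools.permutations(range(len(pts)), 2):
                for k, p in enumerate(pts):
                    if k in (i, j):
                        continue
                    key = quantise(to_basis(p, pts[i], pts[j]), cell)
                    table[key].append((name, (i, j)))
        return table

    def recognise(scene_pts, table, cell=0.1):
        """Try scene basis pairs and vote; return the model with the most votes."""
        votes = defaultdict(int)
        for i, j in itertools.permutations(range(len(scene_pts)), 2):
            for k, p in enumerate(scene_pts):
                if k in (i, j):
                    continue
                key = quantise(to_basis(p, scene_pts[i], scene_pts[j]), cell)
                for name, _ in table.get(key, []):
                    votes[name] += 1
        return max(votes, key=votes.get) if votes else None

    # Hypothetical models: an "L" shape and a "T" shape given as 2D corner points.
    models = {
        "L": np.array([[0, 0], [2, 0], [2, 1], [0, 3]], float),
        "T": np.array([[0, 0], [3, 0], [1.5, 2], [1.5, 3]], float),
    }
    table = build_table(models)
    # The scene is the "L" shape rotated by 30 degrees and shifted.
    a = np.deg2rad(30.0)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    scene = models["L"] @ R.T + np.array([5.0, 1.0])
    print(recognise(scene, table))   # -> "L"
    ```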

  17. VIRO 3D: fast three-dimensional full-body scanning for humans and other living objects

    Stein, Norbert; Minge, Bernhard

    1998-03-01

    The development of a family of partial and whole body scanners provides a complete technology for fully three-dimensional and contact-free scans of human bodies or other living objects within seconds. This paper gives insight into the design and the functional principles of the whole body scanner VIRO 3D, which operates on the basis of the laser split-beam method. The arrangement of up to 24 camera/laser combinations, dividing the area into different camera fields, and an all-around sensor configuration travelling in the vertical direction allow a complete 360-degree scan of an object within 6-20 seconds. Due to a special calibration process the different sensors are matched and the measured data are combined. Up to 10 million 3D measuring points with a resolution of approximately 1 mm are processed in all coordinate axes to generate a 3D model. By means of high-performance processors in combination with real-time image processing chips, the image data from almost any number of sensors can be recorded and evaluated synchronously in video real-time. VIRO 3D scanning systems have already been successfully implemented in various applications and will open up new perspectives in other fields, ranging from industry, orthopaedic medicine and plastic surgery to art and photography.

  18. Representing Objects using Global 3D Relational Features for Recognition Tasks

    Mustafa, Wail

    2015-01-01

    In robotic systems, visual interpretations of the environment compose an essential element in a variety of applications, especially those involving manipulation of objects. Interpreting the environment is often done in terms of recognition of objects using machine learning approaches. For user...... representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... to initiate higher-level semantic interpretations of complex scenes. In the object category recognition task, we present a system that is capable of assigning multiple and nested categories for novel objects using a method developed for this purpose. Integrating this method with other multi-label learning...

  19. Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts

    Campbell, N. D. F.; Vogiatzis, G.; Hernández, C.; Cipolla, R.

    2007-01-01

    We propose an algorithm for automatically obtaining a segmentation of a rigid object in a sequence of images that are calibrated for camera pose and intrinsic parameters. Until recently, the best segmentation results have been obtained by interactive methods that require manual labelling of image regions. Our method requires no user input but instead relies on the camera fixating on the object of interest during the sequence. We begin by learning a model of the object's colour, from the imag

  20. Holographic microscopy reconstruction in both object and image half spaces with undistorted 3D grid

    Verrier, Nicolas; Tessier, Gilles; Gross, Michel

    2015-01-01

    We propose a holographic microscopy reconstruction method which propagates the hologram, in the object half space, into the vicinity of the object. The calibration yields reconstructions with an undistorted reconstruction grid, i.e. with orthogonal x, y and z axes and constant pixel pitch. The method is validated with a USAF target imaged by a 60x microscope objective, whose holograms are recorded and reconstructed for different USAF locations along the longitudinal axis: -75 to +75 µm. Since the numerical reconstruction phase mask, the reference phase curvature and the microscope objective (MO) form an afocal device, the reconstruction can be interpreted as occurring equivalently in the object or in the image half space.
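
    Numerically propagating a hologram towards the object plane is commonly done with the angular spectrum method. A minimal sketch of such a propagation step, with hypothetical wavelength, pixel pitch and defocus distance, and without the calibration, reference-curvature and magnification handling the paper describes:

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pitch, z):
        """Propagate a complex field sampled with pixel `pitch` over a distance z
        using the angular spectrum method (evanescent components suppressed)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # propagating components only
        H = np.exp(1j * kz * z) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Hypothetical usage: refocus a 512x512 hologram plane by 75 micrometres.
    rng = np.random.default_rng(3)
    hologram = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))
    refocused = angular_spectrum_propagate(hologram, wavelength=532e-9,
                                           pitch=6.5e-6, z=75e-6)
    print(refocused.shape, np.abs(refocused).mean())
    ```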

  1. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent;

    2014-01-01

    visual appearance. Visual appearance can be problematic due to imaging distortions, but the assumption that local shape structures are sufficient to recognise objects and scenes is largely invalid in practice since objects may have similar shape, but different texture (e.g., grocery packages). In this work...

  2. Artificial Vision in 3D Perspective. For Object Detection On Planes, Using Points Clouds.

    Catalina Alejandra Vázquez Rodriguez

    2014-02-01

    Full Text Available In this paper we describe an artificial vision algorithm for the robot Golem-II+ that analyzes the robot's environment for the detection of planes and objects in the scene through point clouds captured with a Kinect device, estimating the candidate objects, their number, their distance and other characteristics. Subsequently the "clusters" are grouped to identify whether they are located on the same surface, in order to calculate the distance and the slope of the planes relative to the robot. Finally, each object is analyzed separately to determine whether it is possible for the robot to take it and, if a surface is empty, whether objects may be left on it, provided it lies within a feasible distance. False positives such as the walls and the floor are ignored, since for these purposes they are not of interest: it is not possible to place objects on the walls, and the floor is out of range of the robot's arms.
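
    Plane detection in point clouds of this kind is commonly done with a RANSAC-style search for the dominant plane. A generic sketch (not the Golem-II+ code), with a synthetic table-top scene standing in for Kinect data:

    ```python
    import numpy as np

    def plane_from_points(p0, p1, p2):
        """Plane (unit normal n, offset d) through three points: n . x + d = 0."""
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            return None
        n = n / norm
        return n, -n @ p0

    def ransac_plane(points, n_iter=500, thresh=0.01, rng=None):
        """Dominant plane in a point cloud by RANSAC; returns (n, d, inlier mask)."""
        rng = rng if rng is not None else np.random.default_rng()
        best = (None, None, np.zeros(len(points), dtype=bool))
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            model = plane_from_points(*sample)
            if model is None:
                continue
            n, d = model
            inliers = np.abs(points @ n + d) < thresh
            if inliers.sum() > best[2].sum():
                best = (n, d, inliers)
        return best

    # Hypothetical scene: a noisy table top (z ~ 0.8 m) plus an object above it.
    rng = np.random.default_rng(4)
    table = np.column_stack([rng.uniform(0, 1, 800), rng.uniform(0, 1, 800),
                             0.8 + rng.normal(0, 0.003, 800)])
    obj = rng.uniform(0.4, 0.6, size=(200, 3))
    obj[:, 2] = 0.9 + 0.05 * rng.random(200)
    n, d, inliers = ransac_plane(np.vstack([table, obj]))
    print(np.round(n, 2), round(float(d), 2), inliers.sum())
    # -> normal ~ +/-[0, 0, 1], |d| ~ 0.8, roughly 800 inliers (the table points)
    ```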

  3. Controlled experimental study depicting moving objects in view-shared time-resolved 3D MRA.

    Mostardi, Petrice M; Haider, Clifton R; Rossman, Phillip J; Borisch, Eric A; Riederer, Stephen J

    2009-07-01

    Various methods have been used for time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of three-dimensional (3D) time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested using view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897
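    A toy 1D simulation can convey why view order and temporal footprint matter. The sketch below (not the authors' phantom protocol) acquires one k-space line per time step from a moving edge and reconstructs the final frame by view sharing, so the apparent edge position depends on when each part of k-space was last sampled; the object motion, matrix size and the two view orders compared are illustrative.

```python
"""Toy 1D illustration (not the authors' protocol) of how view order
interacts with a view-shared, sliding-window reconstruction when the
object moves during acquisition."""
import numpy as np

N = 128                                        # k-space lines per full frame
t_total = 2 * N                                # simulate two full passes

def object_at(t):                              # a "bolus" edge advancing to the right
    x = np.zeros(N)
    x[: int(N * 0.25 + 0.25 * t)] = 1.0
    return x

def acquire(view_order):
    """Sample one k-space line per time step from the moving object."""
    samples = {}
    for t in range(t_total):
        k = view_order[t % N]
        samples[k] = np.fft.fft(object_at(t))[k]   # most recent sample of each view is kept
    return samples

def reconstruct(samples):
    spec = np.array([samples[k] for k in range(N)])
    return np.fft.ifft(spec).real

sequential = list(range(N))                                  # 0, 1, ..., N-1
centric = sorted(range(N), key=lambda k: min(k, N - k))      # centre-out ordering
for name, order in [("sequential", sequential), ("centric", centric)]:
    img = reconstruct(acquire(order))
    edge = int(np.argmax(np.abs(np.diff(img))))              # apparent edge position
    print(name, "apparent edge:", edge,
          "| true final edge:", int(N * 0.25 + 0.25 * (t_total - 1)))
```

    The two orderings typically report different apparent edge positions, qualitatively echoing the paper's point that when the k-space centre is sampled within the temporal footprint governs the depicted leading edge.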

  5. Nanometre scale 3D nanomechanical imaging of semiconductor structures from few nm to sub-micrometre depths

    Kolosov, Oleg; Dinelli, Franco; Robson, Alexander; Krier, Anthony; Hayne, Manus; Falko, Vladimir; Henini, M

    2015-01-01

    Multilayer structures of active semiconductor devices (1), novel memories (2) and semiconductor interconnects are becoming increasingly three-dimensional (3D) with simultaneous decrease of dimensions down to the few nanometres length scale (3). Ability to test and explore these 3D nanostructures with nanoscale resolution is vital for the optimization of their operation and improving manufacturing processes of new semiconductor devices. While electron and scanning probe microscopes (SPMs) can ...

  6. A Method of Calculating the 3D Coordinates on a Micro Object in a Virtual Micro-Operation System

    2001-01-01

    A simple method for calculating the 3D coordinates of points on a micro object in a multi-camera system is proposed. It simplifies the algorithms used in traditional computer vision systems by eliminating the calculation of the CCD (charge-coupled device) camera parameters and of the relative position between cameras, using solid geometry in the calculation procedures instead of complex matrix computations. The algorithm was used in research on generating a virtual magnified 3D image of a micro object to be operated on in a micro-operation system, and satisfactory results were obtained. The application in a virtual tele-operation system for a dexterous mechanical gripper is under test.
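    The paper's exact solid-geometry formulation is not given in the abstract. One simple construction in the same spirit is to intersect two viewing rays approximately by taking the midpoint of their shortest connecting segment, as sketched below with illustrative camera positions and ray directions.

```python
"""Midpoint triangulation of a 3D point from two viewing rays, a simple
solid-geometry construction (the paper's exact formulation is not given in
the abstract; the camera poses below are illustrative)."""
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between rays
    o1 + s*d1 and o2 + t*d2 (the directions need not be unit length)."""
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    # Solve for s, t minimising |(o1 + s d1) - (o2 + t d2)|^2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Toy usage: two cameras viewing a known micro-scale point.
target = np.array([0.01, 0.02, 0.10])              # 3D point (metres)
cam1, cam2 = np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])
ray1, ray2 = target - cam1, target - cam2          # ideal, noise-free viewing rays
print(triangulate_midpoint(cam1, ray1, cam2, ray2))   # ≈ [0.01 0.02 0.10]
```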

  7. Tracking of Multiple objects Using 3D Scatter Plot Reconstructed by Linear Stereo Vision

    Safaa Moqqaddem

    2014-10-01

    Full Text Available This paper presents a new method for tracking objects using stereo vision with linear cameras. Edge points extracted from the stereo linear images are first matched to reconstruct points that represent the objects in the scene. To detect the objects, a clustering process based on spectral analysis is then applied to the reconstructed points. The obtained clusters are finally tracked through their centres of gravity using a Kalman filter and a nearest-neighbour data association algorithm. Experimental results using real stereo linear images demonstrate the effectiveness of the proposed method for obstacle tracking in front of a vehicle.
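    The sketch below shows a minimal constant-velocity Kalman filter with greedy nearest-neighbour data association, in the spirit of the tracking stage described above; the noise covariances, gating distance and toy centroid trajectories are assumptions, not values from the paper.

```python
"""Minimal constant-velocity Kalman tracker with nearest-neighbour data
association; parameters and trajectories are illustrative."""
import numpy as np

class Track:
    def __init__(self, xy, dt=1.0, q=1e-2, r=1e-1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])      # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = q * np.eye(4), r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour assignment of detections to predicted tracks."""
    preds = [t.predict() for t in tracks]
    unused = set(range(len(detections)))
    for ti, p in enumerate(preds):
        if not unused:
            break
        di = min(unused, key=lambda i: np.linalg.norm(detections[i] - p))
        if np.linalg.norm(detections[di] - p) < gate:
            tracks[ti].update(detections[di])
            unused.discard(di)

# Toy usage: two cluster centroids moving along straight lines.
tracks = [Track((0.0, 0.0)), Track((10.0, 0.0))]
for step in range(1, 20):
    detections = np.array([[0.5 * step, 0.1 * step],
                           [10.0 - 0.3 * step, 0.2 * step]])
    associate(tracks, detections)
print([np.round(t.x[:2], 1) for t in tracks])   # estimates follow the two centroids
```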

  8. Robust 3D Objects Localization using Hierarchical Belief Propagation in Real World Environment

    Fakhfakh, N.; Khoudour, L.; El-Koursi, Em; BRUYELLE, JL; Dufaux, A.; Jacot, J.

    2010-01-01

    Technological solutions for obstacle detection have been proposed to prevent accidents in transport-safety applications. To overcome the limitations of these technologies, an obstacle detection system using stereo cameras is proposed to detect and localize multiple objects at level crossings. A background subtraction module is first applied using the Color Independent Component Analysis (CICA) technique, which has proved its performance against other well-known object d...

  9. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    Frary, R.; Louie, J. [UNR]; Pullammanappallil, S. [Optim]; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  10. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR that became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced into the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' using aerial colour images of an urban area of the town of Engen in Germany.
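    The abstract does not detail the neuro-fuzzy model, so the sketch below only illustrates the general idea: one Gaussian membership function per class and feature, a product t-norm rule per class, and membership parameters adapted from labelled samples (standing in for the neural learning step). The three synthetic features loosely play the role of structural, textural and spectral cues.

```python
"""Minimal sketch of a fuzzy classifier with data-adapted memberships,
standing in for the neuro-fuzzy recognition idea; the paper's actual model
and features are not given in the abstract, and the data here is synthetic."""
import numpy as np

class FuzzyClassifier:
    """One Gaussian membership function per (class, feature); each class rule is
    the product t-norm of its memberships. Fitting the means and widths from
    labelled samples plays the role of the adaptive (neural) step."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.sigma = np.array([X[y == c].std(axis=0) + 1e-6 for c in self.classes])
        return self

    def rule_strengths(self, X):
        # Per-feature memberships, combined with a product t-norm per class.
        m = np.exp(-0.5 * ((X[:, None, :] - self.mu) / self.sigma) ** 2)
        return m.prod(axis=2)

    def predict(self, X):
        return self.classes[self.rule_strengths(X).argmax(axis=1)]

# Toy usage: three "object classes" described by three features each.
rng = np.random.default_rng(1)
centres = np.array([[0.8, 0.2, 0.5], [0.3, 0.7, 0.9], [0.1, 0.4, 0.2]])
X = np.vstack([c + 0.05 * rng.standard_normal((50, 3)) for c in centres])
y = np.repeat([0, 1, 2], 50)
clf = FuzzyClassifier().fit(X, y)
print((clf.predict(X) == y).mean())     # should be close to 1.0 on this toy data
```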