WorldWideScience

Sample records for 3d object depth

  1. Combining depth and color data for 3D object recognition

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  2. 3D Object Recognition and Facial Identification Using Time-averaged Single-views from Time-of-flight 3D Depth-Camera

    Ding, Hui; Moutarde, Fabien; Shaiek, Ayet

    2010-01-01

    We report here on feasibility evaluation experiments for 3D object recognition and person facial identification from single-view on real depth images acquired with an “off-the-shelf” 3D time-of-flight depth camera. Our methodology is the following: for each person or object, we perform 2 independent recordings, one used for learning and the other one for test purposes. For each recorded frame, a 3D-mesh is computed by simple triangulation from the filtered depth imag...

  3. View-based 3-D object retrieval

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  4. Advanced 3D Object Identification System Project

    National Aeronautics and Space Administration — Optra will build an Advanced 3D Object Identification System utilizing three or more high resolution imagers spaced around a launch platform. Data from each imager...

  5. Lifting Object Detection Datasets into 3D.

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  6. 3D-PRINTING OF BUILD OBJECTS

    SAVYTSKYI M. V.

    2016-03-01

    Raising of the problem. Today, in all spheres of our life we can observe a permanent search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand, more effective in terms of conserving the exhaustible resources of our planet and have minimal impact on the environment, and on the other hand, ensure a higher quality of the final product. Construction is no exception. One promising new technology is the 3D printing of individual structures and buildings in general. 3D printing is the process of recreating a real object from a 3D model. Unlike a conventional printer, which prints information on a sheet of paper, a 3D printer allows three-dimensional information to be reproduced, i.e. it creates physical objects. Currently, 3D printers find application in many areas of production: machine-building elements, a variety of mock-ups, interior elements, and various other items. But because this technology is fairly new, it requires the creation of detailed and accurate process technologies, efficient equipment and materials, and the development of a common vocabulary and regulatory framework in this field. Research aim. The analysis of existing methods of creating physical objects using 3D printing and the improvement of technology and equipment for the printing of buildings and structures. Conclusion. Building 3D printers are a new generation of equipment for the construction of buildings, structures, and structural elements. The variety of building-printing techniques opens up a wide range of opportunities in the construction industry. At this stage, printer designs allow low-rise buildings of different configurations to be created with different mortars. The scientific novelty of this work is to develop proposals to improve the thermal insulation properties of 3D-printed building objects and of the technological equipment. The list of key terms and notions of construction

  7. 3D TV - looking forward in depth

    Direct viewing of remote handling tasks in decommissioning, operation, inspection and repair of nuclear facilities is constrained by the need to contain the workspace and to provide adequate shielding for operators and other staff. Improvements in camera design and display technology, and an understanding of radiation tolerance and human factors, have been brought together at AEA Technology to provide a range of stereoscopic or 3D TV viewing systems. These allow operators to assess conditions accurately in a remote environment, and can be used either to observe or inspect, and to help in completing complex manipulations and tool deployment. (author)

  8. Faint object 3D spectroscopy with PMAS

    Roth, Martin M.; Becker, Thomas; Kelz, Andreas; Bohm, Petra

    2004-09-01

    PMAS is a fiber-coupled lens array type of integral field spectrograph, which was commissioned at the Calar Alto 3.5m Telescope in May 2001. The optical layout of the instrument was chosen such as to provide a large wavelength coverage, and good transmission from 0.35 to 1 μm. One of the major objectives of the PMAS development has been to perform 3D spectrophotometry, taking advantage of the contiguous array of spatial elements over the 2-dimensional field-of-view of the integral field unit. With science results obtained during the first two years of operation, we illustrate that 3D spectroscopy is an ideal tool for faint object spectrophotometry.

  9. Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection

    This work proposes a novel approach to segmenting randomly stacked objects in unstructured 3D point clouds, which are acquired by a random-speckle 3D imaging system for the purpose of automated object detection and reconstruction. An innovative algorithm is proposed; it is based on a novel concept of 3D watershed segmentation and the strategies for resolving over-segmentation and under-segmentation problems. Acquired 3D point clouds are first transformed into a corresponding orthogonally projected depth map along the optical imaging axis of the 3D sensor. A 3D watershed algorithm based on the process of distance transformation is then performed to detect the boundary, called the edge dam, between stacked objects and thereby to segment point clouds individually belonging to two stacked objects. Most importantly, an object-matching algorithm is developed to solve the over- and under-segmentation problems that may arise during the watershed segmentation. The feasibility and effectiveness of the method are confirmed experimentally. The results reveal that the proposed method is a fast and effective scheme for the detection and reconstruction of a 3D object in a random stack of such objects. In the experiments, the precision of the segmentation exceeds 95% and the recall exceeds 80%. (paper)
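
    A minimal sketch of the projection-plus-watershed idea described above, assuming NumPy, SciPy and scikit-image and a point cloud already expressed in the sensor frame; the edge-dam refinement and the object-matching step that resolves over- and under-segmentation are omitted, and the grid resolution is a hypothetical parameter.

        # Sketch: orthographic depth-map projection followed by marker-based watershed.
        import numpy as np
        from scipy import ndimage
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def segment_stacked_objects(points, res=0.002):
            """points: (N, 3) array in the sensor frame; res: grid cell size in metres."""
            # Project the cloud onto an orthogonal depth map along the optical (z) axis.
            xy = ((points[:, :2] - points[:, :2].min(axis=0)) / res).astype(int)
            h, w = xy[:, 1].max() + 1, xy[:, 0].max() + 1
            depth = np.full((h, w), np.nan)
            for (x, y), z in zip(xy, points[:, 2]):
                if np.isnan(depth[y, x]) or z < depth[y, x]:   # keep the closest surface point per cell
                    depth[y, x] = z
            mask = ~np.isnan(depth)
            # The distance transform provides basins; dams form where grown basins meet.
            dist = ndimage.distance_transform_edt(mask)
            peaks = peak_local_max(dist, min_distance=10, labels=mask.astype(int))
            markers = np.zeros(dist.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            labels = watershed(-dist, markers, mask=mask)
            return depth, labels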

  10. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We will report our recent developments in DFD (Depth-fused 3D) display and arc 3D display, both of which have smooth movement parallax. Firstly, fatigueless DFD display, composed of only two layered displays with a gap, has continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called "Edge-based DFD display" and "Deep DFD display", have been proposed in order to solve two severe problems of viewing angle and perceived depth limitations. Edge-based DFD display, layered by the original 2D image and its edge part with a gap, can expand the DFD viewing angle limitation both in 2D and 3D perception. Deep DFD display can enlarge the DFD image depth by modulating spatial frequencies of front and rear images. Secondly, Arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. Curved Arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
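
    The luminance-ratio mechanism described above can be illustrated with a tiny sketch: the same image is shown on the front and rear panels and only the split of luminance between the layers is varied with the target depth. The linear ratio rule below is an illustrative assumption, not the authors' calibration.

        import numpy as np

        def dfd_layers(image, depth):
            """Split a luminance image between the two panels of a DFD display.

            image: 2D array of luminance in [0, 1]
            depth: 2D array in [0, 1]; 0 = at the front panel, 1 = at the rear panel
            Assumes perceived depth between the panels follows the fraction of
            luminance assigned to the rear panel (linear ratio rule).
            """
            rear = image * depth
            front = image * (1.0 - depth)
            return front, rear

        # Example: a uniform test card placed halfway between the panels.
        img = np.full((4, 4), 0.8)
        front, rear = dfd_layers(img, np.full((4, 4), 0.5))
        assert np.allclose(front + rear, img)   # total luminance is preserved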

  11. Precise Depth Image Based Real-Time 3D Difference Detection

    Kahn, Svenja

    2014-01-01

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the ...

  12. PMAS - Faint Object 3D Spectrophotometry

    Roth, M. M.; Becker, T.; Kelz, A.

    2002-01-01

    I will describe PMAS (Potsdam Multiaperture Spectrophotometer), which was commissioned at the Calar Alto Observatory 3.5m Telescope on May 28-31, 2001. PMAS is a dedicated, highly efficient UV-visual integral field spectrograph which is optimized for the spectrophotometry of faint point sources, typically superimposed on a bright background. PMAS is ideally suited for the study of resolved stars in local group galaxies. I will present results of our preliminary work with MPFS at the Russian 6m Telescope in Selentchuk, involving the development of new 3D data reduction software, and observations of faint planetary nebulae in the bulge of M31 for the determination of individual chemical abundances of these objects. Using this data, it will be demonstrated that integral field spectroscopy provides superior techniques for background subtraction, avoiding the otherwise inevitable systematic errors of conventional slit spectroscopy. The results will be put in perspective of the study of resolved stellar populations in nearby galaxies with a new generation of Extremely Large Telescopes.

  13. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  14. Combining different modalities for 3D imaging of biological objects

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown here, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper.

  15. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  16. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  17. Object detection using categorised 3D edges

    Kiforenko, Lilita; Buch, Anders Glent; Bodenhagen, Leon; Krüger, Norbert

    2015-01-01

    In this paper we present an object detection method that uses edge categorisation in combination with a local multi-modal histogram descriptor, all based on RGB-D data. Our target application is robust detection and pose estimation of known objects. We propose to apply a recently introduced edge categorisation algorithm for describing objects in terms of their different edge types. Relying on edge information allows our system to deal with objects with little or no texture or surface variation. We show that edge categorisation improves matching performance due to the higher level of discrimination, which is made possible by the explicit use of edge categories in the feature descriptor. We quantitatively compare our approach with the state-of-the-art template-based Linemod method, which also provides an effective way of dealing with texture-less objects; tests were performed on our own object dataset...

  18. Advanced 3D Object Identification System Project

    National Aeronautics and Space Administration — During the Phase I effort, OPTRA developed object detection, tracking, and identification algorithms and successfully tested these algorithms on computer-generated...

  19. Depth enhancement of S3D content and the psychological effects

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies have recently been widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content by the use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction or a forward or backward shift in the range of maximum parallactic angles in the cross and uncross directions (depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation and the heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions toward the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the cross/uncross directions.

  20. 3D hand tracking using Kalman filter in depth space

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and employing a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using a Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with the visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
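
    A minimal sketch of the tracking stage described above, assuming a constant-velocity state model and that the detection stage already supplies a measured 3D hand position per frame; the frame rate and noise covariances are hypothetical values.

        import numpy as np

        # State x = [px, py, pz, vx, vy, vz]; measurement z = [px, py, pz] from the depth sensor.
        dt = 1.0 / 30.0                                    # assumed 30 fps depth camera
        F = np.eye(6); F[:3, 3:] = dt * np.eye(3)          # constant-velocity transition
        H = np.hstack([np.eye(3), np.zeros((3, 3))])       # only position is observed
        Q = 1e-3 * np.eye(6)                               # process noise (assumed)
        R = 4e-4 * np.eye(3)                               # measurement noise, ~2 cm std (assumed)

        def kalman_step(x, P, z):
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the detected hand position z (metres)
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(6) - K @ H) @ P
            return x, P

        x, P = np.zeros(6), np.eye(6)
        for z in [np.array([0.10, 0.00, 0.80]), np.array([0.12, 0.01, 0.79])]:
            x, P = kalman_step(x, P, z)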

  1. 3D Image Synthesis for B-Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curve, surface and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space to a 3D image in 3D discrete space.

  2. Depth-based Multi-View 3D Video Coding

    Zamarin, Marco

    ... improved, both in terms of objective and visual evaluations. Depth coding based on standard H.264/AVC is explored for multi-view plus depth image coding. A single depth map is used to disparity-compensate multiple views and allows more efficient coding than H.264 MVC at low bit rates. Lossless coding of ... a number of standard solutions for lossless coding. New approaches for distributed video-plus-depth coding are also presented in this thesis. Motion correlation between the two signals is exploited at the decoder side to improve the performance of the side information generation algorithm. In addition ... on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode...

  3. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (from version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  4. Several Strategies on 3D Modeling of Manmade Objects

    SHAO Zhenfeng; LI Deren; CHENG Qimin

    2004-01-01

    Several different strategies of 3D modeling are adopted for different kinds of manmade objects. Firstly, for manmade objects with regular structure, if 2D information is available and elevation information can be obtained conveniently, 3D modeling can be carried out directly. Secondly, for manmade objects with comparatively complicated structure for which a related stereo image pair can be acquired, we complete their 3D modeling on the basis of a topology-based 3D model by integrating automatic and semi-automatic object extraction. Thirdly, for the most complicated objects, whose geometrical information cannot be obtained completely from stereo image pairs, we turn to a topological 3D model based on CAD.

  5. Automation of 3D micro object handling process

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due to the small absolute scale. In this article, the results of the pick-and-place operations of three different 3D micro objects were investigated. This study shows that depending on the correct gripping t...

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  7. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
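
    A sketch of the plane-segment fitting that such piecewise-planar depth correction relies on, written as a plain RANSAC plane fit over 3D points back-projected from the depth map; the MRF labeling and graph-cut completion stages are not shown, and the inlier tolerance is a hypothetical stand-in for the sensor's depth-dependent noise profile.

        import numpy as np

        def ransac_plane(points, n_iters=200, tol=0.02, rng=np.random.default_rng(0)):
            """Robustly fit a plane n.p + d = 0 to an (N, 3) array of 3D points."""
            best = np.zeros(len(points), dtype=bool)
            for _ in range(n_iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                if np.linalg.norm(n) < 1e-9:
                    continue                                # degenerate sample
                n = n / np.linalg.norm(n)
                d = -n @ sample[0]
                inliers = np.abs(points @ n + d) < tol      # tol ~ per-point reliability threshold
                if inliers.sum() > best.sum():
                    best = inliers
            # Refine with a least-squares (SVD) fit on the inlier set.
            p = points[best]
            centroid = p.mean(axis=0)
            n = np.linalg.svd(p - centroid)[2][-1]
            return n, -n @ centroid, best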

  8. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. The passive methods use information contained in the images, and the active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  9. Efficient and high speed depth-based 2D to 3D video conversion

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic to stereoscopic video conversion by using the y-axis coordinates of the bottom-most pixels of foreground objects. This code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not comparable to real time, is faster than the other available 2D to 3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high speed. It is an automatic conversion scheme, hence it directly gives the 3D video output without human intervention and, with the above-mentioned features, becomes an ideal choice for efficient monoscopic to stereoscopic video conversion.
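
    A sketch of the bottom-pixel depth cue described above, assuming per-object foreground masks are already available from some segmentation step; the linear mapping from row index to depth is an illustrative choice.

        import numpy as np

        def depth_from_bottom_pixels(masks, shape):
            """Assign each object a depth from the y coordinate of its bottom-most pixel.

            masks: list of boolean (H, W) foreground masks, one per object
            shape: (H, W) of the frame
            Returns an (H, W) depth map in [0, 1]; objects whose lowest pixel sits nearer
            the bottom of the frame are treated as closer to the camera (depth -> 0).
            """
            h, w = shape
            depth = np.ones((h, w))                         # background at the far plane
            for m in masks:
                ys = np.nonzero(m)[0]
                if ys.size == 0:
                    continue
                bottom_y = ys.max()                         # bottom-most foreground row
                depth[m] = 1.0 - bottom_y / float(h - 1)    # linear mapping (assumption)
            return depth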

  10. Object Recognition Using a 3D RFID System

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done by vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object, as well as other tags, exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  11. Monocular model-based 3D tracking of rigid objects

    Lepetit, Vincent

    2014-01-01

    Many applications require tracking complex 3D objects. These include visual servoing of robotic arms on specific target objects, Augmented Reality systems that require real-time registration of the object to be augmented, and head tracking systems that sophisticated interfaces can use. Computer vision offers solutions that are cheap, practical and non-invasive. "Monocular Model-Based 3D Tracking of Rigid Objects" reviews the different techniques and approaches that have been developed by industry and research. First, important mathematical tools are introduced: camera representation, robust e

  12. Embedding objects during 3D printing to add new functionalities.

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  13. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    L. Gao

    2015-08-01

    With the development of the economy, the fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, spatial database updating, etc. 3D GIS has attracted more and more attention from academia, industry and government with the increase of requirements for interoperability and integration of different sources of data. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects created by integrating 2D road polygon data with LiDAR point clouds and other height information, such as spot height data, in Hong Kong Island. The quality of the created 3D road polygon data set is evaluated by vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error and completeness error, and the final results are presented.

  14. A Primitive-Based 3D Object Recognition System

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected, to begin with, are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based systems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  15. Semantic 3D object maps for everyday robot manipulation

    Rusu, Radu Bogdan

    2013-01-01

    The book written by Dr. Radu B. Rusu presents a detailed description of 3D Semantic Mapping in the context of mobile robot manipulation. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models that include the objects present in the world, together with their position, form, and other semantic aspects, as well as interpretations of these objects with respect to the robot tasks.   The book proposes novel 3D feature representations called Point Feature Histograms (PFH), as well as frameworks for the acquisition and processing of Semantic 3D Object Maps with contributions to robust registration, fast segmentation into regions, and reliable object detection, categorization, and reconstruction. These contributions have been fully implemented and empirically evaluated on different robotic systems, and have been the original kernel to the widely successful open-source project the Point Cloud Library (PCL) -- see http://poi...

  16. Automation of 3D micro object handling process

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due to ...

  17. Depth map coding using residual segmentation for 3D video system

    Lee, Cheon; Ho, Yo-Sung

    2013-06-01

    Advanced 3D video systems employ multi-view video-plus-depth data to support the free-viewpoint navigation and comfortable 3D viewing; thus efficient depth map coding becomes an important issue. Unlike the color image, the depth map has a property that depth values of the inner part of an object are monotonic, but those of object boundaries change abruptly. Therefore, residual data generated by prediction errors around object boundaries consume many bits in depth map coding. Representing them with segment data can be better than the use of the conventional transformation around the boundary regions. In this paper, we propose an efficient depth map coding method using a residual segmentation instead of using transformation. The proposed residual segmentation divides residual data into two regions with a segment map and two mean values. If the encoder selects the proposed method in terms of rates, two quantized mean values and an index of the segment map are transmitted. Simulation results show significant gains of up to 10 dB compared to the state-of-the-art coders, such as JPEG2000 and H.264/AVC.
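
    A minimal sketch of the residual-segmentation idea: a boundary-region residual block is represented by a binary segment map plus two region means instead of transform coefficients. The split by thresholding at the block mean and the absence of any rate-distortion mode decision are simplifying assumptions.

        import numpy as np

        def encode_residual_block(residual):
            """Represent a residual block by a binary segment map and two quantized means."""
            seg = residual >= residual.mean()               # two-region split (assumed rule)
            mean_hi = residual[seg].mean() if seg.any() else 0.0
            mean_lo = residual[~seg].mean() if (~seg).any() else 0.0
            return seg, round(mean_hi), round(mean_lo)

        def decode_residual_block(seg, mean_hi, mean_lo):
            return np.where(seg, mean_hi, mean_lo)

        blk = np.array([[0, 0, 9, 9],
                        [0, 0, 9, 9],
                        [0, 0, 9, 9],
                        [0, 1, 8, 9]], dtype=float)
        print(decode_residual_block(*encode_residual_block(blk)))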

  18. Modeling real conditions of 'Ukrytie' object in 3D measurement

    The article describes a technology for creating, on the basis of design software (AutoCAD) and computer graphics and animation packages (3D Studio, 3DS MAX), a 3D model of the geometrical parameters of the current condition of the building structures, technological equipment, fuel-containing materials, concrete, and water of the ruined Unit 4, the 'Ukryttia' object, of the Chernobyl NPP. The model built using the above technology will later be applied as a basis for automating the design and computer modeling of processes at the 'Ukryttia' object.

  19. Algorithms for Haptic Rendering of 3D Objects

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  20. Tracking objects in 3D using Stereo Vision

    Endresen, Kai Hugo Hustoft

    2010-01-01

    This report describes a stereo vision system to be used on a mobile robot. The system is able to triangulate the positions of cylindrical and spherical objects in a 3D environment. Triangulation is done in real-time by matching regions in two images, and calculating the disparities between them.

  1. Radiographic Imagery of a Variable Density 3D Object

    Justin Stottlemyer

    2010-01-01

    The purpose of this project is to develop a mathematical model to study 4D (three spatial dimensions plus density) shapes using 3D projections. In the model, the projection is represented as a function that can be applied to data produced by a radiation detector. The projection is visualized as a three-dimensional graph where the x and y coordinates represent position and the z coordinate corresponds to the object's density and thickness. Contour plots of such 3D graphs can be used to construct traditional 2D radiographic images.

  2. 3-D Object Recognition from Point Cloud Data

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case

  3. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by using the ToF (Time of Flight) method and intensity data by using the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains texture data with considerable noise. We used the intensity (texture) maps for extracting tiepoints and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data at each rotation. In the second step, we estimated the 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and the depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
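
    A sketch of the 3D-3D similarity estimation step (scale, rotation, translation) between two point clouds from matched 3D tiepoints, using the standard closed-form Umeyama/Procrustes solution; tiepoint extraction from the intensity maps and the ray-tracing mosaic step are not shown.

        import numpy as np

        def similarity_3d(src, dst):
            """Estimate s, R, t such that dst ~ s * R @ src + t from (N, 3) matched tiepoints."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            xs, xd = src - mu_s, dst - mu_d
            cov = xd.T @ xs / len(src)
            U, D, Vt = np.linalg.svd(cov)
            S = np.eye(3)
            if np.linalg.det(U) * np.linalg.det(Vt) < 0:
                S[2, 2] = -1.0                              # enforce a proper rotation
            R = U @ S @ Vt
            s = np.trace(np.diag(D) @ S) * len(src) / (xs ** 2).sum()
            t = mu_d - s * R @ mu_s
            return s, R, t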

  4. Human efficiency for recognizing 3-D objects in luminance noise.

    Tjan, B S; Braje, W L; Legge, G E; Kersten, D

    1995-11-01

    The purpose of this study was to establish how efficiently humans use visual information to recognize simple 3-D objects. The stimuli were computer-rendered images of four simple 3-D objects--wedge, cone, cylinder, and pyramid--each rendered from 8 randomly chosen viewing positions as shaded objects, line drawings, or silhouettes. The objects were presented in static, 2-D Gaussian luminance noise. The observer's task was to indicate which of the four objects had been presented. We obtained human contrast thresholds for recognition, and compared these to an ideal observer's thresholds to obtain efficiencies. In two auxiliary experiments, we measured efficiencies for object detection and letter recognition. Our results showed that human object-recognition efficiency is low (3-8%) when compared to efficiencies reported for some other visual-information processing tasks. The low efficiency means that human recognition performance is limited primarily by factors intrinsic to the observer rather than the information content of the stimuli. We found three factors that play a large role in accounting for low object-recognition efficiency: stimulus size, spatial uncertainty, and detection efficiency. Four other factors play a smaller role in limiting object-recognition efficiency: observers' internal noise, stimulus rendering condition, stimulus familiarity, and categorization across views. PMID:8533342
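
    A worked sketch of how recognition efficiency is typically computed from contrast thresholds, taking efficiency as the ratio of ideal-observer to human contrast energy at threshold; the numbers below are illustrative, not values from the study.

        # Efficiency = (ideal threshold contrast / human threshold contrast) ** 2.
        c_ideal = 0.02     # ideal observer's threshold contrast (hypothetical)
        c_human = 0.10     # human threshold contrast on the same task (hypothetical)
        efficiency = (c_ideal / c_human) ** 2
        print(f"efficiency = {efficiency:.1%}")   # 4.0%, within the 3-8% range reported above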

  5. Surface reconstruction of 3D objects in computerized tomography

    This paper deals with the problem of surface reconstruction of 3D objects from their boundaries in a family of slice images in computerized tomography (CT). Its mathematical formulation is first given, in which it is considered as a problem of functional minimization. Next, the corresponding Euler partial differential equation is derived and it is then solved by the finite difference method. Numerical solution can be found by using the iterative method

  6. Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge

    Ben Hmida, Helmi; Cruz, Christophe; Boochs, Frank; Nicolle, Christophe

    2013-01-01

    This paper presents a knowledge-based detection of objects approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins aiming at combining geometrical analysis of 3D point clouds and specialist's knowledge. Here, we share our experience regarding the creation of 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. Thi...

  7. Manipulating 3D Objects with Gaze and Hand Gestures

    Koskenranta, Olli

    2012-01-01

    Gesture-based interaction in consumer electronics is becoming more popular these days, for example, when playing games with Microsoft Kinect, PlayStation 3 Move and Nintendo Wii. The objective of this thesis was to find out how to use gaze and hand gestures for manipulating objects in a 3D space for the best user experience possible. This thesis was made at the University of Oulu, Center for Internet Excellence and was a part of the research project “Chiru”. The goal was to research and p...

  8. Response of 3D Free Rigid Objects under Seismic Excitations

    Yanheng, Li

    2008-01-01

    Previous studies of precariously balanced structures in seismically active regions, which provide important information for aseismatic engineering and theoretical seismology, are almost all founded on an oversimplified assumption. According to that assumption, any practical 3-dimensional structure with special symmetry can be regarded as a 2-dimensional finite object in light of the corresponding symmetry. Thus the complex and troublesome problem of 3D rotation can, in mathematics, be reduced to a tractable one of 1D rotation, but this gives a distorted description of the real motion in physics. To obtain the actual evolution of precariously balanced structures under various levels of ground acceleration, we should address ourselves to a 3D calculation. In this study, the responses of a cylinder under a set of half- and full-sine-wave excitations with different frequencies related to seismic ground motion are investigated, drawing on established work from a number of mechanicians. A computer program is also developed possibly to study...

  9. Weighted Unsupervised Learning for 3D Object Detection

    Kamran Kowsari

    2016-01-01

    This paper introduces a novel weighted unsupervised learning method for object detection using an RGB-D camera. This technique is feasible for detecting moving objects in noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object as a separate cluster using weighted clustering. In a preprocessing step, the algorithm calculates the 3D position (X, Y, Z) and RGB color of each data point, and then it calculates each data point's normal vector using the point's neighbors. After preprocessing, our algorithm calculates k weights for each data point; each weight indicates cluster membership, resulting in the clustered objects of the scene.
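
    A sketch of the normal-estimation preprocessing step mentioned above, computing each point's normal from the covariance of its k nearest neighbours; the weighted k-membership clustering itself is not reproduced, and k is a hypothetical parameter.

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=12):
            """Estimate a unit normal per point from its k-nearest-neighbour covariance.

            points: (N, 3) array of XYZ positions from an RGB-D frame (colour unused here).
            """
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points, dtype=float)
            for i, nb in enumerate(idx):
                nbrs = points[nb]
                cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
                w, v = np.linalg.eigh(cov)
                normals[i] = v[:, 0]        # eigenvector of the smallest eigenvalue
            return normals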

  10. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
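
    A simplified stand-in for the FFT evaluation described above: gradient-phase fields of the image and of a rendered model-edge template are compared through unit complex exponentials and cross-correlated over all positions with the FFT. Only a single model orientation is handled here, and the gradient-magnitude weighting is an assumption.

        import numpy as np

        def phase_match_surface(image, model_edges):
            """Return a match surface over all positions for one model orientation.

            image, model_edges: 2D float arrays of equal shape; model_edges is nonzero
            on the projected model edges.
            """
            gy, gx = np.gradient(image.astype(float))
            img_field = np.exp(1j * np.arctan2(gy, gx)) * np.hypot(gx, gy)
            my, mx = np.gradient(model_edges.astype(float))
            mdl_field = np.exp(1j * np.arctan2(my, mx)) * np.hypot(mx, my)
            # Correlation theorem: multiply one spectrum by the conjugate of the other.
            corr = np.fft.ifft2(np.fft.fft2(img_field) * np.conj(np.fft.fft2(mdl_field)))
            return np.abs(corr)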

  11. A new method to create depth information based on lighting analysis for 2D/3D conversion

    Han, Hyunho; Lee, Gangseong; Lee, Jongyong; Kim, Jinsoo; Lee, Sanghun

    2013-01-01

    A new method for creating depth information for 2D/3D conversion is proposed. The distance between objects is determined from the distances between the objects and the light source position, which is estimated by analysis of the image. The estimated lighting value is used to normalize the image. A threshold value is determined by a weighted operation between the original image and the normalized image. By applying the threshold value to the original image, the background area is removed. Depth information for the area of interest is calculated from the lighting changes. The final 3D images converted with the proposed method are used to verify its effectiveness.

  12. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  13. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  14. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has great potential both as a means to enhance the impact of the S3D visual information and as a way to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
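
    As a sketch of the two cues studied, the snippet below applies an inverse-distance gain and a distance-dependent low-pass filter to a sound; the cutoff law and filter order are assumptions, not the article's sound-design settings:

        import numpy as np
        from scipy.signal import butter, lfilter

        def apply_distance_cues(audio, sample_rate, distance_m, ref_distance_m=1.0):
            d = max(distance_m, ref_distance_m)
            # Overall volume attenuation: roughly -6 dB per doubling of distance.
            gain = ref_distance_m / d
            # High-end frequency loss: cutoff falls as the source moves away (assumed law).
            cutoff_hz = max(min(20000.0 * ref_distance_m / d, 0.45 * sample_rate), 200.0)
            b, a = butter(2, cutoff_hz / (sample_rate / 2.0), btype="low")
            return gain * lfilter(b, a, audio)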

  15. Performance Evaluating of some Methods in 3D Depth Reconstruction from a Single Image

    Wen, Wei

    2009-01-01

    We studied the problem of 3D reconstruction from a single image. 3D reconstruction is one of the basic problems in computer vision and is usually achieved using two or multiple images of a scene. However, recent research in computer vision has made it possible to recover 3D information even from a single image. The methods used in such reconstructions are based on depth information, projection geometry, image content, human psychology and so on. Each met...

  16. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm increase with the data size. This challenges its practical implementation even on current-generation high-performance computing systems, so a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements in terms of memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration using hybrid MPI+OpenMP programming techniques. We introduce the concept of flexi-depth iterations, in which data are depth-migrated in a parallel imaging space using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements for storage, I/O and inter-node communication, making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
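
    A minimal sketch of the flexi-depth planning step described above: split the depth axis into as many slabs as the per-node memory allows at runtime. The function, its arguments and the reserve fraction are assumptions, not the authors' code:

        import math

        def plan_depth_iterations(n_depth_samples, bytes_per_depth_slice,
                                  node_memory_bytes, reserve_fraction=0.2):
            usable = node_memory_bytes * (1.0 - reserve_fraction)       # leave room for tables/buffers
            per_iter = max(1, int(usable // bytes_per_depth_slice))     # depth slices per iteration
            n_iter = math.ceil(n_depth_samples / per_iter)
            # Each tuple is a (z_start, z_end) slab migrated in one iteration.
            return [(i * per_iter, min((i + 1) * per_iter, n_depth_samples))
                    for i in range(n_iter)]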

  17. A Prototypical 3D Graphical Visualizer for Object-Oriented Systems

    1996-01-01

    This paper describes a framework for visualizing object-oriented systems within a 3D interactive environment. The 3D visualizer represents the structure of a program as a Cylinder Net that simultaneously specifies two relationships between objects within 3D virtual space. Additionally, it represents further relationships on demand when objects are moved into local focus. The 3D visualizer is implemented using a 3D graphics toolkit, TOAST, that provides 3D widgets and 3D graphics to ease the programming task of 3D visualization.

  18. Incipit 3D documentations projects: some examples and objectives

    Mañana-Borrazás, Patricia

    2013-01-01

    Presentation of the author and of the Incipit and its orientation regarding the use of new technologies applied to 3D documentation of heritage, with special attention to the challenges posed by this type of technology, given at the “Virtual Heritage School on Digital Cultural Heritage 2013 (3D documentation, knowledge repositories and creative industries)”, Nicosia, 30 May 2013.

  19. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D-object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions at different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  20. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by exchanging intersection images that contain internal structure. Through experiments with CT scan images, the proposed met...

  1. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  2. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  3. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  4. Depth-Based Object Tracking Using a Robust Gaussian Filter

    Issac, Jan; Wüthrich, Manuel; Cifuentes, Cristina Garcia; Bohg, Jeannette; Trimpe, Sebastian; Schaal, Stefan

    2016-01-01

    We consider the problem of model-based 3D-tracking of objects given dense depth images as input. Two difficulties preclude the application of a standard Gaussian filter to this problem. First of all, depth sensors are characterized by fat-tailed measurement noise. To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand. Thereby, we avoid using heuristic outlier detection methods that simply reject measurements i...

  5. Depth and Intensity Gabor Features Based 3D Face Recognition Using Symbolic LDA and AdaBoost

    P. S. Hiremath

    2013-11-01

    Full Text Available In this paper, the objective is to investigate what contributions depth and intensity information make to the solution of the face recognition problem when expression and pose variations are taken into account, and a novel system is proposed for combining depth and intensity information in order to improve face recognition performance. In the proposed approach, local features based on Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selection scheme embedded in symbolic linear discriminant analysis (Symbolic LDA) with AdaBoost learning is proposed to select the most effective and robust features and to construct a strong classifier. Experiments are performed on three datasets, namely, the Texas 3D face database, the Bosphorus 3D face database and the CASIA 3D face database, which contain face images with complex variations, including expressions, poses and long time lapses between two scans. The experimental results demonstrate the enhanced effectiveness of the proposed method. Since most of the design processes are performed automatically, the proposed approach leads to a potential prototype design for an automatic face recognition system based on the combination of depth and intensity information in face images.

  6. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We successfully solve the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, because beyond that difference fusing the front and rear images into one 3-D image becomes impossible. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, perceived depth was near the front plane at 0-40 % of rear luminance and near the rear plane at 60-100 % of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change in perceived depth was evaluated when the high-frequency components of the front and rear images were cut off using the Fourier transform, at distances between the front and rear planes of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off sufficiently at the 5 cm distance, perceived depth separated to near the front plane and near the rear plane. However, when the images were blurred enough by cutting the high-frequency components, the perceived depth had a linear dependency on the luminance ratio. When the images were not blurred at the 10 cm distance, perceived depth separated to near the front plane at 0-30 % of rear luminance, near the rear plane at 80-100 %, and near the midpoint at 40-70 %. However, when the images were blurred enough, perceived depth again showed a linear dependency on the luminance ratio.
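
    The linear dependency reported for sufficiently blurred images amounts to luminance-weighted interpolation between the two planes; a one-line reading of that result (not the authors' fitted model):

        def dfd_perceived_depth(z_front_cm, z_rear_cm, rear_luminance_ratio):
            # 0 % rear luminance is perceived at the front plane, 100 % at the rear plane,
            # and intermediate ratios interpolate linearly once the images are blurred enough.
            return z_front_cm + rear_luminance_ratio * (z_rear_cm - z_front_cm)

        # e.g. dfd_perceived_depth(0.0, 5.0, 0.6) -> 3.0 cm behind the front plane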

  7. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  8. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  9. A joint multi-view plus depth image coding scheme based on 3D-warping

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone;

    2011-01-01

    scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding...

  10. Learning the 3-D structure of objects from 2-D views depends on shape, not format.

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-05-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  11. Object-oriented urban 3D spatial data model organization method

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines the 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. It implements a city 3D model that quickly builds logical semantic expressions and models, solves the city 3D spatial information representation problem of the same location having multiple properties and the same property spanning multiple locations, designs the spatial object structure of point, line, polygon and body for a city 3D spatial database, and provides a new approach and method for city 3D GIS modeling and organizational management.

  12. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because the aberration severely affects the display performances of the auto-stereoscopic 3D display, the diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and the LCD with two lens-arrays are used to verify the conclusion.

  13. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
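
    The correction step itself reduces to subtracting a calibrated, depth-dependent lateral shift from every localization; a minimal sketch (array names and the interpolation are illustrative, and the calibration curves would come from bead-scan or phase-retrieval data):

        import numpy as np

        def correct_wobble(locs_xyz_nm, calib_z_nm, calib_dx_nm, calib_dy_nm):
            # calib_z_nm must be increasing; calib_dx/dy give the measured lateral shift at each z.
            x, y, z = locs_xyz_nm.T
            x_corr = x - np.interp(z, calib_z_nm, calib_dx_nm)
            y_corr = y - np.interp(z, calib_z_nm, calib_dy_nm)
            return np.column_stack([x_corr, y_corr, z])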

  14. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    Full Text Available A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by exchanging intersection images that contain internal structure. Through experiments with CT scan images, the proposed method is validated. One further applicable area of the proposed method, the design of 3D patterns for Large Scale Integrated circuits (LSI), is also introduced. Layered patterns of an LSI can be displayed and switched using human eyes only. It is confirmed that the time required for displaying a layer pattern and switching to another layer using human eyes only is much shorter than when using hands and fingers.

  15. A Taxonomy of 3D Occluded Objects Recognition Techniques

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and the accompanying increase in Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion is still an unhandled issue that disturbs the relations between the feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that would help users to source images and overcome the problems and issues surrounding occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to identify their pros and cons for solving the occlusion problem: the features extracted from an occluded object to distinguish it from other co-existing objects, and the newer techniques that can differentiate occluded fragments and sections inside an image.

  16. AFFINE INVARIANT OF 3D OBJECTS USING STATISTICAL AND ALGEBRAIC COEFFICIENTS

    Lhachloufi, Mostafa

    2011-03-01

    Full Text Available The increasing number of 3D objects available on the Internet or in specialized databases requires the establishment of description and recognition techniques [1,2,3] to access their content intelligently. In this context, our work presents affine-invariant methods [4,5] for 3D objects. The proposed methods are based on the extraction of statistical and algebraic coefficients from the 3D object; these coefficients remain invariant under affine transformations of the 3D object. In this work, the 3D objects are transformations of 3D objects by an element of the overall transformation group; the set of transformations considered here is the general affine group. The similarity between the descriptor vectors of two objects is measured by a similarity function using the Euclidean distance.

  17. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach of using depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results obtained under the JCT-3V/MPEG 3DV Common Test Conditions show that the proposed tools reduce the bit rate of coded video data by 15% average delta bit rate reduction, which results in 13% total bit rate savings for the MVD data over the state-of-the-art MVC+D coding. Moreover, the concept of depth-based coding of video presented in this paper has been further developed by MPEG 3DV and JCT-3V, and this work resulted in even higher compression efficiency, bringing about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  18. Estimation of foot pressure from human footprint depths using 3D scanner

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports science, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to studying foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the maximum z coordinate minus the minimum z coordinate, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is identified with the average footprint depth. The footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z coordinates were then sorted from the highest to the lowest value using Microsoft Excel to render the footprint depth in different colors. This is only a qualitative study, because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsal and hallux.
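
    The two quantities the abstract computes reduce to simple arithmetic; a sketch with hypothetical example numbers (the ground reaction force and contact area below are not taken from the record):

        def footprint_summary(z_coords_mm, ground_reaction_force_n, contact_area_cm2):
            deepest_mm = max(z_coords_mm) - min(z_coords_mm)            # z_max - z_min
            mean_pressure = ground_reaction_force_n / contact_area_cm2  # GRF / contact area, N/cm^2
            return deepest_mm, mean_pressure

        # e.g. a ~700 N ground reaction force over ~190 cm^2 of contact gives ~3.7 N/cm^2,
        # the order of magnitude reported for the metatarsal/hallux region.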

  19. A robotic assembly procedure using 3D object reconstruction

    Chrysostomou, Dimitrios; Bitzidou, Malamati; Gasteratos, Antonios

    The use of robotic systems for rapid manufacturing and intelligent automation has attracted growing interest in recent years. Specifically, the generation and planning of an object assembly sequence is becoming crucial as it can reduce significantly the production costs and accelerate the full...... implemented by a 5 d.o.f. robot arm and a gripper. The final goal is to plan a path for the robot arm, consisting of predetermined paths and motions for the automatic assembly of ordinary objects....

  20. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Dennis Edler

    Full Text Available Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  1. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information. PMID:25679208

  2. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    Shouyi Yin

    2015-06-01

    Full Text Available In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm that can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video.
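
    A minimal sketch of the two-term depth map construction described above (global depth gradient plus color-driven local refinement, with the global gradient reused across a shot); the weights and the use of one color channel as the refinement cue are assumptions for illustration only:

        import numpy as np

        def depth_map(frame_rgb, shot_global_gradient=None, w_global=0.7, w_local=0.3):
            h, w, _ = frame_rgb.shape
            if shot_global_gradient is None:
                # One top-to-bottom gradient per video shot (far at the top, near at the bottom).
                shot_global_gradient = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))
            local = frame_rgb[:, :, 0].astype(float) / 255.0   # crude color-based refinement
            depth = w_global * shot_global_gradient + w_local * local
            return depth, shot_global_gradient                 # reuse the gradient for the next frame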

  3. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps.

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new way to produce stereoscopic video content conveniently and at low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm that can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As the input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674

  4. Azimuth–opening angle domain imaging in 3D Gaussian beam depth migration

    Common-image gathers indexed by opening angle and azimuth at imaging points in 3D are the key inputs for amplitude-variation-with-angle and velocity analysis by tomography. Gaussian beam depth migration, which propagates each ray as a Gaussian beam and sums the contributions from all the individual beams to produce the wavefield, can overcome the multipath problem, image steep reflectors and, even more importantly, provide a convenient and efficient strategy to extract azimuth–opening angle domain common-image gathers (ADCIGs) in 3D seismic imaging. We present a method for computing azimuth and opening angle at imaging points to output 3D ADCIGs by computing the source and receiver wavefield direction vectors, which are restricted to the effective region of the corresponding Gaussian beams. In this paper, the basic principle of Gaussian beam migration (GBM) is briefly introduced, and the technology and strategy to yield ADCIGs by GBM are analyzed. Numerical tests and a field data application demonstrate that the azimuth–opening angle domain imaging method in 3D Gaussian beam depth migration is effective.
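
    A geometry-only sketch of how opening angle and azimuth can be read off the source and receiver wavefield direction vectors at an image point; sign and azimuth conventions differ between codes, so this is illustrative rather than the paper's definition:

        import numpy as np

        def opening_angle_and_azimuth(p_src, p_rec):
            # p_src, p_rec: direction vectors of the source and receiver wavefields at the image point.
            p_src = p_src / np.linalg.norm(p_src)
            p_rec = p_rec / np.linalg.norm(p_rec)
            opening = np.degrees(np.arccos(np.clip(np.dot(p_src, p_rec), -1.0, 1.0)))
            dx, dy, _ = p_rec - p_src          # horizontal projection lies in the source-receiver plane
            azimuth = np.degrees(np.arctan2(dy, dx)) % 360.0
            return opening, azimuth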

  5. Retrieval of Arbitrary 3D Objects From Robot Observations

    Bore, Nils; Jensfelt, Patric; Folkesson, John

    2015-01-01

    We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top ranked ones can then be compared in a second phase us...

  6. RECONSTRUCCIÓN DE OBJETO 3D A PARTIR DE IMÁGENES CALIBRADAS 3D OBJECT RECONSTRUCTION WITH CALIBRATED IMAGES

    Natividad Grandón-Pastén

    2007-08-01

    Full Text Available This work presents the development of a system for 3D object reconstruction from a collection of views. The system is composed of two main modules. The first performs the image processing, whose objective is to determine the depth map for a pair of views, where each pair of successive views follows a sequence of phases: interest point detection, point correspondence and point reconstruction; in the reconstruction process, the parameters that describe the motion (the rotation matrix R and the translation vector T) between the two views are determined. This sequence of steps is repeated for all pairs of successive views in the set. The objective of the second module is to create the 3D model of the object, for which it must determine the total map of all the 3D points generated at each iteration of the previous module; once the total depth map is obtained, it generates the 3D mesh by applying the Delaunay triangulation method [28]. The results of the reconstruction process are modeled in a VRML virtual environment to obtain a more realistic visualization of the object.
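
    One iteration of the per-view-pair pipeline described above (interest points, correspondences, recovery of R and T, triangulation, Delaunay meshing), sketched with OpenCV and SciPy; the detector choice and the 2.5D meshing shortcut are assumptions, not the authors' implementation:

        import numpy as np
        import cv2
        from scipy.spatial import Delaunay

        def reconstruct_pair(img1, img2, K):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
            pts1 = np.float64([k1[m.queryIdx].pt for m in matches])
            pts2 = np.float64([k2[m.trainIdx].pt for m in matches])
            E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
            _, R, T, _ = cv2.recoverPose(E, pts1, pts2, K)        # motion between the two views
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, T])
            X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)     # homogeneous 3D points, 4xN
            points3d = (X[:3] / X[3]).T
            mesh = Delaunay(points3d[:, :2])                      # simple 2.5D stand-in for the meshing step
            return R, T, points3d, mesh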

  7. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  8. Learning Spatial Relations between Objects From 3D Scenes

    Fichtl, Severin; Alexander, John; Guerin, Frank;

    2013-01-01

    Ongoing cognitive development during the first years of human life may be the result of a set of developmental mechanisms which are in continuous operation [1]. One such mechanism identified is the ability of the developing child to learn effective preconditions for their behaviours. It has been suggested [2] that through the application of behaviours involving more than one object, infants begin to learn about the relations between objects.

  9. Deeply Exploit Depth Information for Object Detection

    Hou, Saihui; Wang, Zilei; Wu, Feng

    2016-01-01

    This paper addresses the issue of how to more effectively coordinate depth with RGB, with the aim of boosting the performance of RGB-D object detection. In particular, we investigate two primary ideas under the CNN model: property derivation and property fusion. Firstly, we propose that depth can be utilized not only as a type of extra information besides RGB but also to derive more visual properties for comprehensively describing the objects of interest. So a two-stage learning framework con...

  10. Gravity data inversion as a probe for the 3D shape at depth of granitic bodies

    Granitic intrusions represent potential sites for waste disposal. A well-constrained determination of their geometry at depth is important for evaluating possible leakage and seepage within the surroundings. Among geophysical techniques, gravity remains the method best suited to investigating the 3D shape of granitic bodies at depth. During uranium exploration programmes, many plutons emplaced within different geochemical and tectonic environments have been surveyed. The quality of gravity surveying depends on the intrinsic accuracy of the measurements, and also on their density of coverage. A regularly spaced and dense coverage (about 1 point/km2) of measurements over the whole pluton and its nearby surroundings is needed to represent the gravity effect of density variations. This yields a lateral resolution of about 0.5 km, or less, depending on the depth and roughness of the floor, for the interpretation of the Bouguer anomaly map. We recommend the use of a 3D iterative method of data inversion, which is simpler to run when the geometry and distribution of the sources are already constrained by surface data. This method must take into account the various density changes within the granite and its surroundings, as well as the regional effect of deep regional sources. The total error in the input data (measurements, densities, regional field) is estimated at 6%. We estimate that the total uncertainty in the calculated depth values does not exceed ±15%. Because of the good coverage of gravity measurements, the overall shape of the pluton is certainly better constrained than the depth values themselves. We present several examples of gravity data inversion over granitic intrusions displaying various 3D morphologies. At a smaller scale, mineralizations are also observed above or close to the root zones. These examples demonstrate the adequacy of joint studies in constraining the mode of magma emplacement before further studies focusing on environmental problems.

  11. 3D Spectroscopy of Herbig-Haro objects

    López, R.; Exter, K. M.; García-Lorenzo, B.; Gómez, G.; Riera, A.; Sánchez, S. F.

    2005-01-01

    HH 110 and HH 262 are two Herbig-Haro jets with rather peculiar, chaotic morphology. In both cases, no source suitable to power the jet has been detected along the outflow at optical or radio wavelengths. Both previous data and theoretical models suggest that these objects trace an early stage of an HH jet/dense cloud interaction. We present the first results of the integral field spectroscopy observations of these two turbulent jets made with the PMAS spectrophotometer (in the PPAK configuration). New data on the kinematics in several characteristic HH emission lines are shown. In addition, line-ratio maps have been made, suitable for exploring the spatial excitation and density conditions of the jets as a function of their kinematics.

  12. A Normalization Method of Moment Invariants for 3D Objects on Different Manifolds

    HU Ping; XU Dong; LI Hua

    2014-01-01

    3D objects can be stored in a computer in different representations, such as point sets, polylines, polygonal surfaces and Euclidean distance maps. Moment invariants of different orders may have different magnitudes. A method for normalizing the moments of 3D objects is proposed, which sets the values of moments of different orders roughly in the same range and can be applied universally to different 3D data formats. Accurate computation of moments for several objects is then presented, and experiments show that this kind of normalization is very useful for moment invariants in 3D object analysis and recognition.
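
    One simple way to put moments of different orders into a comparable range, given here purely as an illustration of the kind of normalization the abstract refers to (not necessarily the paper's formula): divide each order-(p+q+r) central moment by N times the RMS radius raised to that order, which makes every value dimensionless and roughly of order one:

        import numpy as np

        def comparable_moments(points, max_order=3):
            pts = np.asarray(points, dtype=float)
            c = pts - pts.mean(axis=0)                         # central coordinates
            s = np.sqrt((c ** 2).sum(axis=1).mean())           # RMS radius of the object
            n = len(pts)
            eta = {}
            for p in range(max_order + 1):
                for q in range(max_order + 1 - p):
                    for r in range(max_order + 1 - p - q):
                        mu = np.sum(c[:, 0] ** p * c[:, 1] ** q * c[:, 2] ** r)
                        eta[(p, q, r)] = mu / (n * s ** (p + q + r))
            return eta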

  13. Monocular display unit for 3D display with correct depth perception

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems use two types of presentation method: one is a 3-D display system using special glasses, and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images. Thus a conventional display can show only one screen, and it is impossible to enlarge the size of the screen, for example to twice its size. To enlarge the display area, the authors have developed a method of enlarging the display area using a mirror. Our extension method enables observers to see a virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  14. Rapid and Inexpensive Reconstruction of 3D Structures for Micro-Objects Using Common Optical Microscopy

    Berejnov, V V

    2009-01-01

    A simple method of constructing the 3D surface of non-transparent micro-objects by extending the depth of field over the whole attainable surface is presented. A series of images of a sample is recorded by sequentially moving the sample with respect to the microscope focus. Different portions of the sample surface appear in focus in different images in the series. The indexed series of in-focus portions of the sample surface is combined into one sharp 2D image and interpolated into a 3D surface representing the surface of the original micro-object. For image acquisition and processing we use a conventional, manually operated upright stage microscope, the inexpensive Helicon Focus software, and the open-source MeshLab software. Three objects were tested: an inclined flat glass slide with an imprinted 10 um calibration grid, a regular metal 100x100 per inch mesh, and the highly irregular surface of a material known as a porous electrode, used in polyelectrolyte fuel cells. The accuracy of...
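
    The core of the method is a shape-from-focus computation; a minimal sketch with an assumed sharpness measure (smoothed squared Laplacian), not the Helicon Focus algorithm itself:

        import numpy as np
        from scipy import ndimage

        def depth_from_focus(stack, z_positions):
            # stack: (n_slices, H, W) images taken at the focus positions z_positions.
            stack = np.asarray(stack, dtype=float)
            sharpness = np.stack([ndimage.gaussian_filter(ndimage.laplace(s) ** 2, 3) for s in stack])
            best = np.argmax(sharpness, axis=0)                 # index of the in-focus slice per pixel
            height_map = np.asarray(z_positions)[best]          # 3D surface: one z value per pixel
            rows, cols = np.indices(best.shape)
            all_in_focus = stack[best, rows, cols]              # composite sharp 2D image
            return height_map, all_in_focus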

  15. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, uses an alternative sampling strategy to acquire information. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128-pixel-resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
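
    A schematic reconstruction in the spirit of the abstract: invert the Hadamard measurement basis per time bin, then convert each pixel's peak return time to range with d = c*t/2. The explicit Hadamard matrix and the absence of differential (+/-) pattern pairs are simplifications; a fast Walsh-Hadamard transform would replace the matrix product in practice:

        import numpy as np
        from scipy.linalg import hadamard

        def reconstruct_depth(time_resolved_signals, bin_width_s, n_side=128):
            # time_resolved_signals: (n_side*n_side patterns) x (n_time_bins) photodiode samples.
            n_pix = n_side * n_side
            H = hadamard(n_pix)                                  # orthogonal +/-1 measurement basis
            cube = (H.T @ time_resolved_signals) / n_pix         # per-pixel time histograms
            peak_bin = np.argmax(cube, axis=1)                   # strongest return per pixel
            depth = 3.0e8 * peak_bin * bin_width_s / 2.0         # two-way travel time -> metres
            return depth.reshape(n_side, n_side)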

  16. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  17. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale

  18. An object-oriented 3D integral data model for digital city and digital mine

    Wu, Lixin; Wang, Yanbing; Che, Defu; Xu, Lei; Chen, Xuexi; Jiang, Yun; Shi, Wenzhong

    2005-10-01

    With rapid urban development, city space has extended from the surface to the subsurface. As an important data source for the representation of city spatial information, 3D city spatial data have the characteristics of multiple objects, heterogeneity and multiple structures. They can be classified, with reference to the geo-surface, into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems is naturally divided into two branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes 3D visualization of buildings and the city terrain, while the latter emphasizes visualization of geological bodies and structures. It is extremely important for city planning and construction to integrate all city spatial information, including above-surface, surface and subsurface objects, for integral analysis and spatial manipulation; however, neither 3D CGIS nor 3DGM can currently realize such information integration, integral analysis and spatial manipulation. Considering 3D spatial modeling theory and methodologies, an object-oriented 3D integral spatial data model (OO3D-ISDM) is presented and realized in software. The model integrates geographical objects, surface buildings and geological objects seamlessly, with TIN as the coupling interface. This paper introduces the conceptual model of OO3D-ISDM, which comprises 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any of the represented objects, whether surface buildings, terrain or subsurface objects, can be described with the basic geometric element, i.e. the triangle. So the 3D spatial objects, surface buildings, terrain and geological objects can be

  19. Multi-layer 3D imaging using a few viewpoint images and depth map

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images were sufficient to make multi-layer images that display a 3D image. Because the number of viewpoint images is limited, the viewing area that allows stereoscopic viewing becomes narrow. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree range. In addition, we render pseudo multi-viewpoint images using the depth map, so motion parallax is generated at the same time.
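    A minimal sketch of the "Shift and Subtraction" idea described above, assuming a purely additive two-layer model in which each viewpoint sees the front layer plus the rear layer shifted by a view-dependent disparity; the array names, the fixed per-view disparities and the clipping to [0, 1] are illustrative assumptions, and the actual method additionally derives its initial front layer from the depth map.

```python
import numpy as np

def shift_and_subtract(views, disparities, front_init, n_iter=20):
    """Toy two-layer decomposition of a few viewpoint images.

    views       : list of 2D grayscale viewpoint images (H x W arrays in [0, 1])
    disparities : per-view horizontal pixel shift of the rear layer (hypothetical)
    front_init  : initial front-layer image, e.g. obtained by slicing the depth map
    """
    front = front_init.astype(float)
    rear = np.zeros_like(front)
    for _ in range(n_iter):
        # Subtract the front layer from every view, undo the view-dependent shift,
        # and average to update the rear layer.
        rear = np.mean([np.roll(v - front, -d, axis=1)
                        for v, d in zip(views, disparities)], axis=0)
        rear = np.clip(rear, 0.0, 1.0)
        # Subtract the (re-shifted) rear layer from every view to update the front layer.
        front = np.mean([v - np.roll(rear, d, axis=1)
                         for v, d in zip(views, disparities)], axis=0)
        front = np.clip(front, 0.0, 1.0)
    return front, rear
```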

  20. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    Chien-Ho Ko

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the alg...

  1. Interlayer Simplified Depth Coding for Quality Scalability on 3D High Efficiency Video Coding

    Mengmeng Zhang

    2014-01-01

    A quality scalable extension design is proposed for the upcoming 3D video extension of the emerging High Efficiency Video Coding (HEVC) standard. A novel interlayer simplified depth coding (SDC) prediction tool is added to reduce the number of bits needed for depth map representation by exploiting the correlation between coding layers. To further improve the coding performance, the coded prediction quadtree and texture data from corresponding SDC-coded blocks in the base layer can be used in interlayer simplified depth coding. In the proposed design, the multiloop decoder solution is also extended to the proposed scalable scenario for texture views and depth maps, and is achieved by the interlayer texture prediction method. The experimental results indicate that an average Bjøntegaard Delta bitrate decrease of 54.4% is gained by the interlayer simplified depth coding prediction tool with the multiloop decoder solution, compared with simulcast. These significant rate savings confirm that the proposed method achieves better performance.

  2. An Overview of 3d Topology for Ladm-Based Objects

    Zulkifli, N. A.; Rahman, A. A.; van Oosterom, P.

    2015-10-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological models are compared on several main aspects (e.g. space or plane partition, used primitives, constructive rules, orientation and explicit or implicit relationships). The most suitable 3D topological model depends on the type of application it is used for; no single 3D topology model is best suited to all types of applications. Therefore, it is very important to define the requirements of the 3D topology model. The context of this paper is a 3D topology for LADM-based objects.

  3. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper, a fused optical system is proposed for obstacle detection and ground estimation in real-time mobile systems, combining depth information and color images gathered from the Microsoft Kinect sensor with 3D laser range scanner data. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented, and it is shown that fusion of information gathered from different sources increases the effectiveness of obstacle detection in different scenarios and can be used successfully for road surface mapping.

  4. Constructing Isosurfaces from 3D Data Sets Taking Account of Depth Sorting of Polyhedra

    周勇; 唐泽圣

    1994-01-01

    Creating and rendering intermediate geometric primitives is one of the approaches to visualize data sets in 3D space. Some algorithms have been developed to construct isosurfaces from uniformly distributed 3D data sets. These algorithms assume that the function value varies linearly along the edges of each cell. For irregular 3D data sets, however, this assumption is inapplicable. Moreover, the depth sorting of cells is more complicated for irregular data sets, and it is indispensable for generating isosurface images or semitransparent isosurface images if the Z-buffer method is not adopted. In this paper, isosurface models based on the assumption that the function value has a nonlinear distribution within a tetrahedron are proposed. A depth sorting algorithm and data structures are developed for irregular data sets in which cells may be subdivided into tetrahedra. The implementation issues of this algorithm are discussed and experimental results are shown to illustrate the potential of this technique.
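    Since the semitransparent isosurface images mentioned above require compositing cells from back to front, the core of a depth sort can be sketched as follows; this is a simplified centroid-based painter's ordering under a single viewpoint, not the paper's full algorithm for resolving orderings in irregular meshes.

```python
import numpy as np

def back_to_front_order(vertices, tetrahedra, viewpoint):
    """Sort tetrahedra back to front for semitransparent compositing.

    vertices   : (N, 3) array of vertex coordinates
    tetrahedra : (M, 4) array of vertex indices, one row per tetrahedron
    viewpoint  : (3,) eye position
    """
    centroids = vertices[tetrahedra].mean(axis=1)          # (M, 3) cell centroids
    dist = np.linalg.norm(centroids - viewpoint, axis=1)   # distance to the eye
    return np.argsort(-dist)                               # farthest cell first
```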

  5. IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER

    S. Kanai

    2015-05-01

    Effective and accurate localization in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and only a few 3D MCL approaches have recently been proposed. However, their localization accuracy and efficiency still remain at an unsatisfactory level (errors of a few hundred millimetres at up to a few FPS), or have not been fully verified against precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used for creating a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as an outer sensor. GPU scene simulation is also introduced to speed up the prediction phase of MCL. Moreover, for further improvement, GPGPU programming is implemented to further accelerate the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on observations from an inertia sensor. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertia sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
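    One predict-weight-resample cycle of the 3D MCL described above can be sketched as below, with anisotropic motion noise shaped by an inertial covariance and a depth-image likelihood; render_depth stands in for the GPU scene simulation against the laser-scanned mesh map, and the Gaussian likelihood, multinomial resampling and parameter names are illustrative assumptions.

```python
import numpy as np

def mcl_step(particles, imu_delta, imu_cov, depth_obs, render_depth, sigma=0.05):
    """One iteration of 3D Monte Carlo Localization (sketch).

    particles    : (N, 6) poses [x, y, z, roll, pitch, yaw]
    imu_delta    : (6,) motion increment estimated from the inertia sensor
    imu_cov      : (6, 6) anisotropic covariance of that increment
    depth_obs    : observed depth image from the depth camera
    render_depth : callable(pose) -> depth image simulated from the 3D mesh map
    """
    n = len(particles)
    # Predict: propagate particles with anisotropic noise from the inertial cue.
    noise = np.random.multivariate_normal(np.zeros(6), imu_cov, size=n)
    particles = particles + imu_delta + noise
    # Weight: Gaussian likelihood of the observed depth given each simulated depth.
    weights = np.empty(n)
    for i, pose in enumerate(particles):
        diff = depth_obs - render_depth(pose)
        weights[i] = np.exp(-0.5 * np.nanmean(diff ** 2) / sigma ** 2)
    weights /= weights.sum()
    # Resample (multinomial) so that particles concentrate on likely poses.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]
```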

  6. Temporal-spatial modeling of fast-moving and deforming 3D objects

    Wu, Xiaoliang; Wei, Youzhi

    1998-09-01

    This paper gives a brief description of the method and techniques developed for the modeling and reconstruction of fast moving and deforming 3D objects. A new approach using close-range digital terrestrial photogrammetry in conjunction with high speed photography and videography is proposed. A sequential image matching method (SIM) has been developed to automatically process pairs of images taken continuously of any fast moving and deforming 3D objects. Using the SIM technique a temporal-spatial model (TSM) of any fast moving and deforming 3D objects can be developed. The TSM would include a series of reconstructed surface models of the fast moving and deforming 3D object in the form of 3D images. The TSM allows the 3D objects to be visualized and analyzed in sequence. The SIM method, specifically the left-right matching and forward-back matching techniques are presented in the paper. An example is given which deals with the monitoring of a typical blast rock bench in a major open pit mine in Australia. With the SIM approach and the TSM model it is possible to automatically and efficiently reconstruct the 3D images of the blasting process. This reconstruction would otherwise be impossible to achieve using a labor intensive manual processing approach based on 2D images taken from conventional high speed cameras. The case study demonstrates the potential of the SIM approach and the TSM for the automatic identification, tracking and reconstruction of any fast moving and deforming 3D targets.

  7. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    J. Javier Yebes; Bergasa, Luis M.; Miguel García-Garrido

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban sce...

  8. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention driven by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates and takes only a few minutes for users to learn to control the BCI system, while only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  9. Visualization of the ROOT 3D class objects with OpenInventor-like viewers

    The class library for conversion of ROOT 3D class objects to the .iv format for 3D image viewers is described in this paper. So far, the library has been tested using the STAR and ATLAS detector geometries without any changes or revisions for a concrete detector.

  10. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with hands on 3D meshes. Deformations are done using different modes of interaction that we detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of this work.

  11. Novel 3-D Object Recognition Methodology Employing a Curvature-Based Histogram

    Liang-Chia Chen

    2013-07-01

    In this paper, a new object recognition algorithm employing a curvature-based histogram is presented. Recognition of three-dimensional (3-D) objects using range images remains one of the most challenging problems in 3-D computer vision due to noisy and cluttered scene characteristics. The key breakthroughs for this problem mainly lie in defining unique features that distinguish among various 3-D objects. In our approach, an object detection scheme is developed to identify targets through an automated search in the range images: an initial process of object segmentation subdivides all possible objects in the scene, and a subsequent process of object recognition applies geometric constraints and a curvature-based histogram. The developed method has been verified through experimental tests confirming its feasibility.
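    The curvature-based histogram at the heart of the recognition step can be sketched as below, assuming per-point curvature values have already been estimated from the segmented range data; the bin count, value range and histogram-intersection similarity are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def curvature_histogram(curvatures, bins=32, value_range=(-1.0, 1.0)):
    """Normalized histogram of per-point curvatures for one segmented object."""
    hist, _ = np.histogram(curvatures, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 means identical curvature distributions."""
    return float(np.minimum(h1, h2).sum())

# Recognition: compare the query object's histogram with stored model histograms,
# keeping only candidates that also satisfy the geometric constraints.
```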

  12. 3D reconstruction in PET cameras with irregular sampling and depth of interaction

    We present 3D reconstruction algorithms that address fully 3D tomographic reconstruction using septa-less, stationary, and rectangular cameras. The field of view (FOV) encompasses the entire volume enclosed by detector modules capable of measuring depth of interaction (DOI). The Filtered Backprojection based algorithms incorporate DOI, accommodate irregular sampling, and minimize interpolation in the data by defining lines of response between the measured interaction points. We use fixed-width, evenly spaced radial bins in order to use the FFT, but use irregular angular sampling to minimize the number of unnormalizable zero-efficiency sinogram bins. To address persisting low-efficiency bins, we perform 2D nearest-neighbor radial smoothing, employ a semi-iterative procedure to estimate the unsampled data, and mash the ''in plane'' and the first oblique projections to reconstruct the 2D image in the 3DRP algorithm. We present artifact-free, essentially spatially isotropic images of Monte Carlo data with FWHM resolutions of 1.50 mm, 2.25 mm, and 3.00 mm at the center, in the bulk, and at the edges and corners of the FOV, respectively.

  13. Plasma penetration depth and mechanical properties of atmospheric plasma-treated 3D aramid woven composites

    Three-dimensional aramid woven fabrics were treated with atmospheric pressure plasmas, on one side or both sides, to determine the plasma penetration depth in the 3D fabrics and the influence on final composite mechanical properties. The properties of the fibers from different layers of the single-side treated fabrics, including surface morphology, chemical composition, wettability and adhesion properties, were investigated using scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), contact angle measurement and microbond tests. Meanwhile, flexural properties of the composites reinforced with the fabrics untreated and treated on both sides were compared using three-point bending tests. The results showed that the fibers from the outermost surface layer of the fabric had a significant improvement in their surface roughness, chemical bonding, wettability and adhesion properties after plasma treatment; the treatment effect gradually diminished for the fibers in the inner layers. In the third layer, the fiber properties remained approximately the same as those of the control. In addition, three-point bending tests indicated that the 3D aramid composite had an increase of 11% in flexural strength and 12% in flexural modulus after the plasma treatment. These results indicate that composite mechanical properties can be improved by direct fabric treatment instead of fiber treatment with plasmas if the fabric is less than four layers thick.

  14. Im2Fit: Fast 3D Model Fitting and Anthropometrics using Single Consumer Depth Camera and Synthetic Data

    Wang, Qiaosong; Jagadeesh, Vignesh; Ressler, Bryan; Piramuthu, Robinson

    2014-01-01

    Recent advances in consumer depth sensors have created many opportunities for human body measurement and modeling. Estimation of 3D body shape is particularly useful for fashion e-commerce applications such as virtual try-on or fit personalization. In this paper, we propose a method for capturing accurate human body shape and anthropometrics from a single consumer grade depth sensor. We first generate a large dataset of synthetic 3D human body models using real-world body size distributions. ...

  15. Intuitiveness 3D objects Interaction in Augmented Reality Using S-PI Algorithm

    Ajune Wanis Ismail

    2013-07-01

    A number of researchers have developed interaction techniques in Augmented Reality (AR) applications. Some of them have proposed new techniques for user interaction with different types of interfaces, which hold great promise for intuitive, natural user interaction with 3D data. This paper explores 3D object manipulation performed with the single-point interaction (S-PI) technique in an AR environment. The new interaction algorithm, the S-PI technique, is a point-based intersection designed to detect interaction behaviours such as translate, rotate and clone for intuitive 3D object handling. The S-PI technique is proposed with marker-based tracking in order to improve the trade-off between accuracy and speed in manipulating 3D objects in real time. The method is robust, which is required to ensure that real and virtual elements can be combined correctly relative to the user's viewpoint and to reduce system lag.

  16. The role of the foreshortening cue in the perception of 3D object slant.

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (change in the ratio of width to length) can induce slant perception for single shapes for both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performances similar to those obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and the cortical organization used in 3D object perception. PMID:24216007

  17. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  18. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    ZHONG DengHua; LI MingChao; LIU Jie

    2007-01-01

    Aiming at 3D modeling and analyzing problems of hydraulic and hydroelectric engineering geology, a complete scheme of solution is presented. The first basis was NURBS-TIN-BRep hybrid data structure. Then, according to the classified thought of the object-oriented technique, the different 3D models of geological and engineering objects were realized based on the data structure, including terrain class, strata class, fault class, and limit class; and the modeling mechanism was alternative. Finally, the 3D integrated model was established by Boolean operations between 3D geological objects and engineering objects. On the basis of the 3D model, a series of applied analysis techniques of hydraulic and hydroelectric engineering geology were illustrated. They include the visual modeling of rock-mass quality classification, the arbitrary slicing analysis of the 3D model, the geological analysis of the dam, and underground engineering. They provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.

  19. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    Wang, Lei

    2014-01-01

    Conventional 3D printing is generally time-consuming and printable metal inks are rather limited. As an alternative, we propose liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning one, two and three dimensions up to more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in conventional 3D printing, the liquid-phase manufacturing offers a much higher cooling rate and thus significantly improves the speed of fabricating metal objects. This unique strategy also efficiently prevents the liquid metal inks from air oxidation, which is otherwise hard to avoid in ordinary 3D printing. Several key physical factors (like properties of the cooling fluid, injection speed and needle diameter, types and properties of the printing ink, etc.) were disclosed which evidently affect the printing quality. In addit...

  20. The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking

    Quesada, Luis

    2011-01-01

    3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on 3D motion tracking. 3D motion tracking systems that require no knowledge of the target object and run on a single low-budget camera require estimates of the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and we present a novel 3D motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self occluded, or motion blurred, given that it is opaque, evenly colored, and enough contrasting with t...
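    Under a pinhole-camera assumption, the projected area of an opaque, evenly colored object scales with the inverse square of its distance, so the two estimated projection features (area and centroid position) suffice to recover a relative 3D position. The sketch below illustrates that relation with hypothetical intrinsics and a known reference depth; it is not the paper's exact estimator.

```python
import numpy as np

def position_from_projection(area, centroid, area_ref, depth_ref, fx, fy, cx, cy):
    """Estimate 3D position from the object's projected area and centroid.

    area, centroid : projection features measured in the current frame
    area_ref       : projected area observed at the known reference depth depth_ref
    fx, fy, cx, cy : pinhole intrinsics (focal lengths and principal point)
    """
    # Apparent area scales as 1/Z^2, hence Z = Z_ref * sqrt(A_ref / A).
    z = depth_ref * np.sqrt(area_ref / area)
    u, v = centroid
    x = (u - cx) * z / fx   # back-project the centroid at the estimated depth
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```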

  1. Web based Interactive 3D Learning Objects for Learning Management Systems

    Stefan Hesse; Stefan Gumhold

    2012-01-01

    In this paper, we present an approach to create interactive 3D learning objects of high quality for higher education and integrate them into a learning management system. The use of these resources allows topics to be visualized, such as electro-technical and physical processes in the interior of complex devices. This paper addresses the challenge of combining rich interactivity and adequate realism with 3D exercise material for distance e-learning.

  2. 3D photography in the objective analysis of volume augmentation including fat augmentation and dermal fillers.

    Meier, Jason D; Glasgold, Robert A; Glasgold, Mark J

    2011-11-01

    The authors present quantitative and objective 3D data from their studies showing long-term results with facial volume augmentation. The first study analyzes fat grafting of the midface and the second study presents augmentation of the tear trough with hyaluronic filler. Surgeons using 3D quantitative analysis can learn the duration of results and the optimal amount to inject, as well as showing patients results that are not demonstrable with standard, 2D photography. PMID:22004863

  3. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    Wang, Lei; Jing LIU

    2014-01-01

    Conventional 3D printing is generally time-consuming and printable metal inks are rather limited. As an alternative, we propose liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning one, two and three dimensions up to more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in a conventional...

  4. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Koeva, M. N.

    2016-01-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interact...

  5. 3D Imaging of Dielectric Objects Buried under a Rough Surface by Using CSI

    Evrim Tetik

    2015-01-01

    3D scalar electromagnetic imaging of dielectric objects buried under a rough surface is presented. The problem has been treated as a 3D scalar problem for computational simplicity, as a first step towards the 3D vector problem. The complexity of the background in which the object is buried is reduced by obtaining the Green's function of the background, which consists of two homogeneous half-spaces with a rough interface between them, using the Buried Object Approach (BOA). The Green's function of the two-part space with a planar interface is obtained to be used in the process. Reconstruction of the location, shape, and constitutive parameters of the objects is achieved by the Contrast Source Inversion (CSI) method with conjugate gradient. The scattered field data used in the inverse problem are obtained via both the Method of Moments (MoM) and a Comsol Multiphysics pressure acoustics model.

  6. 3D Projection on Physical Objects: Design Insights from Five Real Life Cases

    Dalsgaard, Peter; Halskov, Kim

    2011-01-01

    3D projection on physical objects is a particular kind of Augmented Reality that augments a physical object by projecting digital content directly onto it, rather than by using a mediating device, such as a mobile phone or a head-mounted display. In this paper, we present five cases in which we have developed installations that employ 3D projection on physical objects. The installations have been developed in collaboration with external partners and have been put into use in real-life settings such as museums, exhibitions and interaction design laboratories. On the basis of these cases, we...

  7. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a growing need to develop systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet-based virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wireframe model acquired by a 3D digitizer are also presented.
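    The separation of reflection components mentioned above rests on the dichromatic reflection model, in which observed radiance is the sum of a body (diffuse) term colored by the paint and an interface (specular) term colored by the illuminant, with the specular lobe modeled Phong-style. A minimal sketch, with vector conventions and parameter names as illustrative assumptions:

```python
import numpy as np

def dichromatic_phong(normal, light, view, body_color, illuminant, kd, ks, shininess):
    """Radiance as body (diffuse) plus interface (specular) reflection.

    normal, light, view : unit 3-vectors (surface normal, to-light, to-viewer)
    body_color          : spectral/RGB reflectance of the body component
    illuminant          : spectral/RGB power of the light source
    """
    n_dot_l = max(float(np.dot(normal, light)), 0.0)
    reflect = 2.0 * n_dot_l * normal - light          # mirrored light direction
    r_dot_v = max(float(np.dot(reflect, view)), 0.0)
    body = kd * n_dot_l * body_color * illuminant           # carries the paint color
    interface = ks * (r_dot_v ** shininess) * illuminant    # illuminant-colored highlight
    return body + interface
```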

  8. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe.

    Harris, EJ; Miller, NR; Bamber, JC; Symonds-Tayler, JR; Evans, PM

    2011-01-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogenous tissue for motion in the azimuthal and elevatio...

  9. Ultrasonic cleaning of 3D printed objects and Cleaning Challenge Devices

    Verhaagen, Bram; Zanderink, Thijs; Fernandez Rivas, David

    2016-01-01

    We report our experiences in the evaluation of ultrasonic cleaning processes for objects made with additive manufacturing techniques, specifically three-dimensional (3D) printers. These objects need to be cleaned of support material added during the printing process. The support material can be removed by ultrasonic cleaning.

  10. Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration

    Kriegel, Simon

    2015-01-01

    The thesis Autonomous 3D Modeling of Unknown Objects for Active Scene Exploration presents an approach for efficient model generation of small-scale objects applying a robot-sensor system. Active scene exploration incorporates object recognition methods for analyzing a scene of partially known objects as well as exploration approaches for autonomous modeling of unknown parts. Here, recognition, exploration, and planning methods are extended and combined in a single scene exploration system, e...

  11. Accurate 3D shape measurement of multiple separate objects with stereo vision

    3D shape measurement has emerged as a very useful tool in numerous fields because of its wide and ever-increasing range of applications. In this paper, we present a passive, fast and accurate 3D shape measurement technique using a stereo vision approach. The technique first employs a scale-invariant feature transform algorithm to detect point matches at a number of discrete locations despite the discontinuities in the images. Then an automated image registration algorithm is applied to find full-field point matches with subpixel accuracy. After that, the 3D shapes of the objects can be reconstructed from the obtained point matches and the camera information. The proposed technique is capable of performing full-field 3D shape measurement with high accuracy even in the presence of discontinuities and multiple separate regions. The validity is verified by experiments.
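    Given the full-field point matches and calibrated projection matrices, each 3D point follows from standard linear (DLT) triangulation; a minimal sketch, assuming the projection matrices come from a prior stereo calibration (the paper's own reconstruction step is summarized only as using the point matches and camera information).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2 : (3, 4) camera projection matrices from stereo calibration
    x1, x2 : (u, v) sub-pixel image coordinates of the match in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # the solution is the right singular vector
    X = vt[-1]                      # associated with the smallest singular value
    return X[:3] / X[3]             # convert from homogeneous coordinates

# Triangulating every full-field match reconstructs all separate regions,
# since the matching step does not require surface continuity.
```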

  12. Generation of geometric representations of 3D objects in CAD/CAM by digital photogrammetry

    Li, Rongxing

    This paper presents a method for the generation of geometric representations of 3D objects by digital photogrammetry. In CAD/CAM systems, geometric modelers are usually used to create three-dimensional (3D) geometric representations for design and manufacturing purposes. However, in cases where geometric information such as the dimensions and shapes of objects is not available, measurements of physically existing objects become necessary. In this paper, geometric parameters of primitives of 3D geometric representations such as Boundary Representation (B-rep), Constructive Solid Geometry (CSG), and digital surface models are determined by digital image matching techniques. An algorithm for the reconstruction of surfaces with discontinuities is developed. Interfaces between digital photogrammetric data and these geometric representations are realized. This method can be applied to design and manufacturing in mechanical engineering, the automobile industry, robot technology, spatial information systems and other fields.

  13. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core–shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  14. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices. PMID:27454835

  15. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821

  16. 3D-Web-GIS RFID location sensing system for construction objects.

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
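    The SA-plus-gradient-descent idea can be sketched as below, assuming tag-to-reader distances have already been estimated from RFID signal strength; the squared-residual cost, cooling schedule and step sizes are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def cost(p, readers, dists):
    """Sum of squared range residuals between a candidate position and each reader."""
    return np.sum((np.linalg.norm(readers - p, axis=1) - dists) ** 2)

def locate(readers, dists, n_sa=2000, n_gd=200, temp=1.0, cool=0.995, lr=0.01):
    """Hybrid 3D location sensing: SA stabilizes the search, gradient descent reduces error."""
    p = readers.mean(axis=0)                          # start at the reader centroid
    best, best_c = p.copy(), cost(p, readers, dists)
    for _ in range(n_sa):                             # simulated annealing (global search)
        cand = p + np.random.normal(scale=temp, size=3)
        dc = cost(cand, readers, dists) - cost(p, readers, dists)
        if dc < 0 or np.random.rand() < np.exp(-dc / max(temp, 1e-9)):
            p = cand
            if cost(p, readers, dists) < best_c:
                best, best_c = p.copy(), cost(p, readers, dists)
        temp *= cool                                  # cooling schedule
    p = best
    for _ in range(n_gd):                             # gradient descent (local refinement)
        d = np.linalg.norm(readers - p, axis=1)
        grad = 2 * np.sum(((d - dists) / np.maximum(d, 1e-9))[:, None] * (p - readers), axis=0)
        p = p - lr * grad
    return p
```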

  17. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the transducer sweep on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0–35 mm s−1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision decreased with speed and tracking failure was observed at speeds greater than 14 mm s−1. Tracking failure was attributed to speckle de-correlation as a result of the decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, using the swept-probe technology, speckle tracking accuracy is currently too poor to track homogeneous tissue.

  18. Digital Curvatures Applied to 3D Object Analysis and Recognition: A Case Study

    Chen, Li

    2009-01-01

    In this paper, we propose using curvatures in digital space for 3D object analysis and recognition. Since direct adjacency allows only six types of digital surface points in local configurations, it is easy to determine and classify the discrete curvatures for every point on the boundary of a 3D object. Unlike the boundary simplicial decomposition (triangulation), the curvature can take any real value; this sometimes makes it difficult to find the right threshold value. This paper focuses on the global properties of categorizing curvatures for small regions. We apply both digital Gaussian curvatures and digital mean curvatures to 3D shapes. This paper proposes a multi-scale method for 3D object analysis and a vector method for 3D similarity classification. We use these methods for face recognition and shape classification. We have found that the Gaussian curvatures mainly describe the global features and average characteristics, such as the five regions of a human face. However, mean curvatures can be used to find ...

  19. 3D high-efficiency video coding for multi-view video and depth data.

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the High Efficiency Video Coding (HEVC) standard for the coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction for coding the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high-quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  20. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
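    The flexible opacity control mentioned above can be understood with a simple coverage argument: if points with a small cross-section are scattered uniformly and independently over a local surface patch, the probability that the patch remains uncovered after n points decays geometrically, which links the point count to the apparent opacity. The relation below is a simplification for intuition, with hypothetical parameter names, not the paper's exact formulation.

```python
import numpy as np

def points_for_opacity(alpha, patch_area, point_area):
    """Points needed so a patch is covered (appears opaque) with probability alpha.

    Assumes alpha = 1 - (1 - point_area / patch_area) ** n and solves for n.
    """
    return int(np.ceil(np.log(1.0 - alpha) / np.log(1.0 - point_area / patch_area)))

# Example: a 1.0 mm^2 patch, a 0.01 mm^2 point footprint and target opacity 0.5
# require about 69 points; using fewer points lowers the apparent opacity.
```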

  1. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  2. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. A NURBS-skeleton is used to extract the skeleton of both views. The affine invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point with radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.

  3. Fast error simulation of optical 3D measurements at translucent objects

    Lutzke, P.; Kühmstedt, P.; Notni, G.

    2012-09-01

    The scan results of optical 3D measurements of translucent objects deviate from the real object surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is dominantly concentrated in the specular direction and can only be observed from a point in that direction. Thus the separation either leads to measurement results that only create data for near-specular directions or provides data from areas that are not well separated. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to enhance the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte-Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt at in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.

  4. Printing of metallic 3D micro-objects by laser induced forward transfer.

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop on demand transfer of molten, femto-liter, metal droplets with a high jetting directionality. Such small volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high aspect ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed. PMID:26832524

  5. Visual object tracking in 3D with color based particle filter

    Barrera González, Pablo; Matellán Olivera, Vicente; Cañas, José María

    2005-01-01

    This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, but not any precise motion model. The observation model we have developed avoids the color filtering of the entire image. That and the Monte Carlo techniques inside the part...

  6. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into fully automated vision-guided robots for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development worldwide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still needs further research. Researc...

  7. Steady-state particle tracking in the object-oriented regional groundwater model ZOOMQ3D

    Jackson, C.R.

    2002-01-01

    This report describes the development of a steady-state particle tracking code for use in conjunction with the object-oriented regional groundwater flow model, ZOOMQ3D (Jackson, 2001). Like the flow model, the particle tracking software, ZOOPT, is written using an object-oriented approach to promote its extensibility and flexibility. ZOOPT enables the definition of steady-state pathlines in three dimensions. Particles can be tracked in both the forward and reverse directions en...

  8. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Gangyi Jiang

    2014-04-01

    Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment (OSIQA) method, organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first introduced to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images of the stereo pair. Then, the concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion between the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, the OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity exceed 0.92 under five types of distortion, namely Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.
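    The final fusion step ("multiplicative fitting ... based on weighting") can be sketched as a weighted product of the two sub-metrics; the exponent weights below are hypothetical placeholders that would in practice be fitted against subjective Mean Opinion Scores.

```python
def osiqa_score(lr_iqa, dp_iqa, w_lr=0.7, w_dp=0.3):
    """Fuse the left-right view metric and the depth-perception metric
    multiplicatively; w_lr and w_dp are weights fitted to MOS data."""
    return (lr_iqa ** w_lr) * (dp_iqa ** w_dp)

# Example: osiqa_score(0.85, 0.90) combines the two per-image scores into one value.
```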

  9. The potential of 3D techniques for cultural heritage object documentation

    Bitelli, Gabriele; Girelli, Valentina A.; Remondino, Fabio; Vittuari, Luca

    2007-01-01

    The generation of 3D models of objects has become an important research topic in many fields of application, such as industrial inspection, robotics, navigation and body scanning. Recently the techniques for generating photo-textured 3D digital models have also interested the field of Cultural Heritage, due to their capability to combine high-precision metrical information with a qualitative and photographic description of the objects. In fact this kind of product is a fundamental support for the documentation, study and restoration of works of art, up to the production of replicas by rapid prototyping techniques. Close-range photogrammetric techniques are nowadays more and more frequently used for the generation of precise 3D models. With the advent of automated procedures and fully digital products in the 1990s, photogrammetry has become easier to use and cheaper, and nowadays a wide range of commercial software is available to calibrate, orient and reconstruct objects from images. This paper presents the complete process for the derivation of a photorealistic 3D model of an important basalt stela (about 70 x 60 x 25 cm) discovered at the archaeological site of Tilmen Höyük, in Turkey, dating back to the 2nd millennium BC. We report the modeling performed using passive and active sensors and the comparison of the achieved results.

  10. Object-shape recognition and 3D reconstruction from tactile sensor images.

    Khasnobish, Anwesha; Singh, Garima; Jati, Arindam; Konar, Amit; Tibarewala, D N

    2014-04-01

    This article presents a novel approach to edged and edgeless object-shape recognition and 3D reconstruction from gradient-based analysis of tactile images. We recognize an object's shape by visualizing its surface topology in our mind while grasping the object in our palm, also drawing on our past experience of exploring similar kinds of objects. The proposed hybrid recognition strategy works in a similar way, in two stages. In the first stage, conventional object-shape recognition using a linear support vector machine classifier is performed, with regional descriptor features extracted from the tactile image. A 3D shape reconstruction is also performed, depending on whether edged or edgeless objects are classified from the tactile images. In the second stage, the hybrid recognition scheme utilizes a feature set comprising both the previously obtained regional descriptor features and gradient-related information from the reconstructed object-shape image for the final recognition into four corresponding classes of objects, viz. planar, one-edged, two-edged and cylindrical objects. The hybrid strategy achieves 97.62 % classification accuracy, while the conventional recognition scheme reaches only 92.60 %. Moreover, the proposed algorithm has proved to be less noise prone and more statistically robust. PMID:24469960

  11. On 3D simulation of moving objects in a digital earth system

    2008-01-01

    "How do the rescue helicopters find out an optimized path to arrive at the site of a disaster as soon as possible?" or "How are the flight procedures over mountains and plateaus simulated?" and so on.In this paper a script language on spatial moving objects is presented by abstracting 3D spatial moving objects’ behavior when implementing moving objects simulation in 3D digital Earth scene,which is based on a platform of digital China named "ChinaStar".The definition of this script language,its morphology and syntax,its compiling and mediate language generating,and the behavior and state control of spatial moving objects are discussed emphatically.In addition,the language’s applications and implementation are also discussed.

  12. Full-viewpoint 3D Space Object Recognition Based on Kernel Locality Preserving Projections

    Meng Gang; Jiang Zhiguo; Liu Zhengyi; Zhang Haopeng; Zhao Danpei

    2010-01-01

    Space object recognition plays an important role in space exploitation and surveillance, and faces two main problems: lack of data and drastic changes in viewpoint. In this article, firstly, we build a three-dimensional (3D) satellite dataset named BUAA Satellite Image Dataset (BUAA-SID 1.0) to supply data for 3D space object research. Then, based on the dataset, we propose to recognize full-viewpoint 3D space objects based on kernel locality preserving projections (KLPP). To obtain a more accurate and separable description of the objects, we first build feature vectors employing moment invariants, Fourier descriptors, region covariance and histograms of oriented gradients. Then, we map the features into kernel space and apply dimensionality reduction using KLPP to obtain the submanifold of the features. At last, k-nearest neighbor (kNN) is used to accomplish the classification. Experimental results show that the proposed approach is well suited for space object recognition under changes of viewpoint. Encouraging recognition rates are obtained on images in BUAA-SID 1.0, and the highest recognition result reaches 95.87%.
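
    KLPP itself is not available in common Python libraries, so the sketch below only illustrates the overall pipeline (appearance features, kernel-based dimensionality reduction, kNN classification); it uses HOG features and scikit-learn's KernelPCA as an illustrative stand-in for KLPP, and all image/label arrays are assumed inputs.

        import numpy as np
        from skimage.feature import hog
        from sklearn.decomposition import KernelPCA      # illustrative stand-in for KLPP (assumption)
        from sklearn.neighbors import KNeighborsClassifier

        def describe(images):
            # HOG is one of the four cues named in the abstract; the other descriptors are omitted here
            return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                                 cells_per_block=(2, 2)) for im in images])

        def classify_views(train_images, train_labels, test_images, n_components=30, k=1):
            reducer = KernelPCA(n_components=n_components, kernel='rbf')
            z_train = reducer.fit_transform(describe(train_images))
            z_test = reducer.transform(describe(test_images))
            knn = KNeighborsClassifier(n_neighbors=k).fit(z_train, train_labels)
            return knn.predict(z_test)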

  13. From 2D Silhouettes to 3D Object Retrieval: Contributions and Benchmarking

    Napoléon Thibault

    2010-01-01

    Full Text Available 3D retrieval has recently emerged as an important boost for 2D search techniques. This is mainly due to its several complementary aspects, for instance, enriching views in 2D image datasets, overcoming occlusion and serving in many real-world applications such as photography, art, archeology, and geolocalization. In this paper, we introduce a complete "2D photography to 3D object" retrieval framework. Given a (collection of) picture(s) or sketch(es) of the same scene or object, the method allows us to retrieve the underlying similar objects in a database of 3D models. The contribution of our method includes (i) a generative approach for alignment able to find canonical views consistently through scenes/objects and (ii) the application of an efficient but effective matching method used for ranking. The results are reported through the Princeton Shape Benchmark and the Shrec benchmarking consortium evaluated/compared by a third party. In the two gallery sets, our framework achieves very encouraging performance and outperforms the other runs.

  14. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subjected to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shapes of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion to the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine volume, superficial area, sphericity index and RBC refractive index for each osmotic condition.

  15. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Sorin M. Grigorescu

    2013-04-01

    Full Text Available In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch‐based tracking system and a 2D‐3D point features tracker. The real‐time visual information is composed of RGB‐D data streams acquired from state‐of‐the‐art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker‐based tracking system which delivers 3D information with respect to the position of the marker.

  16. Computation of Edge-Edge-Edge Events Based on Conicoid Theory for 3-D Object Recognition

    WU Chenye; MA Huimin

    2009-01-01

    The availability of a good viewpoint space partition is crucial in three dimensional (3-D) object recognition on the approach of aspect graph. There are two important events depicted by the aspect graph approach, edge-edge-edge (EEE) events and edge-vertex (EV) events. This paper presents an algorithm to compute EEE events by characteristic analysis based on conicoid theory, in contrast to current algorithms that focus too much on EV events and often overlook the importance of EEE events. Also, the paper provides a standard flowchart for the viewpoint space partitioning based on aspect graph theory that makes it suitable for perspective models. The partitioning result best demonstrates the algorithm's efficiency with more valuable viewpoints found with the help of EEE events, which can definitely help to achieve high recognition rate for 3-D object recognition.

  17. Local shape feature fusion for improved matching, pose estimation and 3D object recognition

    Buch, Anders Glent; Petersen, Henrik Gordon; Krüger, Norbert

    2016-01-01

    We provide new insights to the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state of the art features to systematic evaluations based on multiple datasets from different sources in a uniform manner … several feature matches with a limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, which is consistent over all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of … We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results, namely that the performance of the recognition system does not correlate well with the performance of …

  18. Architectural Reconstruction of 3D Building Objects through Semantic Knowledge Management

    Yucong, Duan; Cruz, Christophe; Nicolle, Christophe

    2010-01-01

    International audience This paper presents an ongoing research which aims at combining geometrical analysis of point clouds and semantic rules to detect 3D building objects. Firstly by applying a previous semantic formalization investigation, we propose a classification of related knowledge as definition, partial knowledge and ambiguous knowledge to facilitate the understanding and design. Secondly an empirical implementation is conducted on a simplified building prototype complying with t...

  19. Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia

    Bambang Eka Purnama; Lies Yulianto; Muga Linggar Famukhit; Maryono

    2013-01-01

    Pacitan has a wide range of tourism activities. One category of tourism in the district is Pacitan's historical attractions. These objects carry educational, historical and cultural value, and must be maintained and preserved as a tourism asset of Kabupaten Pacitan. However, these historical attractions are currently rarely visited, and some students do not understand the history of each of them. Hence, an information medium of 3D virtual interactive applications P...

  20. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  1. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  2. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values.
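
    As a rough illustration of the ART update used by such iterative reconstructors, the sketch below applies one Kaczmarz-style sweep over the rows of a system matrix; the projection geometry, relaxation factor and data are placeholders, not the simulator's actual implementation.

        import numpy as np

        def art_sweep(A, b, x, relax=0.25):
            """One ART (Kaczmarz) sweep: project x onto each measurement hyperplane in turn.

            A: (m, n) system matrix (ray/voxel intersection weights), b: (m,) projections,
            x: (n,) current image estimate, relax: relaxation factor (assumed value).
            """
            for i in range(A.shape[0]):
                a = A[i]
                denom = a @ a
                if denom > 0:
                    x = x + relax * (b[i] - a @ x) / denom * a
            return np.clip(x, 0, None)   # non-negativity constraint, common in tomography

        # Typical use: start from x = zeros(n_voxels) and repeat art_sweep for several iterations.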

  3. A Global Hypothesis Verification Framework for 3D Object Recognition in Clutter.

    Aldoma, Aitor; Tombari, Federico; Stefano, Luigi Di; Vincze, Markus

    2016-07-01

    Pipelines to recognize 3D objects despite clutter and occlusions usually end up with a final verification stage whereby recognition hypotheses are validated or dismissed based on how well they explain sensor measurements. Unlike previous work, we propose a Global Hypothesis Verification (GHV) approach which regards all hypotheses jointly so as to account for mutual interactions. GHV provides a principled framework to tackle the complexity of our visual world by leveraging a plurality of recognition paradigms and cues. Accordingly, we present a 3D object recognition pipeline deploying both global and local 3D features as well as shape and color. Thereby, and facilitated by the robustness of the verification process, diverse object hypotheses can be gathered and weak hypotheses need not be suppressed too early to trade sensitivity for specificity. Experiments demonstrate the effectiveness of our proposal, which significantly improves over the state of the art and attains ideal performance (no false negatives, no false positives) on three out of the six most relevant and challenging benchmark datasets. PMID:26485476

  4. A methodology for 3D modeling and visualization of geological objects

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  5. A methodology for 3D modeling and visualization of geological objects

    ZHANG LiQiang; TAN YuMin; KANG ZhiZhong; RUI XiaoPing; ZHAO YuanYuan; LIU Liu

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  6. Color and size interactions in a real 3D object similarity task.

    Ling, Yazhu; Hurlbert, Anya

    2004-08-31

    In the natural world, objects are characterized by a variety of attributes, including color and shape. The contributions of these two attributes to object recognition are typically studied independently of each other, yet they are likely to interact in natural tasks. Here we examine whether color and size (a component of shape) interact in a real three-dimensional (3D) object similarity task, using solid domelike objects whose distinct apparent surface colors are independently controlled via spatially restricted illumination from a data projector hidden to the observer. The novel experimental setup preserves natural cues to 3D shape from shading, binocular disparity, motion parallax, and surface texture cues, while also providing the flexibility and ease of computer control. Observers performed three distinct tasks: two unimodal discrimination tasks, and an object similarity task. Depending on the task, the observer was instructed to select the indicated alternative object which was "bigger than," "the same color as," or "most similar to" the designated reference object, all of which varied in both size and color between trials. For both unimodal discrimination tasks, discrimination thresholds for the tested attribute (e.g., color) were increased by differences in the secondary attribute (e.g., size), although this effect was more robust in the color task. For the unimodal size-discrimination task, the strongest effects of the secondary attribute (color) occurred as a perceptual bias, which we call the "saturation-size effect": Objects with more saturated colors appear larger than objects with less saturated colors. In the object similarity task, discrimination thresholds for color or size differences were significantly larger than in the unimodal discrimination tasks. We conclude that color and size interact in determining object similarity, and are effectively analyzed on a coarser scale, due to noise in the similarity estimates of the individual attributes

  7. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automati...

  8. Development of a system for 3D reconstruction of objects using passive computer vision methods

    Gec, Sandi

    2015-01-01

    The main goal of this master's thesis is to develop a system for reconstruction of 3D objects from colour images. The main focus is on passive computer vision methods, from which we select two, i.e., stereo vision and space carving. Both methods require information about camera poses. The camera pose for a given image is estimated from the information obtained by detecting a reference object, i.e., a standard A4 paper sheet. We develop an Android-based mobile application to guide a user during im...

  9. 3D high- and super-resolution imaging using single-objective SPIM.

    Galland, Remi; Grenci, Gianluca; Aravind, Ajay; Viasnoff, Virgile; Studer, Vincent; Sibarita, Jean-Baptiste

    2015-07-01

    Single-objective selective-plane illumination microscopy (soSPIM) is achieved with micromirrored cavities combined with a laser beam-steering unit installed on a standard inverted microscope. The illumination and detection are done through the same objective. soSPIM can be used with standard sample preparations and features high background rejection and efficient photon collection, allowing for 3D single-molecule-based super-resolution imaging of whole cells or cell aggregates. Using larger mirrors enabled us to broaden the capabilities of our system to image Drosophila embryos. PMID:25961414

  10. Creating of 3D map of temperature fields OKR at depths of around 1000 m

    Kajzar, Vlastimil; Pavelek, Z.

    Vol. 5. Ostrava: Ústav geoniky AV ČR, 2014 - (Koníček, P.; Souček, K.; Heroldová, N.). s. 91-92 ISBN 978-80-86407-49-4. [5th International Colloquium on Geomechanics and Geophysics. 24.06.2014-27.06.2014, Ostravice, Karolínka] Institutional support: RVO:68145535 Keywords : temperature field * rock massif * OKR * exploration * 3D map Subject RIV: DH - Mining, incl. Coal Mining

  11. Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras

    M. Martínez-Zarzuela

    2014-02-01

    Full Text Available This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing in this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which also becomes an issue for the distributed system TCP/IP communications. Since traffic volume is too high, 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress the color maps. Information from different sources is collected into a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to merge the skeletons detected locally by each Kinect conveniently, so that monitoring of people is robust to self and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.
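
    The abstract describes packing Kinect depth data into RGB images so that a standard video codec can compress it, but it does not give the exact encoding; the sketch below shows one simple, hypothetical packing of 16-bit depth values into two 8-bit channels and its inverse.

        import numpy as np

        def depth_to_rgb(depth16):
            """Pack a uint16 depth map into an RGB uint8 image (hypothetical layout)."""
            rgb = np.zeros(depth16.shape + (3,), dtype=np.uint8)
            rgb[..., 0] = (depth16 >> 8).astype(np.uint8)    # high byte
            rgb[..., 1] = (depth16 & 0xFF).astype(np.uint8)  # low byte
            return rgb                                        # third channel left unused

        def rgb_to_depth(rgb):
            return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)

    Note that a plain byte split is fragile under lossy codecs (a small error in the high-byte channel becomes a large depth error); practical encodings tend to spread depth more smoothly across the channels.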

  12. Fast and flexible 3D object recognition solutions for machine vision applications

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
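
    A minimal sketch of the primitive-fitting idea described above, assuming a NumPy point cloud: a RANSAC-style plane fit that samples three points, builds a candidate plane and keeps the one with the most inliers. Thresholds and iteration counts are placeholders, and the cylinder/cone fits of the actual system are not shown.

        import numpy as np

        def ransac_plane(points, n_iter=500, tol=1.0):
            """points: (N, 3) array. Returns (normal, d, inlier_mask) for the plane n·p + d = 0."""
            best = (None, None, np.zeros(len(points), dtype=bool))
            rng = np.random.default_rng(0)
            for _ in range(n_iter):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                if np.linalg.norm(n) < 1e-9:
                    continue                      # degenerate sample, skip
                n = n / np.linalg.norm(n)
                d = -n @ p0
                inliers = np.abs(points @ n + d) < tol
                if inliers.sum() > best[2].sum():
                    best = (n, d, inliers)
            return best

    A least-squares refit on the inliers (for example via SVD of the centred inlier cloud) would normally follow before using the primitive to infer the pose of the whole part.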

  13. Volumetric Next-best-view Planning for 3D Object Reconstruction with Positioning Error

    J. Irving Vasquez-Gomez

    2014-10-01

    Full Text Available Three-dimensional (3D) object reconstruction is the process of building a 3D model of a real object. This task is performed by taking several scans of an object from different locations (views). Due to the limited field of view of the sensor and the object’s self-occlusions, it is a difficult problem to solve. In addition, sensor positioning by robots is not perfect, making the actual view different from the expected one. We propose a next best view (NBV) algorithm that determines each view to reconstruct an arbitrary object. Furthermore, we propose a method to deal with the uncertainty in sensor positioning. The algorithm fulfills all the constraints of a reconstruction process, such as new information, positioning constraints, sensing constraints and registration constraints. Moreover, it improves the scan’s quality and reduces the navigation distance. The algorithm is based on a search-based paradigm where a set of candidate views is generated and then each candidate view is evaluated to determine which one is the best. To deal with positioning uncertainty, we propose a second stage which re-evaluates the views according to their neighbours, such that the best view is that which is within a region of the good views. The results of simulation and comparisons with previous approaches are presented.
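
    The abstract outlines a two-stage, search-based NBV loop: score candidate views, then re-score the best ones against their neighbours to favour views that remain good under positioning error. The sketch below is only a schematic of that control flow; the utility function and neighbourhood definition are placeholders, not the authors' actual criteria.

        import numpy as np

        def next_best_view(candidates, utility, neighbours, k_best=10):
            """candidates: list of view poses; utility(view) -> float combining new-information,
            positioning, sensing and registration terms (placeholder); neighbours(view) -> list
            of nearby poses used to model positioning uncertainty."""
            scores = np.array([utility(v) for v in candidates])
            top = np.argsort(scores)[-k_best:]                     # first stage: best raw scores
            # second stage: prefer views whose neighbourhood also scores well (robust to pose error)
            robust = [np.mean([utility(n) for n in neighbours(candidates[i])]) for i in top]
            return candidates[top[int(np.argmax(robust))]]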

  14. Recognition of 3-D objects based on Markov random field models

    HUANG Ying; DING Xiao-qing; WANG Sheng-jin

    2006-01-01

    The recognition of 3-D objects is quite a difficult task for computer vision systems. This paper presents a new object framework, which utilizes densely sampled grids with different resolutions to represent the local information of the input image. A Markov random field model is then created to model the geometric distribution of the object key nodes. Flexible matching, which aims to find the accurate correspondence map between the key points of two images, is performed by combining the local similarities and the geometric relations together using the highest confidence first method. Afterwards, a global similarity is calculated for object recognition. Experimental results on the Coil-100 object database, which consists of 7 200 images of 100 objects, are presented. When the numbers of templates vary from 4, 8, 18 to 36 for each object, and the remaining images compose the test sets, the object recognition rates are 95.75 %, 99.30 %, 100.0 % and 100.0 %, respectively. The excellent recognition performance is much better than those of the other cited references, which indicates that our approach is well-suited for appearance-based object recognition.

  15. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points at sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality, and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design a complete system taking care of all thinkable instances, now and in the future, within one constrained software design. On several occasions we have been advocating a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, where the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  16. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and creating a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
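
    One common way to discard blurred frames before SFM, in the spirit of the blur criterion mentioned above, is the variance of the Laplacian; the sketch below scores frames with OpenCV and keeps every n-th sufficiently sharp frame to thin the short-baseline sequence. The threshold and stride are placeholder values, not those of the paper.

        import cv2

        def select_frames(video_path, stride=15, blur_thresh=100.0):
            """Keep every `stride`-th frame whose Laplacian variance exceeds `blur_thresh`."""
            cap = cv2.VideoCapture(video_path)
            kept, idx = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if idx % stride == 0:
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                    if cv2.Laplacian(gray, cv2.CV_64F).var() > blur_thresh:
                        kept.append(frame)
                idx += 1
            cap.release()
            return kept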

  17. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning methods. Laser scan and photogrammetry are the two main methods used. For laser scan, a video camera and a laser source are necessary, and for photogrammetry, a high-resolution digital still camera is indispensable. In some 3D modeling tasks, the two methods are often integrated to get satisfactory results. Although many research works have been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scan system combining laser scan and photogrammetry using a single consumer digital camera is proposed. Nowadays there are many consumer digital cameras, such as the Canon EOS 5D Mark II, which usually offer more than 10-megapixel still photo recording and full 1080p HD movie recording, so an integrated scan system can be designed using such a camera. A square plate glued with coded marks is used to place the 3D objects, and two straight wood rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes can be obtained in this module. In the laser scan module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. The laser scan results in a dense point cloud which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusion of the feature points, rough volume and the dense point cloud. The design

  18. Polarizability of 2D and 3D conducting objects using method of moments

    Shahpari, Morteza; Lewis, Andrew

    2014-01-01

    Fundamental antenna limits of the gain-bandwidth product are derived from polarizability calculations. This electrostatic technique has significant value in many antenna evaluations. Polarizability is not available in closed form for most antenna shapes and no commercial electromagnetic packages have this facility. Numerical computation of the polarizability for arbitrary conducting bodies was undertaken using an unstructured triangular mesh over the surface of 2D and 3D objects. Numerical results compare favourably with analytical solutions and can be implemented efficiently for large structures of arbitrary shape.

  19. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    The visible light radiated by some high temperature objects (less than 1200 °C) almost lies in the red and infrared waves. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in this present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapping phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images filtered by a low-pass filter are used to calculate the fringe order. Consequently, the 3D shape of a high temperature object is obtained by the unwrapping phase and the calibration parameter matrixes of the DLP projector and 3-CCD camera. The experimental results show that the unwrapping phase is completely corrected with the filtering method by removing the high frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1 : 1000. (paper)
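
    For the four-step phase-shifting variant of PMP, the wrapped phase can be recovered directly from the blue-channel images; the sketch below assumes four B-channel frames with pi/2 phase shifts and uses scikit-image for 2D unwrapping. It is a generic PMP computation under those assumptions, not the authors' exact pipeline (which also includes specific low-pass filtering and projector/camera calibration).

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.restoration import unwrap_phase

        def pmp_phase(b0, b1, b2, b3, sigma=2.0):
            """b0..b3: blue-channel fringe images with phase shifts 0, pi/2, pi, 3*pi/2."""
            imgs = [gaussian_filter(b.astype(float), sigma) for b in (b0, b1, b2, b3)]
            wrapped = np.arctan2(imgs[3] - imgs[1], imgs[0] - imgs[2])   # four-step formula
            return unwrap_phase(wrapped)    # spatial unwrapping of the wrapped phase map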

  20. Calibration target reconstruction for 3-D vision inspection system of large-scale engineering objects

    Yin, Yongkai; Peng, Xiang; Guan, Yingjian; Liu, Xiaoli; Li, Ameng

    2010-11-01

    It is usually difficult to calibrate the 3-D vision inspection system that may be employed to measure large-scale engineering objects. One of the challenges is how to build up a large and precise calibration target in situ. In this paper, we present a calibration target reconstruction strategy to solve such a problem. First, we choose one of the engineering objects to be inspected as a calibration target and paste coded marks on its surface. Next, we locate and decode the marks to get homologous points. From multiple camera images, the fundamental matrix between adjacent images can be estimated, and then the essential matrix can be derived with a priori known camera intrinsic parameters and decomposed to obtain the camera extrinsic parameters. Finally, we are able to obtain the initial 3D coordinates with binocular stereo vision reconstruction, and then optimize them with bundle adjustment, taking lens distortions into account, leading to a high-precision calibration target. This reconstruction strategy has been applied to the inspection of an industrial project, from which the proposed method is successfully validated.
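
    The chain described above (fundamental matrix from point correspondences, essential matrix from known intrinsics, decomposition into extrinsics) maps onto standard OpenCV calls; the sketch below assumes matched coded-mark coordinates pts1/pts2 and an intrinsic matrix K, and omits the triangulation and final bundle adjustment.

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """pts1, pts2: (N, 2) float32 arrays of corresponding mark centres in two images."""
            F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
            E = K.T @ F @ K                                   # essential matrix from known intrinsics
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)    # decompose E into rotation and translation
            return R, t, F

    Triangulating the correspondences (for example with cv2.triangulatePoints) then gives the initial 3D coordinates that bundle adjustment refines.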

  1. 3D Objects Localization Using Fuzzy Approach and Hierarchical Belief Propagation: Application at Level Crossings

    Dufaux A

    2011-01-01

    Full Text Available Technological solutions for obstacle-detection systems have been proposed to prevent accidents in safety-transport applications. In order to avoid the limits of these proposed technologies, an obstacle-detection system utilizing stereo cameras is proposed to detect and localize multiple objects at level crossings. Background subtraction is first performed using the color independent component analysis technique, which has proved its performance against other well-known object-detection methods. The main contribution is the development of a robust stereo-matching algorithm which reliably localizes in 3D each segmented object. A standard stereo dataset and real-world images are used to test and evaluate the performances of the proposed algorithm to prove the efficiency and the robustness of the proposed video-surveillance system.
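
    The stereo-matching and 3D-localization step can be illustrated with OpenCV's semi-global block matcher on a rectified pair; this generic sketch is not the paper's own matcher, and the reprojection matrix Q is assumed to come from calibration (for example cv2.stereoRectify).

        import cv2
        import numpy as np

        def localize_object_3d(left_gray, right_gray, Q, object_mask):
            """Rectified grayscale pair, reprojection matrix Q, binary mask of one detected object."""
            sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
            disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
            points_3d = cv2.reprojectImageTo3D(disparity, Q)
            valid = (disparity > 0) & (object_mask > 0)
            return points_3d[valid].mean(axis=0)    # rough 3D centroid of the masked object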

  2. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene that can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited range of depth of field (DOF). In this Letter, we experimentally demonstrate a 3D integral-imaging endoscopy with tunable DOF by using a single large-aperture focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in a synthetic-aperture integral imaging 3D endoscope. In our experiments, the image plane in the integral imaging pickup process can be tuned from 18 to 38 mm continuously using a large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report on synthetic aperture integral imaging 3D endoscopy with a large-aperture LC lens that can provide high spatial resolution 3D imaging with an extended DOF. PMID:26258358

  3. 3D Skeleton model derived from Kinect Depth Sensor Camera and its application to walking style quality evaluations

    Kohei Arai

    2013-07-01

    Full Text Available Feature extraction for gait recognition has been studied widely. Approaches to this task are divided into two categories, model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters via modeling or tracking body components such as limbs, legs, arms and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes. Their advantage is a low computational cost compared to model-based approaches. However, they are usually not robust to viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors have an expensive price and can only be owned by big animation studios. Fortunately, the Kinect camera, equipped with a depth sensor, is now available on the market at a very low price compared to any mocap device. Of course its accuracy is not as good as the expensive devices, but with some preprocessing we can remove the jitter and noise in the 3D skeleton points. Our proposed method is a model-based feature extraction approach, which we call the 3D skeleton model. Using a 3D skeleton model to extract gait is itself new, considering that all previous models use 2D skeletons. Its advantage is obtaining accurate 3D coordinates for each skeleton point rather than only 2D points. We use Kinect to get the depth data and the Ipisoft mocap software to extract the 3D skeleton model from the Kinect video. The experimental results show 86.36% correctly classified instances using SVM.

  4. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately convey the depth of damaged areas or the movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used
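
    The frame-to-frame orientation correction described above (control points, transformation equation, re-projection of one image onto the other) corresponds to a standard homography warp; the sketch below assumes manually or automatically picked control points and uses OpenCV as an illustration rather than the WallView workflow itself.

        import cv2
        import numpy as np

        def coregister(moving_img, fixed_img, pts_moving, pts_fixed):
            """pts_*: (N, 2) float32 arrays of matching control points, N >= 4."""
            H, inliers = cv2.findHomography(pts_moving, pts_fixed, cv2.RANSAC, 3.0)
            h, w = fixed_img.shape[:2]
            # re-project the moving frame onto the fixed frame for stereo-pair viewing
            return cv2.warpPerspective(moving_img, H, (w, h))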

  5. Lapse-time dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-07-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: firstly, we evaluate the contribution of surface and body wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Secondly, we compare the lapse-time behavior in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.

  6. Thickness and clearance visualization based on distance field of 3D objects

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
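
    For a voxelized object, a CPU approximation of the distance-field-based thickness measure can be obtained with a Euclidean distance transform: twice the distance from an interior voxel to the nearest surface is a crude local thickness estimate. This is only a sketch of the idea on a binary volume, not the GPU algorithm described above.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def local_thickness(volume, voxel_size=1.0):
            """volume: 3D boolean array (True = inside the object)."""
            dist_to_surface = distance_transform_edt(volume, sampling=voxel_size)
            return 2.0 * dist_to_surface    # rough local thickness at interior voxels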

  7. Fabrication of 3D Templates Using a Large Depth of Focus Femtosecond Laser

    Li, Xiao-Fan; Winfield, Richard; O'Brien, Shane; Chen, Liang-Yao

    2009-09-01

    We report the use of a large depth of focus Bessel beam in the fabrication of cell structures. Two axicon lenses are investigated in the formation of high aspect ratio line structures. A sol-gel resin, with good mechanical strength, is polymerised in a modified two-photon polymerisation system. Examples of different two-dimensional grids are presented to show that the lateral resolution can be maintained even in the rapid fabrication of high-sided structures.

  8. Fabrication of 3D Templates Using a Large Depth of Focus Femtosecond Laser

    We report the use of a large depth of focus Bessel beam in the fabrication of cell structures. Two axicon lenses are investigated in the formation of high aspect ratio line structures. A sol-gel resin, with good mechanical strength, is polymerised in a modified two-photon polymerisation system. Examples of different two-dimensional grids are presented to show that the lateral resolution can be maintained even in the rapid fabrication of high-sided structures

  9. Active learning in the lecture theatre using 3D printed objects.

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  10. Active learning in the lecture theatre using 3D printed objects [version 2; referees: 2 approved

    David P. Smith

    2016-06-01

    Full Text Available The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  11. Fully integrated system-on-chip for pixel-based 3D depth and scene mapping

    Popp, Martin; De Coi, Beat; Thalmann, Markus; Gancarz, Radoslav; Ferrat, Pascal; Dürmüller, Martin; Britt, Florian; Annese, Marco; Ledergerber, Markus; Catregn, Gion-Pol

    2012-03-01

    We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS. This integration makes it possible to achieve a Quantum Efficiency (QE) of >80% in the full wavelength band from 520nm up to 900nm as well as very high timing precision in the sub-ns range which is needed for exact detection of the phase delay. The SoC features 8x8 pixels and includes all necessary sub-components such as ToF pixel array, voltage generation and regulation, non-volatile memory for configuration, LED driver for active illumination, digital SPI interface for easy communication, column based 12bit ADC converters, PLL and digital data processing with temporary data storage. The system can be operated at up to 100 frames per second.
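
    The phase-based ToF principle mentioned above converts a measured phase delay into range via d = c·Δφ/(4π·f_mod); a minimal worked example, with an assumed modulation frequency, is given below.

        import math

        C = 299_792_458.0    # speed of light, m/s

        def tof_distance(phase_rad, f_mod=20e6):
            """Range from phase delay; f_mod = 20 MHz is an assumed modulation frequency.
            The unambiguous range is c / (2 * f_mod), about 7.5 m at 20 MHz."""
            return C * phase_rad / (4.0 * math.pi * f_mod)

        # Example: a phase delay of pi/2 rad corresponds to roughly 1.87 m at 20 MHz.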

  12. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  13. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Feng Xu

    Full Text Available Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies.

  14. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent pending, 'floating 3-D, off-the-screen experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.

  15. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  16. Electromagnetic 3D subsurface imaging with source sparsity for a synthetic object

    Pursiainen, Sampsa

    2016-01-01

    This paper concerns electromagnetic 3D subsurface imaging in connection with sparsity of signal sources. We explored an imaging approach that can be implemented in situations that allow obtaining a large amount of data over a surface or a set of orbits but at the same time require sparsity of the signal sources. Characteristic to such a tomography scenario is that it necessitates the inversion technique to be genuinely three-dimensional: For example, slicing is not possible due to the low number of sources. Here, we primarily focused on astrophysical subsurface exploration purposes. As an example target of our numerical experiments we used a synthetic small planetary object containing three inclusions, e.g. voids, of the size of the wavelength. A tetrahedral arrangement of source positions was used, it being the simplest symmetric point configuration in 3D. Our results suggest that somewhat reliable inversion results can be produced within the present a priori assumptions, if the data can be recorded at a spe...

  17. Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia

    Bambang Eka Purnama

    2013-04-01

    Full Text Available Pacitan offers a wide range of tourism activities, among them the district's historical attractions. These sites carry educational, historical and cultural value and must be maintained and preserved as a tourism asset of Kabupaten Pacitan. However, the historical sites are rarely visited, and many students do not understand the history behind each of these attractions. An interactive 3D virtual information medium covering Pacitan's historical tourism was therefore produced in the form of an interactive CD application. The purpose of creating the interactive application is to introduce Pacitan's historical tours to students and the community, and to provide interactive information media giving an overview of the history of the existing tourist sites in Pacitan. The benefit of this research is that students and the public can learn the history of Pacitan's historical attractions; the application also serves as a medium for introducing the sites and for preserving information about them. In developing the 3D Virtual Interactive Application of Pacitan's historical attractions based on multimedia, the authors used library research, observation and interviews. The design was produced with 3ds Max 2010, Adobe Director 11.5, Adobe Photoshop CS3 and CorelDRAW. The result of this research is an interactive information medium that provides knowledge about the history of Pacitan.

  18. An alternative 3D inversion method for magnetic anomalies with depth resolution

    M. Chiappini

    2006-06-01

    Full Text Available This paper presents a new method to invert magnetic anomaly data in a variety of non-complex contexts when a priori information about the sources is not available. The region containing magnetic sources is discretized into a set of homogeneously magnetized rectangular prisms, polarized along a common direction. The magnetization distribution is calculated by solving an underdetermined linear system, and is accomplished through the simultaneous minimization of the norm of the solution and the misfit between the observed and the calculated field. Our algorithm makes use of a dipolar approximation to compute the magnetic field of the rectangular blocks. We show how this approximation, in conjunction with other correction factors, presents numerous advantages in terms of computing speed and depth resolution, and does not affect significantly the success of the inversion. The algorithm is tested on both synthetic and real magnetic datasets.
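
    The simultaneous minimization of the solution norm and the data misfit described above is, in its simplest generic form, a damped (Tikhonov-style) least-squares problem. The sketch below illustrates that trade-off on a toy underdetermined system; it is not the authors' algorithm, and the random matrix is only a stand-in for the dipolar forward operator.

        import numpy as np

        def damped_least_squares(G, d, lam):
            # Minimize ||G m - d||^2 + lam * ||m||^2 via the normal equations
            # (G^T G + lam I) m = G^T d.
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

        # Toy underdetermined example: 5 observations, 20 unknown block
        # magnetizations (all numbers are illustrative only).
        rng = np.random.default_rng(0)
        G = rng.normal(size=(5, 20))        # stand-in for the dipolar forward operator
        m_true = np.zeros(20); m_true[3] = 1.0
        d = G @ m_true
        m_est = damped_least_squares(G, d, lam=1e-3)
        print(np.allclose(G @ m_est, d, atol=1e-2))  # the data are fit despite non-uniqueness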

  19. WAVES GENERATED BY A 3D MOVING BODY IN A TWO-LAYER FLUID OF FINITE DEPTH

    ZHU Wei; YOU Yun-xiang; MIAO Guo-ping; ZHAO Feng; ZHANG Jun

    2005-01-01

    This paper is concerned with the waves generated by a 3-D body advancing beneath the free surface with constant speed in a two-layer fluid of finite depth. By applying Green's theorem, a layered integral equation system based on the Rankine source for the perturbed velocity potential generated by the moving body was derived within potential flow theory. A four-node isoparametric element method was used to solve the layered integral equation system. The surface and interface waves generated by a moving ball were calculated numerically. The results were compared with the analytical results for a moving source with constant velocity.

  20. An overview of 3D topology for LADM-based objects

    Zulkifli, N.A.; Rahman, A.A.; Van Oosterom, P.J.M.

    2015-01-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological mod

  1. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate th...

  2. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation process between the observed image and the bank of filters using a combination of data and task parallelism by taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that best matches the current view of the target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.
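
    At its core, matching an observed frame against a bank of templates reduces to cross-correlation followed by a peak search. The sketch below shows that basic step with FFT-based correlation; it omits the paper's adaptive, locally tuned filter design and the GPU parallelism, and the function names are illustrative.

        import numpy as np

        def correlate_fft(scene, template):
            # Circular cross-correlation of a scene with a template via the FFT.
            F = np.fft.fft2(scene)
            H = np.fft.fft2(template, s=scene.shape)
            return np.real(np.fft.ifft2(F * np.conj(H)))

        def best_pose(scene, templates):
            # Pick the template (e.g. one per object orientation) with the
            # highest correlation peak; return its index and the peak location.
            planes = [correlate_fft(scene, t) for t in templates]
            idx = int(np.argmax([p.max() for p in planes]))
            loc = np.unravel_index(np.argmax(planes[idx]), planes[idx].shape)
            return idx, loc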

  3. Ball-scale based hierarchical multi-object recognition in 3D medical images

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  4. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    Mul, Frits F M de; George, Nibu A; Qiu Qingjun; Rakhorst, Gerhard; Schutte, Harm K [Department of Biomedical Engineering BMSA, Faculty of Medicine, University Medical Center Groningen UMCG, University of Groningen, PO Box 196, 9700 AD Groningen (Netherlands)], E-mail: ffm@demul.net

    2009-07-07

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents, for the first time, a direct means of validation of these calculations. This may start a new field in vocal folds research.

  5. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents, for the first time, a direct means of validation of these calculations. This may start a new field in vocal folds research.

  6. Development of confocal 3D X-ray fluorescence instrument and its applications to micro depth profiling

    We have developed a confocal micro X-ray fluorescence instrument. Two independent X-ray tubes with Mo and Cr targets were installed in this instrument. Two polycapillary full X-ray lenses were attached to the two X-ray tubes, and a polycapillary half X-ray lens was also attached to the X-ray detector (silicon drift detector, SDD). Finally, the focus spots of the three lenses were adjusted to a common position. By using this confocal micro X-ray fluorescence instrument, depth profiling of layered samples was performed. It was found that the depth resolution depended on the energy of the measured X-ray fluorescence. In addition, X-ray elemental maps were determined at different depths for an agar sample including metal fragments of Cu, Ti and Au. The elemental maps showed actual distributions of metal fragments in the agar, indicating that confocal micro X-ray fluorescence is a feasible technique for non-destructive depth analysis and 3D X-ray fluorescence analysis. (author)

  7. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    Shouyi Yin; Hao Dong; Guangli Jiang; Leibo Liu; Shaojun Wei

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new method to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor....

  8. 3D Micro-PIXE at atmospheric pressure: A new tool for the investigation of art and archaeological objects

    The paper describes a novel experiment characterized by the development of a confocal geometry in an external Micro-PIXE set-up. The position of X-ray optics in front of the X-ray detector and its proper alignment with respect to the proton micro-beam focus provided the possibility of carrying out 3D Micro-PIXE analysis. As a first application, depth intensity profiles of the major elements that compose the patina layer of a quaternary bronze alloy were measured. A simulation approach of the 3D Micro-PIXE data deduced elemental concentration profiles in rather good agreement with corresponding results obtained by electron probe micro-analysis from a cross-sectioned patina sample. With its non-destructive and depth-resolving properties, as well as its feasibility in atmospheric pressure, 3D Micro-PIXE seems especially suited for investigations in the field of cultural heritage

  9. Software for Building Models of 3D Objects via the Internet

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  10. mEdgeBoxes: objectness estimation for depth image

    Fang, Zhiwen; Cao, Zhiguo; Xiao, Yang; Zhu, Lei; Lu, Hao

    2015-12-01

    Object detection is one of the most important research topics in computer vision. Recently, category-independent objectness in RGB images has been a hot field for its generalization ability and efficiency as a pre-filtering procedure for object detection. Many traditional applications have been transferred from RGB images to depth images since economical depth sensors, such as Kinect, were popularized. The depth data represent distance information. Because of this special characteristic, the methods of objectness evaluation in RGB images are often invalid in depth images. In this study, we propose mEdgeBoxes to evaluate objectness in depth images. Aside from detecting edges from the raw depth information, we extract another edge map from the orientation information based on the normal vector. The two kinds of edge map are integrated and fed to EdgeBoxes in order to produce the object proposals. The experimental results on two challenging datasets demonstrate that the detection rate of the proposed objectness estimation method can achieve over 90% with 1000 windows. It is worth noting that our approach generally outperforms the state-of-the-art methods on the detection rate.
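
    The orientation edge map mentioned above requires per-pixel surface normals. A minimal sketch of one common way to obtain both is given below; it assumes a plain depth array and finite-difference normals, since the record does not specify the authors' exact normal estimation.

        import numpy as np

        def normals_from_depth(depth):
            # Approximate per-pixel surface normals from a depth image
            # using finite differences.
            dzdy, dzdx = np.gradient(depth.astype(np.float64))
            n = np.dstack((-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)))
            return n / np.linalg.norm(n, axis=2, keepdims=True)

        def orientation_edges(depth):
            # Edge strength from changes in normal direction (1 - dot product of
            # neighbouring normals), so creases and object rims score high even
            # where the raw depth step is small.
            n = normals_from_depth(depth)
            dx = 1.0 - np.sum(n[:, 1:] * n[:, :-1], axis=2)
            dy = 1.0 - np.sum(n[1:, :] * n[:-1, :], axis=2)
            e = np.zeros(depth.shape)
            e[:, :-1] = np.maximum(e[:, :-1], dx)
            e[:-1, :] = np.maximum(e[:-1, :], dy)
            return e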

  11. Nanometer depth resolution in 3D topographic analysis of drug-loaded nanofibrous mats without sample preparation.

    Paaver, Urve; Heinämäki, Jyrki; Kassamakov, Ivan; Hæggström, Edward; Ylitalo, Tuomo; Nolvi, Anton; Kozlova, Jekaterina; Laidmäe, Ivo; Kogermann, Karin; Veski, Peep

    2014-02-28

    We showed that scanning white light interferometry (SWLI) can provide nanometer depth resolution in 3D topographic analysis of electrospun drug-loaded nanofibrous mats without sample preparation. The method permits rapidly investigating geometric properties (e.g. fiber diameter, orientation and morphology) and surface topography of drug-loaded nanofibers and nanomats. Electrospun nanofibers of a model drug, piroxicam (PRX), and hydroxypropyl methylcellulose (HPMC) were imaged. Scanning electron microscopy (SEM) served as a reference method. SWLI 3D images featuring a 29 nm by 29 nm active pixel size were obtained over a 55 μm × 40 μm area. The thickness of the drug-loaded non-woven nanomats was uniform, ranging from 2.0 μm to 3.0 μm (SWLI), and independent of the ratio between HPMC and PRX. The average diameters (n=100, SEM) for drug-loaded nanofibers were 387 ± 125 nm (HPMC and PRX 1:1), 407 ± 144 nm (HPMC and PRX 1:2), and 290 ± 100 nm (HPMC and PRX 1:4). We found advantages and limitations in both techniques. SWLI permits rapid non-contacting and non-destructive characterization of layer orientation, layer thickness, porosity, and surface morphology of electrospun drug-loaded nanofibers and nanomats. Such analysis is important because the surface topography affects the performance of nanomats in pharmaceutical and biomedical applications. PMID:24378328

  12. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been the topic of research in many areas of science for many years. This development is stimulated by new technologies and tools which have appeared recently, such as digital photography, laser scanners, increases in equipment efficiency and the Internet. The objective of this paper is to present results of automatic modeling of selected close range objects, with the use of digital photographs acquired by the Hasselblad H4D50 camera. The author's software tool was utilized for calculations; it performs the successive stages of 3D model creation. The modeling process was presented as a complete process which starts from acquisition of images and is completed by creation of a photorealistic 3D model in the same software environment. Experiments were performed for selected close range objects, with appropriately arranged image geometry, creating a ring around the measured object. The Area Based Matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were utilized for automatic matching of points detected with the SUSAN algorithm. Surface reconstruction for model generation is one of the important stages of 3D modeling. Reconstruction of precise surfaces, performed on the basis of a non-organized cloud of points acquired from automatic processing of digital images, is a difficult task which has not been finally solved. Creation of polygonal models which may meet high requirements concerning modeling and visualization is required in many applications. The polygonal method is usually the best way to precise representation of measurement results and, at the same time, to achieving the optimum description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the Ball pivoting method. Those methods are mostly applied to modeling of uniform grids of points. Results of experiments proved that incorrect

  13. 3D Visualization System for Tracking and Identification of Objects Project

    National Aeronautics and Space Administration — Photon-X has developed a proprietary EO spatial phase technology that can passively collect 3-D images in real-time using a single camera-based system. This...

  14. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-01

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. PMID:25966320

  15. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor v...

  16. Design of an object oriented and modular architecture for a naval tactical simulator using Delta3D's game manager

    Toledo-Ramirez, Rommel

    2006-01-01

    The author proposes an architecture based on the Dynamic Actor Layer and the Game Manager in Delta3D to create a Networked Virtual Environment which could be used to train Navy Officers in tactics, allowing team training and doctrine rehearsal. The developed architecture is based on Object Oriented and Modular Design principles, while it explores the flexibility and strength of the Game Manager features in Delta3D game engine. The implementation of the proposed architecture is planned to be...

  17. Accurate object tracking system by integrating texture and depth cues

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the risk of the drifting problem, which increases for textureless depth templates, an update mechanism is proposed that selects more precise tracking results to avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and has more accurate tracking results than other well-known algorithms.

  18. Digitising 3D surfaces of museum objects using photometric stereo-device

    Valach, Jaroslav; Vrba, David; Fíla, Tomáš; Bryscejn, Jan; Vavřík, Daniel

    Vol. 1. Dortmund: The LWL Industrial Museum Zeche Zollern, 2014 - (Bentkowska-Kafel, A.; Murphy, O.) ISSN 2409-9503. [From low-cost to high-tech. 3D-documentation in archaeology and monument preservation. Dortmund (DE), 16.10.2013-18.10.2013] R&D Projects: GA MK(CZ) DF11P01OVV001 Keywords : cultural heritage * 3D modelling * photometric stereo * surface topography documentation Subject RIV: AL - Art, Architecture, Cultural Heritage http://cosch.info/documents/10179/108557/2013_Denkmaeler+3D_Valach_Vrba_Fila+et+al.pdf/d7cf0a61-ddf4-41f4-a6d7-24fa172529c5

  19. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
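
    The DTW step mentioned above aligns an observed gesture trajectory with stored templates despite differences in execution speed. A generic sketch of the distance and a nearest-template classifier follows; it assumes trajectories are arrays of feature vectors and is not the authors' tuned implementation.

        import numpy as np

        def dtw_distance(a, b):
            # Dynamic time warping distance between two sequences of feature
            # vectors (e.g. hand positions), plain O(n*m) formulation.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return float(D[n, m])

        def classify(trajectory, templates):
            # Label an observed trajectory with the nearest stored template
            # (templates: dict mapping gesture name -> reference trajectory).
            return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))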

  20. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  1. Learning to Grasp Unknown Objects Based on 3D Edge Information

    Bodenhagen, Leon; Kraft, Dirk; Popovic, Mila;

    2010-01-01

    In this work we refine an initial grasping behavior based on 3D edge information by learning. Based on a set of autonomously generated evaluated grasps and relations between the semi-global 3D edges, a prediction function is learned that computes a likelihood for the success of a grasp using either an offline or an online learning scheme. Both methods are implemented using a hybrid artificial neural network containing standard nodes with a sigmoid activation function and nodes with a radial basis function. We show that a significant performance improvement can be achieved.

  2. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning, nor depth perception, was required. The effectiveness of the maximum compactness and the minimum surface constraints were measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases, in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410

  3. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    Javier Contreras

    2015-11-01

    Full Text Available A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
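
    Sheet-of-light triangulation, in its generic form, recovers a 3D point by intersecting the viewing ray of an illuminated pixel with the known laser plane. The sketch below assumes a calibrated pinhole camera (intrinsics K) and a laser plane n·X = d expressed in camera coordinates; it is not the authors' PSD-specific geometry, and the numbers are illustrative.

        import numpy as np

        def intersect_ray_with_plane(pixel, K, plane_n, plane_d):
            # Back-project the pixel to a viewing ray and scale it so that the
            # point lies on the laser plane (assumes the ray is not parallel
            # to the plane).
            ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
            s = plane_d / (plane_n @ ray)
            return s * ray                      # 3D point on the object surface

        # Example with assumed calibration values (not taken from the paper).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)    # laser plane normal
        print(intersect_ray_with_plane((400.0, 250.0), K, n, 0.5))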

  4. Hands-free Evolution of 3D-printable Objects via Eye Tracking

    Cheney, Nick; Clune, Jeff; Yosinski, Jason; Lipson, Hod

    2013-01-01

    Interactive evolution has shown the potential to create amazing and complex forms in both 2-D and 3-D settings. However, the algorithm is slow and users quickly become fatigued. We propose that the use of eye tracking for interactive evolution systems will both reduce user fatigue and improve evolutionary success. We describe a systematic method for testing the hypothesis that eye tracking driven interactive evolution will be a more successful and easier-to-use design method than traditional ...

  5. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Peter; Baumbach, Tilo; Cloetens, Peter

    2014-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome...

  6. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    Feng Xu; Lukas Helfen; Heikki Suhonen; Dan Elgrabli; Sam Bayat; Péter Reischig; Tilo Baumbach; Peter Cloetens

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome...

  7. Real time moving object detection using motor signal and depth map for robot car

    Wu, Hao; Siu, Wan-Chi

    2013-12-01

    Moving object detection from a moving camera is a fundamental task in many applications. For the vision of a moving robot car, the background movement has a 3D motion structure in nature. In this situation, conventional moving object detection algorithms cannot be used to handle the 3D background modeling effectively and efficiently. In this paper, a novel scheme is proposed that utilizes the motor control signal and the depth map obtained from a stereo camera to model the perspective transform matrix between different frames under a moving camera. In our approach, the coordinate relationship between frames during camera motion is modeled by a perspective transform matrix which is obtained by using the current motor control signals and the pixel depth value. Hence, a static background pixel and its apparent motion corresponding to the camera motion can be related by a perspective matrix. To enhance the robustness of classification, we allowed a tolerance range during the perspective transform matrix prediction and used multiple reference frames to classify the pixels of the current frame. The proposed scheme has been found to be able to detect moving objects for our moving robot car efficiently. Different from conventional approaches, our method can model the moving background in a 3D structure, without online model training. More importantly, the computational complexity and memory requirement are low, making it possible to implement this scheme in real time, which is particularly valuable for a robot vision system.
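
    The motion compensation described above amounts to reprojecting each background pixel of the previous frame using the known camera motion and its depth, then comparing intensities against a tolerance. A textbook sketch of that reprojection and masking step follows; the intrinsics K, the motion (R, t) derived from the motor commands, and the tolerance are all assumed inputs for illustration.

        import numpy as np

        def warp_point(u, v, z, K, R, t):
            # Predict where a static pixel (u, v) with depth z in the previous
            # frame lands in the current frame, given intrinsics K and the
            # inter-frame rigid motion (R, t).
            p = np.linalg.inv(K) @ np.array([u, v, 1.0]) * z   # back-project to 3D
            q = K @ (R @ p + t)                                # move and re-project
            return q[0] / q[2], q[1] / q[2]

        def moving_mask(prev, curr, depth, K, R, t, tol=20.0):
            # Flag pixels of the current frame that do not match the
            # motion-compensated previous frame.
            h, w = curr.shape
            mask = np.zeros((h, w), dtype=bool)
            for v in range(h):
                for u in range(w):
                    x, y = warp_point(u, v, depth[v, u], K, R, t)
                    xi, yi = int(round(x)), int(round(y))
                    if 0 <= xi < w and 0 <= yi < h:
                        mask[yi, xi] = abs(float(curr[yi, xi]) - float(prev[v, u])) > tol
            return mask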

  8. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  9. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Maurizio Muzzupappa

    2013-08-01

    Full Text Available In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  10. Object data mining and analysis on 3D images of high precision industrial CT

    There are some areas of interest in 3D images from high precision industrial CT, such as defects caused during the production process. In order to analyze these areas closely, the image processing software Amira was used on the data of a particular workpiece sample to perform defect segmentation and display, defect measurement, evaluation and documentation. A data set obtained by scanning a vise sample using the lab CT system was analyzed, and the results turned out to be fairly good. (authors)

  11. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in the last years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions like object movement or a complex shaping, most methods become unsatisfying. Thus, the 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation" which enables a motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. Concluding, an overview of current and future fields of investigation is given.

  12. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed, and must be compensated for high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), with new experimental investigations on the effect of length measurement errors.

  13. 3D phase micro-object studies by means of digital holographic tomography supported by algebraic reconstruction technique

    Bilski, B. J.; Jozwicka, A.; Kujawinska, M.

    2007-09-01

    Constant development of microelements' technology requires the creation of new instruments to determine their basic physical parameters in 3D. The most efficient non-destructive method providing 3D information is tomography. In this paper we present Digital Holographic Tomography (DHT), in which input data is provided by means of Digital Holography (DH). The main advantage of DH is the capability to capture several projections with a single hologram [1]. However, these projections have an uneven angular distribution and their number is significantly limited. Therefore the Algebraic Reconstruction Technique (ART), where a few phase projections may be sufficient for proper 3D phase reconstruction, is implemented. The error analysis of the method and its additional limitations due to the shape and dimensions of the investigated object are presented. Finally, the results of applying ART to the DHT method are also presented on data reconstructed from a numerically generated hologram of a multimode fibre.
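
    ART itself is a row-action (Kaczmarz-type) iteration: each projection measurement contributes one linear equation, and the current estimate is repeatedly corrected towards the hyperplane of that equation. A generic sketch is shown below; it is not the authors' DHT-specific implementation, and the relaxation factor is an assumed parameter.

        import numpy as np

        def art(A, b, iters=10, relax=1.0):
            # A: projection matrix (one row per measurement), b: measured data.
            # Returns an estimate of the object after a number of Kaczmarz sweeps.
            x = np.zeros(A.shape[1])
            row_norms = np.sum(A * A, axis=1)
            for _ in range(iters):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x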

  14. A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision

    Zwinderman, Matthijs; Rybski, Paul E.; Kootstra, Gert

    2010-01-01

    In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it using a laser pointer. The robot recognizes the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D positions of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views...

  15. Technology of 3D map creation for 'Ukrytie' object internal premises

    The results of developing the main components of an information technology for mapping the internal rooms of the 'Ukryttia' object are presented, based on digital stereo-photogrammetric processing of the imaging results. It is shown that sufficiently high accuracy is reached for the mutual orientation of the images and for the reconstruction of individual objects in the 'Ukryttia' object rooms. The mean relative error in determining the spatial dimensions of objects was 6%. The feasibility of using the proposed technology for practical mapping of the 'Ukryttia' object rooms is demonstrated. Maps created with the proposed technology can be presented as three-dimensional models in the AutoCAD system for subsequent use.

  16. Model-based recognition of 3-D objects by geometric hashing technique

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using a rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
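
    Geometric hashing precomputes, for every choice of basis features in each model, the invariant coordinates of the remaining features and stores votes in a hash table; at recognition time the same coordinates computed from scene features retrieve candidate (model, basis) pairs. A point-feature sketch of the offline stage is given below purely for illustration; the paper itself hashes line features extracted from 2-D views, and the quantization step is an assumed parameter.

        import numpy as np
        from collections import defaultdict

        def hash_key(coords, q=0.25):
            # Quantize invariant coordinates into a hash-table bin.
            return (int(round(coords[0] / q)), int(round(coords[1] / q)))

        def build_table(models):
            # models: dict mapping model name -> list of 2D feature points.
            # Every ordered pair of points defines a basis; all other points
            # are expressed in that basis and vote for (model, basis).
            table = defaultdict(list)
            for name, pts in models.items():
                pts = np.asarray(pts, dtype=float)
                for i in range(len(pts)):
                    for j in range(len(pts)):
                        if i == j:
                            continue
                        origin, ex = pts[i], pts[j] - pts[i]
                        ey = np.array([-ex[1], ex[0]])          # perpendicular axis
                        B = np.linalg.inv(np.column_stack((ex, ey)))
                        for k, p in enumerate(pts):
                            if k not in (i, j):
                                table[hash_key(B @ (p - origin))].append((name, (i, j)))
            return table

        # Toy model with four coplanar points (coordinates are illustrative).
        print(len(build_table({"box": [(0, 0), (2, 0), (2, 1), (0, 1)]})))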

  17. VIRO 3D: fast three-dimensional full-body scanning for humans and other living objects

    Stein, Norbert; Minge, Bernhard

    1998-03-01

    The development of a family of partial and whole body scanners provides a complete technology for fully three-dimensional and contact-free scans of human bodies or other living objects within seconds. This paper gives insight into the design and the functional principles of the whole body scanner VIRO 3D operating on the basis of the laser split-beam method. The arrangement of up to 24 camera/laser combinations, dividing the area into different camera fields, and an all-around sensor configuration travelling in the vertical direction allow a complete 360-degree scan of an object within 6-20 seconds. Due to a special calibration process the different sensors are matched and the measured data are combined. Up to 10 million 3D measuring points with a resolution of approximately 1 mm are processed in all coordinate axes to generate a 3D model. By means of high-performance processors in combination with real-time image processing chips the image data from almost any number of sensors can be recorded and evaluated synchronously in video real-time. VIRO 3D scanning systems have already been successfully implemented in various applications and will open up new perspectives in various other fields, ranging from industry, orthopaedic medicine and plastic surgery to art and photography.

  18. Representing Objects using Global 3D Relational Features for Recognition Tasks

    Mustafa, Wail

    2015-01-01

    In robotic systems, visual interpretations of the environment compose an essential element in a variety of applications, especially those involving manipulation of objects. Interpreting the environment is often done in terms of recognition of objects using machine learning approaches. For user...... representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... to initiate higher-level semantic interpretations of complex scenes. In the object category recognition task, we present a system that is capable of assigning multiple and nested categories for novel objects using a method developed for this purpose. Integrating this method with other multi-label learning...

  19. Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts

    Campbell, N. D. F.; Vogiatzis, G.; Hernández, C.; Cipolla, R.

    2007-01-01

    We propose an algorithm for automatically obtaining a segmentation of a rigid object in a sequence of images that are calibrated for camera pose and intrinsic parameters. Until recently, the best segmentation results have been obtained by interactive methods that require manual labelling of image regions. Our method requires no user input but instead relies on the camera fixating on the object of interest during the sequence. We begin by learning a model of the object's colour, from the imag...

  20. Holographic microscopy reconstruction in both object and image half spaces with undistorted 3D grid

    Verrier, Nicolas; Tessier, Gilles; Gross, Michel

    2015-01-01

    We propose a holographic microscopy reconstruction method which propagates the hologram, in the object half space, in the vicinity of the object. The calibration yields reconstructions with an undistorted reconstruction grid, i.e. with orthogonal x, y and z axes and a constant pixel pitch. The method is validated with a USAF target imaged by a 60x microscope objective, whose holograms are recorded and reconstructed for different USAF locations along the longitudinal axis: -75 to +75 µm. Since the reconstruction numerical phase mask, the reference phase curvature and the MO form an afocal device, the reconstruction can be interpreted as occurring equivalently in the object or in the image half space.

  1. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent;

    2014-01-01

    visual appearance. Visual appearance can be problematic due to imaging distortions, but the assumption that local shape structures are sufficient to recognise objects and scenes is largely invalid in practice since objects may have similar shape, but different texture (e.g., grocery packages). In this work...

  2. Artificial Vision in 3D Perspective. For Object Detection On Planes, Using Points Clouds.

    Catalina Alejandra Vázquez Rodriguez

    2014-02-01

    Full Text Available In this paper we describe an artificial vision algorithm for the robot Golem-II+ that analyzes the robot's environment to detect planes and objects in the scene from point clouds captured with a Kinect device, estimating the possible objects and their number, distance and other characteristics. The resulting clusters are then grouped to identify whether they lie on the same surface, in order to calculate the distance and the slope of the planes relative to the robot. Finally, each object is analyzed separately to determine whether it can be grasped and whether empty surfaces can receive objects, as long as they are within a feasible distance, while ignoring false positives such as the walls and floor, which are not of interest for these purposes since it is not possible to place objects on the walls and the floor is out of range of the robot's arms.
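
    Detecting the support planes in a Kinect point cloud, as described above, is commonly done with a RANSAC-style plane fit. A generic sketch follows; it is not necessarily the authors' implementation, and the distance threshold and iteration count are assumed values.

        import numpy as np

        def ransac_plane(points, iters=200, tol=0.02, rng=None):
            # points: (N, 3) array of 3D points in metres. Returns the plane
            # (unit normal n, offset d with n.p = d) supported by most inliers.
            rng = rng or np.random.default_rng(0)
            best_n, best_d, best_count = None, 0.0, -1
            for _ in range(iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-9:                  # degenerate (collinear) sample
                    continue
                n /= norm
                d = n @ p0
                count = int(np.sum(np.abs(points @ n - d) < tol))
                if count > best_count:
                    best_n, best_d, best_count = n, d, count
            return best_n, best_d, best_count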

  3. Nanometre scale 3D nanomechanical imaging of semiconductor structures from few nm to sub-micrometre depths

    Kolosov, Oleg; Dinelli, Franco; Robson, Alexander; Krier, Anthony; Hayne, Manus; Falko, Vladimir; Henini, M

    2015-01-01

    Multilayer structures of active semiconductor devices (1), novel memories (2) and semiconductor interconnects are becoming increasingly three-dimensional (3D), with a simultaneous decrease of dimensions down to the few-nanometre length scale (3). The ability to test and explore these 3D nanostructures with nanoscale resolution is vital for the optimization of their operation and for improving the manufacturing processes of new semiconductor devices. While electron and scanning probe microscopes (SPMs) can ...

  4. Controlled experimental study depicting moving objects in view-shared time-resolved 3D MRA.

    Mostardi, Petrice M; Haider, Clifton R; Rossman, Phillip J; Borisch, Eric A; Riederer, Stephen J

    2009-07-01

    Various methods have been used for time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of three-dimensional (3D) time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested using view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  5. Controlled Experimental Study Depicting Moving Objects in View-Shared Time-Resolved 3D MRA

    Mostardi, Petrice M.; Haider, Clifton R.; Rossman, Phillip J.; Borisch, Eric A.; Riederer, Stephen J.

    2010-01-01

    Various methods have been used for time-resolved contrast-enhanced MRA (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of 3D time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested, which use view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  6. A Method of Calculating the 3D Coordinates on a Micro Object in a Virtual Micro-Operation System

    2001-01-01

    A simple method for calculating the 3D coordinates of points on a micro object in a multi-camera system is proposed. It simplifies the algorithms used in traditional computer vision systems by eliminating the calculation of the CCD (charge-coupled device) camera parameters and the relative position between cameras, and by using solid geometry in the calculation procedures instead of computations with complex matrices. The algorithm was used in research on generating a virtual magnified 3D image of a micro object to be operated on in a micro-operation system, and satisfactory results were obtained. The application in a virtual tele-operation system for a dexterous mechanical gripper is under test.

  7. Tracking of Multiple objects Using 3D Scatter Plot Reconstructed by Linear Stereo Vision

    Safaa Moqqaddem

    2014-10-01

    Full Text Available This paper presents a new method for tracking objects using stereo vision with linear cameras. Edge points extracted from the stereo linear images are first matched to reconstruct points that represent the objects in the scene. To detect the objects, a clustering process based on a spectral analysis is then applied to the reconstructed points. The obtained clusters are finally tracked through their centers of gravity using a Kalman filter and a Nearest Neighbour based data association algorithm. Experimental results using real stereo linear images are shown to demonstrate the effectiveness of the proposed method for obstacle tracking in front of a vehicle.
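    A minimal, hypothetical sketch of the tracking stage described above: cluster centres of gravity are followed with a constant-velocity Kalman filter and nearest-neighbour data association. The state layout, noise levels and toy detections are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def constant_velocity_model(dt=1.0, q=1e-2, r=1e-1):
            F = np.array([[1, 0, dt, 0],
                          [0, 1, 0, dt],
                          [0, 0, 1, 0],
                          [0, 0, 0, 1]], float)   # state: (x, y, vx, vy)
            H = np.array([[1, 0, 0, 0],
                          [0, 1, 0, 0]], float)   # only the centroid position is observed
            return F, H, q * np.eye(4), r * np.eye(2)

        def kf_step(x, P, z, F, H, Q, R):
            x, P = F @ x, F @ P @ F.T + Q                      # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)                            # update with associated centroid
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        def nearest_neighbour(pred_xy, detections):
            return int(np.argmin(np.linalg.norm(detections - pred_xy, axis=1)))

        F, H, Q, R = constant_velocity_model()
        x, P = np.zeros(4), np.eye(4)
        frames = [np.array([[0.1, 0.0], [5.0, 5.0]]),          # detected cluster centroids
                  np.array([[0.2, 0.1], [5.1, 5.2]])]
        for detections in frames:
            j = nearest_neighbour((F @ x)[:2], detections)     # data association
            x, P = kf_step(x, P, detections[j], F, H, Q, R)
        print(x[:2])                                           # tracked object position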

  8. Robust 3D Objects Localization using Hierarchical Belief Propagation in Real World Environment

    Fakhfakh, N.; Khoudour, L.; El-Koursi, Em; BRUYELLE, JL; Dufaux, A.; Jacot, J.

    2010-01-01

    Technological solutions for obstacle detection systems have been proposed to prevent accidents in safety transport applications. In order to avoid the limits of these proposed technologies an obstacle detection system utilizing stereo cameras is proposed to detect and localize multiple objects at level crossings. A background subtraction module is first performed using the Color Independent Component Analysis (CICA) technique, which has proved its performance against other well-known object d...

  9. Preliminary 3D depth migration of a network of 2D seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    Frary, R.; Louie, J. [UNR; Pullammanappallil, S. [Optim; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  10. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR and became very popular during the last decade. Natural and man-made objects of cities such as trees and buildings are complex structures, and automatic recognition and reconstruction of these objects from digital aerial images, but also from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' by using aerial colour images of an urban area of the town of Engen in Germany.

  11. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  12. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work. PMID:21711051

  13. Simulation-aided optimization of detector design using portable representation of 3D objects

    Use of the Standard Tessellation Language (STL) for automatic transport of CAD geometry into Geant is presented. The hybrid approach of combining Geant native and STL objects is preferred. The tradeoffs between the CPU cost of the simulation and the accuracy of tessellation are discussed

  14. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor

    J. Fuentes‐Pacheco

    2009-12-01

    Full Text Available This paper presents a binocular eye‐to‐hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client‐server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.

  15. Retrieval of 3D-Position of a Passive Object Using Infrared LED's and Photodiodes

    Christensen, Henrik Vie

    2005-01-01

    intensity of the light reflected by the object is measured by the receivers. The emitter/receiver pairs are fixed in position in a 2D plane. A model of the light reflections from IR-emitters to IR-receivers is used to determine the position of a ball using a Nelder-Mead simplex algorithm. Laboratory...
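    A minimal sketch of the estimation step mentioned above: the ball position is found by minimizing the mismatch between measured receiver intensities and a reflection model, using SciPy's Nelder-Mead simplex. The inverse-square reflection model and the sensor layout are illustrative assumptions, not the paper's calibrated model.

        import numpy as np
        from scipy.optimize import minimize

        # emitter/receiver pairs fixed in the z = 0 plane (positions are assumed)
        emitters  = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])
        receivers = np.array([[0.1, 0.0, 0.0], [0.3, 0.0, 0.0], [0.1, 0.2, 0.0]])

        def model_intensities(pos):
            # toy model: reflected intensity falls off with the squared emitter-ball
            # and ball-receiver distances
            de = np.linalg.norm(emitters - pos, axis=1)
            dr = np.linalg.norm(receivers - pos, axis=1)
            return 1.0 / (de**2 * dr**2 + 1e-9)

        true_pos = np.array([0.15, 0.10, 0.25])          # ball above the sensor plane
        measured = model_intensities(true_pos)           # stand-in for sensor readings

        cost = lambda p: np.sum((model_intensities(p) - measured) ** 2)
        res = minimize(cost, x0=[0.1, 0.1, 0.2], method='Nelder-Mead')
        print(res.x)   # recovered position (the z > 0 start excludes the mirror solution)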

  16. THREE-IMAGE MATCHING FOR 3-D LINEAR OBJECT TRACKING

    2000-01-01

    This paper will discuss strategies for trinocular image rectification and matching for linear object tracking. It is well known that a pair of stereo images generates two epipolar images. Three overlapped images can yield six epipolar images in situations where any two are required to be rectified for the purpose of image matching. In this case, the search for feature correspondences is computationally intensive and matching complexity increases. A special epipolar image rectification for three stereo images, which simplifies the image matching process, is therefore proposed. This method generates only three rectified images, with the result that the search for matching features becomes more straightforward. With the three rectified images, a particular line-segment-based correspondence strategy is suggested. The primary characteristics of the feature correspondence strategy include application of specific epipolar geometric constraints and reference to three-ray triangulation residuals in object space.
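    A minimal illustrative sketch of the three-ray triangulation residual referred to above: a candidate correspondence across the three rectified views defines three rays in object space; the least-squares intersection point and its RMS distance to the rays measure how consistent the correspondence is. Camera centres and rays below are synthetic assumptions.

        import numpy as np

        def triangulate_rays(origins, directions):
            # solve sum_i (I - d_i d_i^T) P = sum_i (I - d_i d_i^T) c_i for the point P
            A, b = np.zeros((3, 3)), np.zeros(3)
            units = [d / np.linalg.norm(d) for d in directions]
            for c, d in zip(origins, units):
                M = np.eye(3) - np.outer(d, d)
                A += M
                b += M @ c
            P = np.linalg.solve(A, b)
            resid = [np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (P - c))
                     for c, d in zip(origins, units)]
            return P, float(np.sqrt(np.mean(np.square(resid))))

        origins = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])   # three camera centres
        target  = np.array([0.5, 0.5, 5.0])
        P, rms = triangulate_rays(origins, target - origins)
        print(P, rms)   # a small residual indicates a consistent three-view correspondence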

  17. Spatio-Temporal Video Object Segmentation via Scale-Adaptive 3D Structure Tensor

    Hai-Yun Wang

    2004-06-01

    Full Text Available To address multiple motions and deformable objects' motions encountered in existing region-based approaches, an automatic video object (VO) segmentation methodology is proposed in this paper by exploiting the duality of image segmentation and motion estimation such that spatial and temporal information can assist each other to jointly yield much improved segmentation results. The key novelties of our method are (1) scale-adaptive tensor computation, (2) spatial-constrained motion mask generation without invoking dense motion-field computation, (3) rigidity analysis, (4) motion mask generation and selection, and (5) motion-constrained spatial region merging. Experimental results demonstrate that these novelties jointly contribute to much more accurate VO segmentation in both the spatial and temporal domains.
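    A minimal sketch of the spatio-temporal structure tensor underlying the method above, at a single fixed smoothing scale (the paper's scale adaptation, rigidity analysis and mask generation are omitted). Parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def structure_tensor_3d(video, grad_sigma=1.0, window_sigma=2.0):
            # video: (T, H, W) grey-level sequence; derivatives along t, y and x
            grads = [gaussian_filter(video, grad_sigma, order=o)
                     for o in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
            J = np.empty(video.shape + (3, 3))
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = gaussian_filter(grads[i] * grads[j], window_sigma)
            return J   # per-voxel 3x3 tensor; its eigenvalues indicate motion/texture structure

        video = np.random.rand(8, 64, 64)           # stand-in for a short grey-level sequence
        eigvals = np.linalg.eigvalsh(structure_tensor_3d(video))
        print(eigvals.shape)                        # (8, 64, 64, 3)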

  18. Multi sensor fusion of camera and 3D laser range finder for object recognition

    Klimentjew, Denis; Hendrich, Norman; Zhang, Jianwei

    2010-01-01

    This paper proposes multi sensor fusion based on an effective calibration method for a perception system designed for mobile robots and intended for later object recognition. The perception system consists of a camera and a three-dimensional laser range finder. The three-dimensional laser range finder is based on a two-dimensional laser scanner and a pan-tilt unit as a moving platform. The calibration permits the coalescence of the two most important sensors for three-dim...

  19. Real time object recognition and tracking using 2D/3D images

    Ghobadi, Seyed Eghbal

    2010-01-01

    Object recognition and tracking are the main tasks in computer vision applications such as safety, surveillance, human-robot-interaction, driving assistance system, traffic monitoring, remote surgery, medical reasoning and many more. In all these applications the aim is to bring the visual perception capabilities of the human being into the machines and computers. In this context many significant researches have recently been conducted to open new horizons in computer vision by...

  20. A Morphological Analysis of Audio Objects and their Control Methods for 3D Audio

    Mathew, Justin; Huot, Stéphane; Blum, Alan

    2014-01-01

    International audience Recent technological improvements in audio reproduction systems increased the possibilities to spatialize sources in a listening environment. The spatialization of reproduced audio is highly dependent on the recording technique, the rendering method, and the loudspeaker configuration. While object-based audio production reduces this dependency on loudspeaker configurations, related authoring tools are still difficult to interact with. In this paper, we investigate th...

  1. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Chellappa R

    2008-01-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The "shape filter" has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For a complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.
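    A minimal generic bootstrap particle-filter sketch for a 1D state, illustrating the predict/weight/resample cycle the abstract builds on; the paper's shape-encoded image likelihood and Zakai-equation weighting are replaced here by a toy Gaussian measurement model, so this is not the authors' tracker.

        import numpy as np
        rng = np.random.default_rng(0)

        N = 500
        particles = rng.normal(0.0, 1.0, N)          # initial state hypotheses
        weights = np.full(N, 1.0 / N)

        def pf_step(particles, weights, z, motion_std=0.1, meas_std=0.2):
            particles = particles + rng.normal(0.0, motion_std, particles.shape)   # predict
            weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)   # weight
            weights = weights / weights.sum()
            if 1.0 / np.sum(weights ** 2) < len(particles) / 2:                    # resample
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights

        for z in [0.1, 0.3, 0.5, 0.6]:               # toy measurement sequence
            particles, weights = pf_step(particles, weights, z)
        print(np.sum(particles * weights))           # posterior-mean state estimate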

  2. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    R. Chellappa

    2008-03-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The “shape filter” has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For a complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.

  3. An object-based approach to image/video-based synthesis and processing for 3-D and multiview televisions

    Chan, SC; Ng, KT; Ho, KL; Gan, ZF; Shum, HY

    2009-01-01

    This paper proposes an object-based approach to a class of dynamic image-based representations called "plenoptic videos," where the plenoptic video sequences are segmented into image-based rendering (IBR) objects each with its image sequence, depth map, and other relevant information such as shape and alpha information. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. Moreover, the rendering...

  4. Visual retrieval of known objects using supplementary depth data

    Śluzek, Andrzej

    2016-06-01

    A simple modification of typical content-based visual information retrieval (CBVIR) techniques (e.g. MSER keypoints represented by SIFT descriptors quantized into sufficiently large vocabularies) is discussed and preliminarily evaluated. By using the approximate depths (as the supplementary data) of the detected keypoints, we can significantly improve credibility of keypoint matching so that known objects (i.e. objects for which exemplary images are available in the database) can be detected at low computational costs. Thus, the method can be particularly useful in real-time applications of machine vision systems (e.g. in intelligent robotic devices). The paper presents theoretical model of the method and provides exemplary results for selected scenarios.
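    A minimal illustrative sketch (one possible variant, not the paper's exact scheme) of using keypoint depths as supplementary data: after standard ratio-test descriptor matching, matches whose query-keypoint depth deviates strongly from the median depth of the other matches are rejected, cheaply pruning false correspondences. The random arrays stand in for real detector, descriptor and depth output.

        import numpy as np

        def ratio_test_matches(desc_q, desc_db, ratio=0.8):
            matches = []
            for i, d in enumerate(desc_q):
                dist = np.linalg.norm(desc_db - d, axis=1)
                j1, j2 = np.argsort(dist)[:2]
                if dist[j1] < ratio * dist[j2]:          # Lowe-style ratio test
                    matches.append((i, int(j1)))
            return matches

        def prune_by_depth(matches, depth_q, rel_tol=0.2):
            if not matches:
                return matches
            depths = np.array([depth_q[i] for i, _ in matches])
            med = np.median(depths)
            # keep matches whose depth is consistent with the bulk of the matched object
            return [m for m, z in zip(matches, depths) if abs(z - med) / med < rel_tol]

        desc_q, desc_db = np.random.rand(60, 128), np.random.rand(300, 128)
        depth_q = np.random.uniform(0.5, 3.0, 60)
        kept = prune_by_depth(ratio_test_matches(desc_q, desc_db), depth_q)
        print(len(kept))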

  5. Prototyping a sensor enabled 3D citymodel on geospatial managed objects

    Kjems, Erik; Kolář, Jan

    2013-01-01

    one constraint software design complex. On several occasions we have been advocating for a new and advanced formulation of real world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily...... software development approach based on major software packages and traditional data exchange. The data stream is varying from urban domain to urban domain and from system to system, which is why it is almost impossible to design a complete system taking care of all thinkable instances now and in the future within...... by the Norwegian Research Council where the concept of GMO's have been applied in various situations on various running platforms of an urban system. The paper will be focusing on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focusing on prototyping...

  6. A 3D City Model with Dynamic Behaviour Based on Geospatial Managed Objects

    Kjems, Erik; Kolář, Jan

    2014-01-01

    occasions we have been advocating for a new and advanced formulation of real world features using the concept of Geospatial Managed Objects (GMO). This chapter presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council where the concept...... software packages and traditional data exchange. The data stream is varying from domain to domain and from system to system, which is why it is almost impossible to design a unifying system taking care of all thinkable instances now and in the future within one constraint software design complex. On several...... of GMO's have been applied in various situations on various running platforms of an urban system. The paper will be focusing on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focusing on prototyping rather than realistic implementations....

  7. ELTs Adaptive Optics for Multi-Objects 3D Spectroscopy Key Parameters and Design Rules

    Neichel, B; Fusco, T; Gendron, E; Puech, M; Rousset, G; Hammer, F

    2006-01-01

    In the last few years, new Adaptive Optics [AO] techniques have emerged to answer new astronomical challenges: Ground-Layer AO [GLAO] and Multi-Conjugate AO [MCAO] to access a wider Field of View [FoV], Multi-Object AO [MOAO] for the simultaneous observation of several faint galaxies, eXtreme AO [XAO] for the detection of faint companions. In this paper, we focus our study on one of these applications: high red-shift galaxy observations using MOAO techniques in the framework of Extremely Large Telescopes [ELTs]. We present the high-level specifications of a dedicated instrument. We choose to describe the scientific requirements with the following criteria: 40% of Ensquared Energy [EE] in H band (1.65um) and in an aperture size from 25 to 150 mas. Considering these specifications we investigate different AO solutions using Fourier-based simulations. Sky Coverage [SC] is computed for Natural and Laser Guide Stars [NGS, LGS] systems. We show that specifications are met for NGS-based systems at the cost of ...

  8. RECONSTRUCCIÓN DE OBJETO 3D A PARTIR DE IMÁGENES CALIBRADAS 3D OBJECT RECONSTRUCTION WITH CALIBRATED IMAGES

    Natividad Grandón-Pastén; Diego Aracena-Pizarro; Clésio Luis Tozzi

    2007-01-01

    This work presents the development of a 3D object reconstruction system based on a collection of views. The system consists of two main modules. The first performs the image processing, whose objective is to determine the depth map for a pair of views, where each pair of successive views follows a sequence of stages: interest point detection, point correspondence, and point reconstruction; in the reconstruction process the parameters...

  9. Real-Time Propagation Measurement System and Scattering Object Identification by 3D Visualization by Using VRML for ETC System

    Ando Tetsuo

    2009-01-01

    Full Text Available In the early deployment of the electronic toll collection (ETC) system, multipath interference caused malfunctions of the system. Therefore, radio absorbers are installed in the toll gate to suppress the scattering effects. This paper presents a novel radio propagation measurement system using beamforming with an 8-element antenna array to examine the power intensity distribution of the ETC gate in real time without closing the toll gates that are already open for traffic. In addition, an identification method for individual scattering objects with 3D visualization by using the virtual reality modeling language will be proposed, and its validity is also demonstrated by applying it to the measurement data.

  10. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects.

    Ye, Zhou; Nain, Amrinder S; Behkam, Bahareh

    2016-07-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features. PMID:27283144
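    A short worked example of the reported scaling law b^1.5 ∝ D: if a calibration pair (D0, b0) is known, another fiber diameter D predicts a feature size b = b0 (D/D0)^(2/3). The calibration values below are illustrative, not measurements from the paper.

        # assumed calibration: a 500 nm fiber mask yields a 50 nm feature
        D0, b0 = 500e-9, 50e-9

        def feature_size(D, D0=D0, b0=b0):
            return b0 * (D / D0) ** (2.0 / 3.0)      # follows from b**1.5 proportional to D

        for D in [250e-9, 1e-6, 2e-6]:
            print(f"D = {D * 1e9:7.1f} nm  ->  b ~ {feature_size(D) * 1e9:6.1f} nm")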

  11. Determinig of an object orientation in 3D space using direction cosine matrix and non-stationary Kalman filter

    Bieda Robert

    2016-06-01

    Full Text Available This paper describes a method which determines the parameters of an object's orientation in 3D space. The rotation angle calculation is based on the fusion of signals obtained from the inertial measurement unit (IMU). The IMU measuring system provides information from linear acceleration sensors (accelerometers), the Earth's magnetic field sensors (magnetometers) and angular velocity sensors (gyroscopes). Information about the object orientation is presented in the form of a direction cosine matrix whose elements are observed in the state vector of the non-stationary Kalman filter. The vector components allow the rotation angles (roll, pitch and yaw) associated with the object to be determined. The resulting waveforms, for different rotation angles, have no negative attributes associated with the construction and operation of the IMU measuring system. The described solution enables simple, fast and effective implementation of the proposed method in IMU measuring systems.
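    A minimal sketch of the final step described above: recovering the roll, pitch and yaw angles from a direction cosine matrix (ZYX rotation convention assumed; the non-stationary Kalman filter that estimates the matrix elements is not reproduced here).

        import numpy as np

        def dcm_to_euler(R):
            # R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles returned in radians
            pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
            roll  = np.arctan2(R[2, 1], R[2, 2])
            yaw   = np.arctan2(R[1, 0], R[0, 0])
            return roll, pitch, yaw

        # usage: build a DCM from known angles and recover them
        r, p, y = np.deg2rad([10.0, -20.0, 30.0])
        Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
        Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
        Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
        print(np.rad2deg(dcm_to_euler(Rz @ Ry @ Rx)))   # ~ [10, -20, 30]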

  12. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO(TM) phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery

  13. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    Lee, M; Suh, T [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Jenkins, C [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Department of Mechanical Engineering, Stanford University, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO(TM) phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.

  14. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for 3D measurement of objects. The technique uses an image sensor to record the interference fringe image containing the complex amplitude of the object, and numerically reconstructs the complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects. This is because the technique can reconstruct the complex amplitude of the object, on which the undesired images are not superimposed, from a single hologram. The undesired images are the non-diffraction wave and the conjugate image which are associated with holography. In parallel phase-shifting digital holography, a hologram, whose phase of the reference wave is spatially and periodically shifted every other pixel, is recorded to obtain the complex amplitude of the object by single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography. The complex amplitude of the object, free from the undesired images, is reconstructed from the multiple holograms. To validate parallel phase-shifting digital holography, a high-speed parallel phase-shifting digital holography system was constructed. The system consists of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded at 180,000 frames per second (FPS) with the system. A phase motion picture of dynamic air flow induced by discharge between two electrodes has also been recorded at 1,000,000 FPS, when high voltage was applied between the electrodes.
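    A minimal sketch of the single-shot decomposition step described above, under an assumed 2x2-periodic arrangement of reference phases (0, pi/2, pi, 3pi/2) and simple nearest-neighbour interpolation; a real system would use the camera's actual pixel layout and a better interpolator.

        import numpy as np

        def complex_amplitude_from_parallel_hologram(H):
            # assumed layout of reference phases per 2x2 cell: [[0, pi/2], [pi, 3pi/2]]
            sub = {0: H[0::2, 0::2], 1: H[0::2, 1::2],
                   2: H[1::2, 0::2], 3: H[1::2, 1::2]}
            # nearest-neighbour interpolation of each sub-hologram back to full resolution
            up = {k: np.kron(v, np.ones((2, 2))) for k, v in sub.items()}
            # 4-step phase-shifting combination, proportional to the object wave times R*
            return (up[0] - up[2]) + 1j * (up[1] - up[3])

        H = np.random.rand(256, 256)                 # stand-in for a recorded hologram
        U = complex_amplitude_from_parallel_hologram(H)
        print(U.shape, U.dtype)                      # complex field ready for propagation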

  15. Orientation of a 3D object: implementation with an artificial neural network using a programmable logic device

    Complex information extraction from images is a key skill of intelligent machines, with wide application in automated systems, robotic manipulation and human-computer interaction. However, solving this problem with traditional, geometric or analytical, strategies is extremely difficult. Therefore, an approach based on learning from examples seems to be more appropriate. This thesis addresses the problem of 3D orientation, aiming to estimate the angular coordinates of a known object from an image shot from any direction. We describe a system based on artificial neural networks to solve this problem in real time. The implementation is performed using a programmable logic device. The digital system described in this paper has the ability to estimate two rotational coordinates of a known 3D object, in ranges from -80° to 80°. The operation speed allows a real time performance at video rate. The system accuracy can be successively increased by increasing the size of the artificial neural network and using a larger number of training examples

  16. A single photon detector array with 64x64 resolution and millimetric depth accuracy for 3D imaging

    Niclass, Cristiano; Charbon, Edoardo

    2005-01-01

    An avalanche photodiode array uses single-photon counting to perform time-of-flight range-finding on a scene uniformly hit by 100 ps, 250 mW uncollimated laser pulses. The 32x32 pixel sensor, fabricated in a 0.8 μm CMOS process, uses a microscanner package to enhance the effective resolution in the application to 64x64 pixels. The application achieves a measurement depth resolution of 1.3 mm to a depth of 3.75 m.
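    A short worked example of the time-of-flight relation behind the quoted millimetric accuracy: range d = c t / 2, so a timing resolution dt maps to a depth resolution c dt / 2. The numbers below are illustrative, chosen only to match the orders of magnitude in the abstract.

        c = 2.998e8                          # speed of light, m/s
        round_trip = 25e-9                   # example photon round-trip time, s
        print("range:", c * round_trip / 2, "m")             # ~3.75 m
        dt = 8.7e-12                         # example timing resolution, s
        print("depth resolution:", c * dt / 2 * 1e3, "mm")   # ~1.3 mm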

  17. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR – combining techniques towards an objective multiparametric interpretation

    S. Schneiderwind

    2015-09-01

    Full Text Available Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and to visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was firstly compared to results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored allowing unbiased input for future (re-)investigations.

  18. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern placed at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102
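    A minimal illustrative sketch of the monitoring idea: successive depth maps of the grasped-object region are compared and an event is raised when the mean surface change exceeds a threshold. The region mask, threshold and synthetic depth maps are assumptions, not the authors' pipeline.

        import numpy as np

        def deformation_event(depth_prev, depth_curr, mask, threshold_m=0.004):
            change = np.abs(depth_curr[mask] - depth_prev[mask])   # surface change in the ROI
            mean_change = float(change.mean())
            return mean_change, mean_change > threshold_m

        H, W = 120, 160
        mask = np.zeros((H, W), bool)
        mask[40:80, 60:110] = True                    # object region (assumed segmentation)
        d0 = np.full((H, W), 0.50)                    # previous depth map, metres
        d1 = d0.copy()
        d1[50:70, 70:100] -= 0.01                     # simulated 1 cm local indentation
        mean_change, event = deformation_event(d0, d1, mask)
        print(mean_change, event)    # if True, send an event message to the robot controller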

  19. Ryukyu Subduction Zone: 3D Geodynamic Simulations of the Effects of Slab Shape and Depth on Lattice-Preferred Orientation (LPO) and Seismic Anisotropy

    Tarlow, S.; Tan, E.; Billen, M. I.

    2015-12-01

    At the Ryukyu subduction zone, seismic anisotropy observations suggest that there may be strong trench-parallel flow within the mantle wedge driven by complex 3D slab geometry. However, previous simulations have either failed to account for 3D flow or used the infinite strain axis (ISA) approximation for LPO, which is known to be inaccurate in complex flow fields. Additionally, both the depth and the shape of the Ryukyu slab are contentious. Development of strong trench-parallel flow requires low viscosity to decouple the mantle wedge from entrainment by the sinking slab. Therefore, understanding the relationship between seismic anisotropy and the accompanying flow field will better constrain the material and dynamic properties of the mantle near subduction zones. In this study, we integrate a kinematic model for calculation of LPO (D-Rex) into a buoyancy-driven, instantaneous 3D flow simulation (ASPECT), using composite non-Newtonian rheology to investigate the dependence of LPO on slab geometry and depth at the Ryukyu Trench. To incorporate the 3D flow effects, the trench and slab extend from the southern tip of Japan to the western edge of Taiwan and the model region is approximately 1/4 of a spherical shell extending from the surface to the core-mantle boundary. In the southern-most region we vary the slab depth and shape to test for the effects of the uncertainties in the observations. We also investigate the effect of adding locally hydrated regions above the slab that affect both the mantle rheology and the development of LPO through the consequent changes in mantle flow and the dominant (weakest) slip system. We characterize how changes in the simulation conditions affect the LPO within the mantle wedge, subducting slab and sub-slab mantle and relate these to surface observations of seismic anisotropy.

  20. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.

  1. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Carlos M. Mateo

    2016-05-01

    Full Text Available Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern placed at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor

  2. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern placed at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  3. Identifying Objective EEG Based Markers of Linear Vection in Depth

    Palmisano, Stephen; Barry, Robert J.; De Blasio, Frances M.; Fogarty, Jack S.

    2016-01-01

    This proof-of-concept study investigated whether a time-frequency EEG approach could be used to examine vection (i.e., illusions of self-motion). In the main experiment, we compared the event-related spectral perturbation (ERSP) data of 10 observers during and directly after repeated exposures to two different types of optic flow display (each was 35° wide by 29° high and provided 20 s of motion stimulation). Displays consisted of either a vection display (which simulated constant velocity forward self-motion in depth) or a control display (a spatially scrambled version of the vection display). ERSP data were decomposed using time-frequency Principal Components Analysis (t–f PCA). We found an increase in 10 Hz alpha activity, peaking some 14 s after display motion commenced, which was positively associated with stronger vection ratings. This followed decreases in beta activity, and was also followed by a decrease in delta activity; these decreases in EEG amplitudes were negatively related to the intensity of the vection experience. After display motion ceased, a series of increases in the alpha band also correlated with vection intensity, and appear to reflect vection- and/or motion-aftereffects, as well as later cognitive preparation for reporting the strength of the vection experience. Overall, these findings provide support for the notion that EEG can be used to provide objective markers of changes in both vection status (i.e., “vection/no vection”) and vection strength. PMID:27559328

  4. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction-limited imaging via reduction of out-of-focus background light. Single-molecule super-resolution is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single-molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities in the focused light-sheet as compared to wide field. PMID:27375939

  5. Depth-of-Focus Correction in Single-Molecule Data Allows Analysis of 3D Diffusion of the Glucocorticoid Receptor in the Nucleus.

    Rolf Harkes

    Full Text Available Single-molecule imaging of proteins in a 2D environment like membranes has frequently been used to extract the diffusive properties of multiple fractions of receptors. In a 3D environment the apparent fractions however change with observation time due to the movement of molecules out of the depth-of-field of the microscope. Here we developed a mathematical framework that allowed us to correct for the change in fraction size due to the limited detection volume in 3D single-molecule imaging. We applied our findings to the mobility of activated glucocorticoid receptors in the cell nucleus, and found a freely diffusing fraction of 0.49±0.02. Our analysis further showed that interchange between this mobile fraction and an immobile fraction does not occur on time scales shorter than 150 ms.
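    A minimal Monte-Carlo sketch of the effect being corrected for: freely diffusing molecules leave the microscope's depth of field over time, so the apparent mobile fraction depends on observation time. The diffusion coefficient and depth of field below are illustrative, not the paper's values.

        import numpy as np
        rng = np.random.default_rng(1)

        D = 2.0                      # diffusion coefficient, um^2/s (assumed)
        dof = 0.7                    # detection depth of field, um (assumed)
        dt, n_steps, n_mol = 0.01, 15, 20000

        z = rng.uniform(-dof / 2, dof / 2, n_mol)    # start uniformly inside the focal slab
        detectable = np.ones(n_mol, bool)
        for step in range(1, n_steps + 1):
            z = z + rng.normal(0.0, np.sqrt(2 * D * dt), n_mol)   # Brownian step along z
            detectable &= np.abs(z) < dof / 2                     # lost once outside the DOF
            print(f"t = {step * dt * 1e3:4.0f} ms   "
                  f"fraction of mobile molecules still detectable: {detectable.mean():.2f}")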

  6. A new 3D Moho depth model for Iran based on the terrestrial gravity data and EGM2008 model

    Kiamehr, R.; Gómez-Ortiz, D.

    2009-04-01

    Knowledge of the variation of crustal thickness is essential in many applications, such as forward dynamic modelling, numerical heat flow calculations and seismologic applications. Dehghani in 1984 estimated the first Moho depth model over the Iranian plateau using the simple profiling method and Bouguer gravity data. However, those data suffer from serious deficiencies and lack of coverage in most parts of the region. To provide a basis for an accurate analysis of the region's lithospheric stresses, we develop an up-to-date three-dimensional crustal thickness model of the Iranian Plateau using the Parker-Oldenburg iterative method. This method is based on a relationship between the Fourier transform of the gravity anomaly and the sum of the Fourier transforms of powers of the interface topography. The new model is based on the newest and most complete gravity database of Iran, which was produced by Kiamehr for computation of the high-resolution geoid model for Iran. A total of 26125 gravity observations were collected from different sources and used to generate an outlier-free 2x2 arc-minute gravity database for Iran. In the meantime, the Earth Gravitational Model (EGM2008) up to degree 2160 has been developed and published by the National Geospatial-Intelligence Agency. EGM2008 incorporates improved 5x5 arc-minute gravity anomalies and has benefited from the latest GRACE-based satellite solutions. The major benefit of EGM2008 is its ability to provide precise and uniform gravity data with global coverage. Two different Moho depth models have been computed, based on the terrestrial and EGM2008 datasets. The minimum and maximum Moho depths for the land and EGM2008 models are 10.85-53.86 and 15.41-51.43 km, respectively. In general, we found a good agreement between the Moho geometry obtained using the land and EGM2008 datasets, with an RMS difference of 2.7 km. We also compared these gravimetric Moho models with the global seismic crustal model CRUST 2.0. The differences between EGM2008 and land
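    A minimal sketch of the Parker-Oldenburg iteration referred to above: the FFT of the gravity anomaly is related to a sum of FFTs of powers of the interface undulation about the mean depth z0, and the undulation is updated iteratively. The synthetic anomaly, density contrast, high-cut filter and sign conventions are illustrative assumptions, not the values used for the Iranian model.

        import numpy as np
        from math import factorial

        G = 6.674e-11                                    # gravitational constant (SI)

        def oldenburg_inversion(dg, dx, z0, drho, n_terms=3, n_iter=5, k_cut=2e-4):
            ny, nx = dg.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
            k = np.hypot(kx[None, :], ky[:, None])
            lowpass = k < k_cut                          # high-cut filter keeps the iteration stable
            DG = np.fft.fft2(dg)
            h = np.zeros_like(dg)                        # undulation about the mean depth z0
            for _ in range(n_iter):
                H = -DG * np.exp(k * z0) / (2 * np.pi * G * drho)       # first-order term
                for n in range(2, n_terms + 1):                         # higher-order corrections
                    H -= (k ** (n - 1) / factorial(n)) * np.fft.fft2(h ** n)
                h = np.real(np.fft.ifft2(H * lowpass))
            return h                                     # Moho depth ~ z0 + h (convention assumed)

        # synthetic example: a -20 mGal Gaussian gravity low on a 320 km x 320 km grid
        dx, n = 5e3, 64
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)
        dg = -2e-4 * np.exp(-(X**2 + Y**2) / (2 * 60e3**2))             # m/s^2
        h = oldenburg_inversion(dg, dx, z0=40e3, drho=400.0)
        print(h.min() / 1e3, h.max() / 1e3)              # undulation range in km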

  7. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless link and then use GPU hardware and CUDA programming to implement a real-time three-dimensional stereo image by synthesizing the depth of the ROI (region of interest). We also try to identify the mechanism of deblurring for the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to prove the validity of the emphasizing effect on the ROI.
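    A short worked example of the stereo relation behind the depth synthesis for the ROI: for rectified cameras, depth Z = f B / d with focal length f in pixels, baseline B in metres and disparity d in pixels. The numbers are illustrative assumptions, not the system's calibration.

        f_px, baseline_m = 700.0, 0.06
        for disparity_px in (10, 20, 40, 80):
            depth_m = f_px * baseline_m / disparity_px
            print(f"disparity {disparity_px:3d} px  ->  depth {depth_m:5.2f} m")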

  8. Impacts of 3-D radiative effects on satellite cloud detection and their consequences on cloud fraction and aerosol optical depth retrievals

    Yang, Yuekui; di Girolamo, Larry

    2008-02-01

    We present the first examination of how 3-D radiative transfer impacts satellite cloud detection based on a single visible-channel threshold. 3-D radiative transfer calculations through predefined heterogeneous cloud fields embedded in a range of horizontally homogeneous aerosol fields were carried out to generate synthetic nadir-viewing satellite images at a wavelength of 0.67 μm. The finest spatial resolution of the cloud field is 30 m. We show that 3-D radiative effects cause significant histogram overlap between the radiance distributions of clear and cloudy pixels, the degree of which depends on many factors (resolution, solar zenith angle, surface reflectance, aerosol optical depth (AOD), cloud top variability, etc.). This overlap precludes the existence of a threshold that can correctly separate all clear pixels from cloudy pixels. The region of clear/cloud radiance overlap includes moderately large (up to 5 in our simulations) cloud optical depths. Purpose-driven cloud masks, defined by different thresholds, are applied to the simulated images to examine their impact on retrieving cloud fraction and AOD. Large (up to hundreds of percent) systematic errors were observed that depended on the type of cloud mask and the factors that influence the clear/cloud radiance overlap, with a strong dependence on solar zenith angle. Different strategies for computing domain-averaged AOD were tested, showing that the domain-averaged BRF from all clear pixels produced the smallest AOD biases with the weakest (but still large) dependence on solar zenith angle. The large dependence of the bias on solar zenith angle has serious implications for climate research that uses satellite cloud and aerosol products.
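
The sketch below illustrates, with purely synthetic radiance distributions rather than the paper's 3-D radiative transfer output, how overlapping clear and cloudy reflectance histograms turn any single visible-channel threshold into a biased cloud-fraction estimate.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's radiative transfer output):
# when clear-sky and cloudy radiance histograms overlap, no single visible-channel
# threshold recovers the true cloud fraction.
rng = np.random.default_rng(2)
n = 200_000
true_cloud_fraction = 0.4
cloudy = rng.random(n) < true_cloud_fraction

# Assumed 0.67-um reflectances: clear pixels brightened by aerosol and surface,
# cloudy pixels spread by 3-D darkening/brightening in broken cloud fields.
refl = np.where(cloudy,
                rng.normal(0.35, 0.15, n),   # cloudy pixels
                rng.normal(0.12, 0.06, n))   # clear, aerosol-loaded pixels
refl = np.clip(refl, 0.0, 1.0)

for threshold in (0.15, 0.20, 0.25, 0.30):
    mask = refl > threshold                  # purpose-driven single-channel cloud mask
    retrieved = mask.mean()
    bias = 100.0 * (retrieved - true_cloud_fraction) / true_cloud_fraction
    print(f"threshold {threshold:.2f}: cloud fraction {retrieved:.3f} ({bias:+.1f}% bias)")
```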

  9. Depth-selective imaging of macroscopic objects hidden behind a scattering layer using low-coherence and wide-field interferometry

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Ko, Hakseok; Choi, Wonshik

    2016-08-01

    Imaging systems targeting macroscopic objects tend to have poor depth selectivity. In this Letter, we present a 3D imaging system featuring a depth resolution of 200 μm, a depth scanning range of more than 1 m, and a field of view larger than 70×70 mm². For depth selectivity, we set up an off-axis digital holographic imaging system using a light source with a coherence length of 400 μm. A prism pair was installed in the reference beam path for long-range depth scanning. We imaged macroscopic targets composed of multiple layers and also demonstrated imaging of targets hidden behind a scattering layer.

  10. An In-depth Analysis of Applications of Object Recognition

    Abijith Sankar; Akash Suresh; P. Varun Babu; A. Baskar; Shriram K. Vasudevan

    2015-01-01

    Image processing has become one of the most unavoidable fields of engineering. The applications designed on the basis of image processing are simply superb. This paper reviews the object recognition techniques supported in the image processing sector. Analyzing object recognition through its applications is a new approach, and that is what we have attempted in this paper. We have taken the effort to check the utilization of object recognition techniques in th...

  11. EUROPEANA AND 3D

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn and communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D renderings of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content across all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  12. A simple single camera 3C3D velocity measurement technique without errors due to depth of correlation and spatial averaging for microfluidics

    A method to determine the three components (3C) of the velocity field in a micro volume (3D) using a single camera is proposed. The technique is based on tracking the motion of individual particles to exclude the errors due to the depth of correlation (DOC) and spatial averaging that occur in µPIV (micro particle image velocimetry). The depth position of the particles is coded by optical distortions introduced by a cylindrical lens in the optical setup. To estimate the particle positions, a processing algorithm was developed based on continuous wavelet analysis and autocorrelation. This algorithm works robustly and gives accurate results comparable to multi-camera systems (tomographic PIV, V3V). Particle tracking was applied to determine the full 3C velocity vector in the volume without the errors due to spatial averaging and DOC, which are inherent limitations of µPIV arising from the interrogation window size and volume illumination. To prove the applicability, measurements were performed in a straight channel with a cross section of 500 × 500 µm². The depth of the measurement volume in the viewing direction was chosen to be 90 µm in order to resolve the near-wall gradients. The three-dimensional velocity distribution of the whole channel could be resolved clearly by using wavefront-deformation particle tracking velocimetry.
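
As a simplified stand-in for the wavelet/autocorrelation processing described above, the following sketch shows the basic idea of cylindrical-lens depth coding: calibrate how the particle-image axis ratio varies with depth, then invert that calibration for new particles. The calibration curve, depth range and noise levels are invented.

```python
import numpy as np

# Simplified sketch of cylindrical-lens ("astigmatic") depth coding: a particle
# image is stretched differently in x and y depending on its depth z, so a
# calibration of the axis ratio versus z can be inverted for unknown particles.
# The calibration response below is purely synthetic.
rng = np.random.default_rng(3)

z_cal = np.linspace(-45, 45, 31)                      # calibration depths, um
ratio_cal = 1.0 + 0.02 * z_cal + 2e-4 * z_cal**2      # assumed axis-ratio response
ratio_cal += rng.normal(0.0, 0.01, z_cal.size)        # measurement noise

coeff = np.polyfit(ratio_cal, z_cal, deg=3)           # invert: z as a function of axis ratio

# "Measured" particles: synthesize their axis ratios, then recover depth from
# the calibration polynomial.
z_true = rng.uniform(-40, 40, 5)
ratio_meas = 1.0 + 0.02 * z_true + 2e-4 * z_true**2
z_est = np.polyval(coeff, ratio_meas)
for zt, ze in zip(z_true, z_est):
    print(f"true z = {zt:6.1f} um   estimated z = {ze:6.1f} um")
```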

  13. An In-depth Analysis of Applications of Object Recognition

    Abijith Sankar

    2015-05-01

    Full Text Available Image processing has become one of the most unavoidable fields of engineering. The applications designed on the basis of image processing are simply superb. This paper reviews the object recognition techniques supported in the image processing sector. Analyzing object recognition through its applications is a new approach, and that is what we have attempted in this paper. We have taken the effort to check the utilization of object recognition techniques in the field of industrial applications, which includes (a) automobiles, (b) the food and beverage sector and (c) the fabric sector. Then attention is paid to robotic applications. Remote sensing is also observed to be one of the hottest sectors deploying object recognition techniques to a great extent. Finally, we end with medical applications.

  14. 3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot

    ZHANG Jian; YANG Ru-qing

    2009-01-01

    The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. SUSAN Edge based Scale Invariant Feature (SESIF) algorithm based 3D object visual tracking is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF-based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF and Lie group based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.

  15. Integrating depth functions and hyper-scale terrain analysis for 3D soil organic carbon modeling in agricultural fields at regional scale

    Ramirez-Lopez, L.; van Wesemael, B.; Stevens, A.; Doetterl, S.; Van Oost, K.; Behrens, T.; Schmidt, K.

    2012-04-01

    different depth functions, (ii) the use of different machine learning approaches for modeling the parameters of the fitted depth functions using the ConMap features, and (iii) the influence of different spatial scales on the SOC profile distribution variability. Keywords: 3D modeling, digital soil mapping, depth functions, terrain analysis. Reference: Behrens, T., Schmidt, K., Zhu, A.X., Scholten, T., 2010. The ConMap approach for terrain-based digital soil mapping. European Journal of Soil Science 61, 133-143.
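
A minimal example of the depth-function step is sketched below: an exponential decline of soil organic carbon with depth is fitted to a hypothetical profile with SciPy, yielding the parameters that would then be modelled spatially from terrain covariates. The functional form and the sample values are assumptions, not necessarily those compared in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal example of fitting a depth function to one soil organic carbon (SOC)
# profile; the exponential form and the measurements are only illustrative.
def soc_depth(z, c0, k, c_inf):
    """SOC concentration at depth z: exponential decline towards a subsoil value."""
    return c_inf + (c0 - c_inf) * np.exp(-k * z)

depth_cm = np.array([5.0, 15.0, 30.0, 50.0, 75.0, 100.0])   # sampling depths (hypothetical)
soc_pct = np.array([2.4, 1.9, 1.2, 0.8, 0.5, 0.4])          # hypothetical measurements

params, _ = curve_fit(soc_depth, depth_cm, soc_pct, p0=(2.5, 0.03, 0.3))
c0, k, c_inf = params
print(f"topsoil SOC {c0:.2f}%, decay rate {k:.3f} /cm, subsoil SOC {c_inf:.2f}%")

# The fitted parameters (c0, k, c_inf) are what would then be mapped spatially
# with terrain covariates such as the ConMap features.
```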

  16. A model for calculating the errors of 2D bulk analysis relative to the true 3D bulk composition of an object, with application to chondrules

    Hezel, Dominik C.

    2007-09-01

    Certain problems in the geosciences require knowledge of the chemical bulk composition of objects such as minerals or lithic clasts. This 3D bulk chemical composition (bcc) is often difficult to obtain, but if the object is prepared as a thin or thick polished section, a 2D bcc can easily be determined using, for example, an electron microprobe. The 2D bcc contains an unknown error relative to the true 3D bcc. Here I present a computer program that calculates this error, which is represented as the standard deviation of the 2D bcc relative to the real 3D bcc. A requirement for such calculations is an approximate structure of the 3D object. In petrological applications, the known fabrics of rocks facilitate modeling. The size of the standard deviation depends on (1) the modal abundance of the phases, (2) the element concentration differences between phases and (3) the distribution of the phases, i.e. the homogeneity/heterogeneity of the object considered. A newly introduced parameter "τ" is used as a measure of this homogeneity/heterogeneity. Accessory phases, which do not necessarily appear in 2D thin sections, are a second source of error, in particular if they contain high concentrations of specific elements. An abundance of only 1 vol% of an accessory phase may raise the 3D bcc of an element by up to a factor of ~8. The code can be queried as to whether a broad-beam, point, line or area analysis technique is best for obtaining the 2D bcc. No general conclusion can be drawn, as the errors of these techniques depend on the specific structure of the object considered. As an example, chondrules (rapidly solidified melt droplets of chondritic meteorites) are used. It is demonstrated that 2D bcc may be used to reveal trends in the chemistry of 3D objects.
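
The toy Monte Carlo below captures the spirit of the described calculation without reproducing the actual program: a synthetic two-phase object is built on a voxel grid, its true 3D bulk concentration is computed, and the spread of bulk concentrations obtained from individual 2D sections is reported. Phase abundance, concentrations and grain size are invented; coarser grains (stronger heterogeneity) widen the spread, which is the role played by the parameter τ.

```python
import numpy as np

# Toy Monte Carlo: true 3D bulk concentration of an element in a two-phase
# object versus bulk concentrations measured on individual 2D sections.
# Phase abundance, concentrations and grain size are invented.
rng = np.random.default_rng(4)
grain = 8                                                   # grain edge length in voxels
coarse = (rng.random((8, 8, 8)) < 0.30).astype(int)         # 1 = phase B at ~30 vol%
phase = np.kron(coarse, np.ones((grain,) * 3, dtype=int))   # 64^3 voxel object

conc = np.array([1.0, 8.0])                                 # element concentration (wt%) in phases A, B
bulk_3d = conc[phase].mean()                                # true 3D bulk concentration

bulk_2d = np.array([conc[phase[s]].mean() for s in range(phase.shape[0])])
rel_sd = 100.0 * bulk_2d.std() / bulk_3d                    # spread of 2D estimates, % of true value

print(f"3D bulk: {bulk_3d:.3f} wt%")
print(f"2D sections: {bulk_2d.mean():.3f} wt%, relative standard deviation {rel_sd:.1f}%")
# Larger grains (more heterogeneous objects) give a larger relative standard deviation.
```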

  17. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence-free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high-quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual-mesh leapfrog scheme presented here is able to model both electrically and magnetically anisotropic lossy materials. This approach enables the modelling of problems of current practical interest involving structured composites and metamaterials.
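
For orientation, the sketch below shows the classical structured-grid Yee leapfrog update in one dimension (vacuum, normalised units). It only illustrates the staggered space-time update that the unstructured Delaunay/Voronoi dual-mesh scheme generalises; it is not the scheme presented in the paper.

```python
import numpy as np

# Standard 1D Yee leapfrog update (vacuum, normalised units): E and H are
# staggered by half a cell in space and half a step in time.
nx, nt = 400, 800
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # satisfies the Courant condition

Ez = np.zeros(nx)                      # E sampled at integer grid points, integer time steps
Hy = np.zeros(nx - 1)                  # H staggered half a cell and half a time step

for n in range(nt):
    Hy += (dt / dx) * (Ez[1:] - Ez[:-1])              # update H at t = (n + 1/2) dt
    Ez[1:-1] += (dt / dx) * (Hy[1:] - Hy[:-1])        # update E at t = (n + 1) dt
    Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source

print("max |Ez| after propagation:", np.abs(Ez).max())
```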

  18. Modeling of 3-D Object Manipulation by Multi-Joint Robot Fingers under Non-Holonomic Constraints and Stable Blind Grasping

    Arimoto, Suguru; Yoshida, Morio; Bae, Ji-Hun

    This paper derives a mathematical model that expresses the motion of a pair of multi-joint robot fingers with hemispherical rigid ends grasping and manipulating a 3-D rigid object with parallel flat surfaces. Rolling contacts arising between the finger ends and the object surfaces are taken into consideration and modeled as Pfaffian constraints, from which constraint forces emerge tangentially to the object surfaces. Another noteworthy difference between modeling the motion of a 3-D object and that of a 2-D object is that the instantaneous axis of rotation of the object is fixed in the 2-D case but time-varying in the 3-D case. A further difficulty that has prevented modeling of the 3-D physical interactions between a pair of fingers and a rigid object lies in the treatment of the spinning motion that may arise around the opposing axis connecting the contact point between one finger end and one side of the object to the contact point on the other side. This paper shows that, once such spinning motion stops as the object mass center approaches a position just beneath the opposition axis, this cessation of spinning evokes a further non-holonomic constraint. Hence, the multi-body dynamics of the overall fingers-object system is subject to non-holonomic constraints concerning a 3-D orthogonal matrix expressing three mutually orthogonal unit vectors fixed to the object, together with an extra non-holonomic constraint that the instantaneous axis of rotation of the object is always orthogonal to the opposing axis. It is shown that Lagrange's equation of motion of the overall system can be derived without violating the causality that governs the non-holonomic constraints. This immediately suggests the possible construction of a numerical simulator of multi-body dynamics that can express the motion of the fingers and object as they physically interact with each other. By referring to the fact that humans grasp an object in the form of precision prehension dynamically and stably by using an opposable force between the thumb and another

  19. Volumetric 3D display using a DLP projection engine

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  20. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  1. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  2. Analyse subjective et évaluation objective de la qualité perceptuelle des maillages 3D

    Torkhani, Fakhri

    2014-01-01

    3D polygonal meshes are widely used in various applications such as digital entertainment, computer-aided design and medical imaging. A mesh may undergo different types of operations, such as compression, watermarking or simplification, which introduce geometric distortions (modifications) to the original version. It is important to quantify these modifications introduced to the original mesh and to evaluate the perceptual ...

  3. Analyse subjective et évaluation objective de la qualité perceptuelle des maillages 3D

    Torkhani, Fakhri

    2014-01-01

    3D polygonal meshes are widely used in various applications such as digital entertainment, computer-aided design and medical imaging. A mesh may undergo different types of operations, such as compression, watermarking or simplification, which introduce geometric distortions (modifications) relative to the original version. It is important to quantify these modifications introduced to the original mesh and to evaluate the ...

  4. Comparison of publically available Moho depth and crustal thickness grids with newly derived grids by 3D gravity inversion for the High Arctic region.

    Lebedeva-Ivanova, Nina; Gaina, Carmen; Minakov, Alexander; Kashubin, Sergey

    2016-04-01

    We derived Moho depth and crustal thickness for the High Arctic region by a 3D forward and inverse gravity modelling method in the spectral domain (Minakov et al., 2012), using a lithosphere thermal gravity anomaly correction (Alvey et al., 2008), a vertical density variation for the sedimentary layer and lateral crustal density variation. Recently updated grids of bathymetry (Jakobsson et al., 2012), gravity anomaly (Gaina et al., 2011) and dynamic topography (Spasojevic & Gurnis, 2012) were used as input data for the algorithm. The TeMAr sedimentary thickness grid (Petrov et al., 2013) was modified according to the most recently published seismic data, re-gridded and used as input data. Other input parameters for the algorithm were calibrated using crustal-scale seismic profiles. The results are numerically compared with publicly available grids of Moho depth and crustal thickness for the High Arctic region (the CRUST 1.0 and GEMMA global grids; the deep Arctic Ocean grids by Glebovsky et al., 2013) and with crustal-scale seismic profiles. The global grids provide a coarser resolution of 0.5-1.0 geographic degrees and are not focused on the High Arctic region. Our grids better capture all main features of the region and show a smaller error relative to the seismic crustal profiles compared to the CRUST 1.0 and GEMMA grids. The results of 3D gravity modelling by Glebovsky et al. (2013) with a separated-geostructures approach also show a good fit with the seismic profiles; however, these grids cover only the deep part of the Arctic Ocean. Alvey A, Gaina C, Kusznir NJ, Torsvik TH (2008). Integrated crustal thickness mapping and plate reconstructions for the high Arctic. Earth Planet Sci Lett 274:310-321. Gaina C, Werner SC, Saltus R, Maus S (2011). Circum-Arctic mapping project: new magnetic and gravity anomaly maps of the Arctic. Geol Soc Lond Mem 35, 39-48. Glebovsky V.Yu., Astafurova E.G., Chernykh A.A., Korneva M.A., Kaminsky V.D., Poselov V.A. (2013). Thickness of the Earth's crust in the

  5. Regularization Method of Quadrilateral Meshes for 3D Object Reconstruction (Método de Regularización de Mallas Cuadrilaterales en Reconstrucción de Objetos 3D)

    Sandra P Mateus

    2008-01-01

    Full Text Available A regularization method for quadrilateral meshes using geodesics and B-splines, applied to the reconstruction of 3D objects, is proposed. The procedure can be summarized in three main steps: (i) selection of quadrilaterals; (ii) regularization of the quadrilaterals and generation of points using B-splines; and (iii) matching of the regularized points by means of geodesics computed with the fast marching method (FMM). In the experiments, the regularization of the quadrilateral mesh and the computational representation of the models were carried out on a range image of the cultural object moai. Although the object has an irregular, arbitrary topology, the proposed method gave adequate results in preserving the fine details of the object.

  6. Developing a 3-D Digital Heritage Ecosystem: from object to representation and the role of a virtual museum in the 21st century

    Fred Limp

    2011-07-01

    Full Text Available This article addresses the application of high-precision 3-D recording methods to heritage materials (portable objects), the technical processes involved, the various digital products and the role of 3-D recording in larger questions of scholarship and public interpretation. It argues that the acquisition and creation of digital representations of heritage must be part of a comprehensive research infrastructure (a digital ecosystem) that focuses on all of the elements involved, including (a) recording methods and metadata, (b) digital object discovery and access, (c) citation of digital objects, (d) analysis and study, (e) digital object reuse and repurposing, and (f) the critical role of a national/international digital archive. The article illustrates these elements and their relationships using two case studies that involve similar approaches to the high-precision 3-D digital recording of portable archaeological objects, from a number of late pre-Columbian villages and towns in the mid-central US (c. 1400 CE) and from the Egyptian site of Amarna, the Egyptian Pharaoh Akhenaten's capital (c. 1300 BCE).

  7. Potential field Modeling of the 3-D Geologic Structure of the San Andreas Fault Observatory at Depth (SAFOD) at Parkfield, California

    McPhee, D. K.

    2003-12-01

    Gravity and magnetic data, along with other geophysical and geological constraints, are used to develop 2-D models that we use to characterize the 3-D geological structure of the San Andreas fault (SAF) zone in the vicinity of SAFOD near Parkfield, CA. The gravity data, reduced to isostatic anomalies, comprise a compilation of three different data sets with a maximum of 1.6 km grid spacing for the scattered data and closely spaced (~40 m) stations along one SW-NE profile crossing the SAFOD pilot hole. Aeromagnetic data were flown at a nominal 300 m above the terrain along SW-NE flight lines perpendicular to the San Andreas Fault. Data were recorded at ~50 m spacing along flight lines approximately 800 m apart. Ground magnetic data recorded every 5 m along lines ~300 m apart cover a 3 x 5 km area surrounding the SAFOD pilot hole. Previous modeling showed that magnetic granitic basement rocks southwest of the SAF are divided by an inferred steep fault sub-parallel to the SAF. We compute 2-D crustal models along 5 km-long southwest-northeast profiles, one of which extends through the SAFOD pilot hole near and along the high-resolution seismic refraction/reflection survey completed in 1998 (Catchings et al., 2002). Our models are constrained by pilot hole measurements, where we see a boundary between sediment and granitic basement at ~770 m and an order of magnitude increase in magnetic susceptibility at ~1400 m, possibly the same depth at which the SW-dipping Buzzard Canyon Fault intersects the pilot hole. Regional gravity, magnetic and geologic data indicate two very distinct basement blocks separated by a steeply dipping SAF. The shallowly dipping sedimentary section SW of the SAF coincides with the low velocity zone observed with seismic measurements. Shallow slivers of magnetic sandstone on the NE side of the SAF explain higher frequency features in the magnetic data. In addition, we show a flat lying, tabular body of serpentinite sandwiched between 2 blocks

  8. The Scheme and the Preliminary Test of Object-Oriented Simultaneous 3D Geometric and Physical Change Detection Using GIS-guided Knowledge

    Chang LI

    2013-07-01

    Full Text Available Current methods of remotely sensed image change detection mostly assume that the DEM of the surface objects does not change. However, for geological disaster areas (such as landslides, mudslides and avalanches), this assumption does not hold, and the traditional approach is being challenged. Thus, a new theory for change detection needs to be extended urgently from two dimensions (2D) to three dimensions (3D). This paper presents an innovative change detection scheme, object-oriented simultaneous three-dimensional geometric and physical change detection (OOS3DGPCD) using GIS-guided knowledge. This aim will be reached by realizing the following specific objectives: (a) to develop a set of automatic multi-feature matching and registration methods; (b) to propose an approach for simultaneously detecting 3D geometric and physical attribute changes based on an object-oriented strategy; (c) to develop a quality control method for OOS3DGPCD; (d) to implement the newly proposed OOS3DGPCD method by designing algorithms and developing a prototype system. For aerial remotely sensed images of Yingxiu, Wenchuan, preliminary experimental results of 3D change detection are shown to verify our approach.

  9. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Siavash Hosseinyalamdary

    2015-07-01

    Full Text Available Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimate of moving objects. The results show that moving objects can be correctly detected, and accurately tracked over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
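
A minimal sketch of one element of such a filter bank is given below: a constant-velocity Kalman filter tracking a single object centroid from noisy planar measurements. The update rate and noise covariances are assumptions, and the full pipeline (segmentation, GIS constraints, track management) is not shown.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object centroid in
# the ground plane; a bank of such filters (one per detected object) is the
# usual arrangement.  Noise levels and the 0.1 s update rate are assumptions.
dt = 0.1
F = np.array([[1, 0, dt, 0],        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # we only measure the segmented centroid (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.05                # process noise (manoeuvres)
R = np.eye(2) * 0.5                 # measurement noise (noisy LiDAR centroid)

x = np.zeros(4)                     # initial state
P = np.eye(4) * 10.0                # initial uncertainty

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the centroid measurement z = [x, y].
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(5)
for k in range(50):                 # object moving at ~10 m/s in x with noisy detections
    z = np.array([10.0 * k * dt, 2.0]) + rng.normal(0, 0.7, 2)
    x, P = kf_step(x, P, z)
print("estimated state [x, y, vx, vy]:", np.round(x, 2))
```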

  10. 3D and Education

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the invisible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  11. Contextual effects of scene on the visual perception of object orientation in depth.

    Ryosuke Niimi

    Full Text Available We investigated the effect of background scene on the human visual perception of the depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or object.

  12. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  13. Retrieval of 3D-position of a Passive Object Using Infrared LED´s and Photodiodes

    Christensen, Henrik Vie

    intensity of the light reflected by the object is measured by the receivers. The emitter/receiver pairs are fixed in position in a 2D plane. A model of the light reflections from IR emitters to IR receivers is used to determine the position of a ball using a Nelder-Mead simplex algorithm. Laboratory...
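
A hedged sketch of the position-retrieval step is shown below: given intensities predicted by a simple inverse-square reflection model for a few emitter/receiver pairs, a Nelder-Mead simplex search recovers the ball position. The geometry and the reflection model are placeholders for the model used in the work.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of recovering a 3D ball position from IR reflection intensities with a
# Nelder-Mead simplex search.  The emitter/receiver layout and the simple
# inverse-square reflection model are assumptions, not the paper's model.
emitters = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0], [0.3, 0.3, 0.0]])
receivers = emitters + np.array([0.05, 0.0, 0.0])     # receivers next to their emitters

def predicted_intensity(pos):
    """Reflected intensity for each emitter/receiver pair (arbitrary units)."""
    d_e = np.linalg.norm(emitters - pos, axis=1)
    d_r = np.linalg.norm(receivers - pos, axis=1)
    return 1.0 / (d_e**2 * d_r**2)

true_pos = np.array([0.12, 0.20, 0.25])
measured = predicted_intensity(true_pos) * (1 + np.random.default_rng(6).normal(0, 0.02, 4))

def cost(pos):
    return np.sum((predicted_intensity(pos) - measured) ** 2)

result = minimize(cost, x0=np.array([0.15, 0.15, 0.2]), method="Nelder-Mead")
print("estimated position [m]:", np.round(result.x, 3))
```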

  14. Instantaneous 3D EEG Signal Analysis Based on Empirical Mode Decomposition and the Hilbert–Huang Transform Applied to Depth of Anaesthesia

    Mu-Tzu Shih

    2015-02-01

    Full Text Available Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which anaesthesia is administered during surgery. We can monitor the DoA by observing the patient's electroencephalography (EEG) signals during the surgical procedure. Typically, high-frequency EEG signals indicate the patient is conscious, while low-frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient's EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs). The Hilbert spectral analysis method is then used to obtain the instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed by using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective than the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
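
The sketch below shows the EMD-plus-Hilbert route to instantaneous frequency and amplitude on a synthetic signal, assuming the third-party PyEMD package for the decomposition; the sampling rate and signal content are invented and no claim is made about the paper's exact processing chain.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party package, assumed installed (pip install EMD-signal)

# Synthetic illustration of the EMD + Hilbert route to instantaneous frequency
# and amplitude; sampling rate and signal content are invented.
fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(7)
signal = (np.sin(2 * np.pi * 10 * t)          # alpha-band-like component
          + 0.5 * np.sin(2 * np.pi * 3 * t)   # delta-band-like component
          + 0.2 * rng.normal(size=t.size))

imfs = EMD().emd(signal)                      # intrinsic mode functions

for i, imf in enumerate(imfs[:3]):
    analytic = hilbert(imf)
    inst_amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(f"IMF {i}: median instantaneous frequency {np.median(inst_freq):5.1f} Hz, "
          f"mean amplitude {inst_amp.mean():.2f}")
```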

  15. Measuring gaze and pupil in the real world: object-based attention,3D eye tracking and applications

    Stoll, Josef

    2015-01-01

    This dissertation contains studies on visual attention, as measured by gaze orientation, and the use of mobile eye-tracking and pupillometry in applications. It combines the development of methods for mobile eye-tracking (studies II and III) with experimental studies on gaze guidance and pupillary responses in patients (studies IV and VI) and healthy observers (studies I and V). Object based attention / Study I What is the main fa...

  16. An in-depth spectroscopic examination of molecular bands from 3D hydrodynamical model atmospheres I. Formation of the G-band in metal-poor dwarf stars

    Gallagher, A J; Bonifacio, P; Ludwig, H -G; Steffen, M; Spite, M

    2016-01-01

    Recent developments in the three-dimensional (3D) spectral synthesis code Linfor3D have meant that, for the first time, large spectral wavelength regions, such as molecular bands, can be synthesised with it in a short amount of time. A detailed spectral analysis of the synthetic G-band for several dwarf turn-off-type 3D atmospheres (5850 <= T_eff [K] <= 6550, 4.0 <= log g <= 4.5, -3.0 <= [Fe/H] <= -1.0) was conducted, under the assumption of local thermodynamic equilibrium. We also examine carbon and oxygen molecule formation at various metallicity regimes and discuss the impact it has on the G-band. Using a qualitative approach, we describe the different behaviours between the 3D atmospheres and the traditional one-dimensional (1D) atmospheres and how the different physics involved inevitably leads to abundance corrections, which differ over varying metallicities. Spectra computed in 1D were fit to every 3D spectrum to determine the 3D abundance correction. Early analysis revealed that the ...

  17. An object-oriented 3D nodal finite element solver for neutron transport calculations in the Descartes project

    In this paper we present two applications of the nodal finite elements developed by Hennart and del Valle, first to three-dimensional Cartesian meshes and then to two-dimensional hexagonal meshes. This work has been achieved within the framework of the DESCARTES project, which is a co-development effort by the 'Commissariat a l'Energie Atomique' (CEA) and 'Electricite de France' (EDF) for the development of a toolbox for reactor core calculations based on object-oriented programming. The general structure of this project is based on the object-oriented method. By using a mapping technique proposed in Schneider's thesis and by del Valle and Mund, we show how this structure allows an easy implementation of the hexagonal case from the Cartesian case. The main attractiveness of this methodology is the possibility of a pin-by-pin representation by division of each lozenge into smaller ones. Furthermore, we explore the use of non-structured quadrangles to treat the circular geometry within a hexagon. What remains, in the hexagonal case, is the implementation of the acceleration of the internal iterations by DSA (Diffusion Synthetic Acceleration) or TSA. (authors)

  18. An object-oriented 3D nodal finite element solver for neutron transport calculations in the Descartes project

    Akherraz, B.; Lautard, J.J. [CEA Saclay, Dept. Modelisation de Systemes et Structures, Serv. d' Etudes des Reacteurs et de Modelisation Avancee (DMSS/SERMA), 91 - Gif sur Yvette (France); Erhard, P. [Electricite de France (EDF), Dir. de Recherche et Developpement, Dept. Sinetics, 92 - Clamart (France)

    2003-07-01

    In this paper we present two applications of the nodal finite elements developed by Hennart and del Valle, first to three-dimensional Cartesian meshes and then to two-dimensional hexagonal meshes. This work has been achieved within the framework of the DESCARTES project, which is a co-development effort by the 'Commissariat a l'Energie Atomique' (CEA) and 'Electricite de France' (EDF) for the development of a toolbox for reactor core calculations based on object-oriented programming. The general structure of this project is based on the object-oriented method. By using a mapping technique proposed in Schneider's thesis and by del Valle and Mund, we show how this structure allows an easy implementation of the hexagonal case from the Cartesian case. The main attractiveness of this methodology is the possibility of a pin-by-pin representation by division of each lozenge into smaller ones. Furthermore, we explore the use of non-structured quadrangles to treat the circular geometry within a hexagon. What remains, in the hexagonal case, is the implementation of the acceleration of the internal iterations by DSA (Diffusion Synthetic Acceleration) or TSA. (authors)

  19. Visual discrimination of rotated 3D objects in Malawi cichlids (Pseudotropheus sp.): a first indication for form constancy in fishes.

    Schluessel, V; Kraniotakes, H; Bleckmann, H

    2014-03-01

    Fish move in a three-dimensional environment in which it is important to discriminate between stimuli varying in colour, size, and shape. It is also advantageous to be able to recognize the same structures or individuals when presented from different angles, such as back to front or front to side. This study assessed the visual discrimination abilities of eight individuals of Pseudotropheus sp. for rotated three-dimensional objects, using various plastic animal models. All models were displayed in two-choice experiments. After successful training, fish were presented in a range of transfer tests with objects rotated in the same plane and in space by 45° and 90° to the side or to the front. In one experiment, models were additionally rotated by 180°, i.e., shown back to front. Fish showed quick associative learning and, with only one exception, successfully solved and finished all experimental tasks. These results provide the first evidence for form constancy in this species and in fish in general. Furthermore, Pseudotropheus seemed to be able to categorize stimuli; a range of turtle and frog models were recognized independently of colour and minor shape variations. Form constancy and categorization abilities may be important for behaviours such as foraging, recognition of predators and conspecifics, as well as for orienting within habitats or territories. PMID:23982620

  20. 3D Spectroscopy of Local Luminous Compact Blue Galaxies: Kinematic Maps of a Sample of 22 Objects

    Pérez-Gallego, J; Castillo-Morales, A; Gallego, J; Castander, F J; Garland, C A; Gruel, N; Pisano, D J; Zamorano, J

    2011-01-01

    We use three-dimensional optical spectroscopy observations of a sample of 22 local Luminous Compact Blue Galaxies (LCBGs) to create kinematic maps. By means of these, we classify the kinematics of these galaxies into three different classes: rotating disk (RD), perturbed rotation (PR), and complex kinematics (CK). We find 48% are RDs, 28% are PRs, and 24% are CKs. RDs show rotational velocities that range between $\sim50$ and $\sim200 km s^{-1}$, and dynamical masses that range between $\sim1\times10^{9}$ and $\sim3\times10^{10} M_{\odot}$. We also address the following two fundamental questions through the study of the kinematic maps: (i) What processes are triggering the current starburst in LCBGs? We search our maps of the galaxy velocity fields for signatures of recent interactions and close companions that may be responsible for the enhanced star formation in our sample. We find 5% of objects show evidence of a recent major merger, 10% of a minor merger, and 45% of a companion. This argues in favor...

  1. How 3-D Movies Work

    吕铁雄

    2011-01-01

    Difficulty: ★★★★☆ Word count: 450 Suggested reading time: 8 minutes. Most people see out of two eyes. This is a basic fact of humanity, but it's what makes possible the illusion of depth that 3-D movies create. Human eyes are spaced about two inches apart, meaning that each eye gives the brain a slightly different perspective on the same object. The brain then uses this variance to quickly determine an object's distance.

  2. Human Object Recognition Using Colour and Depth Information from an RGB-D Kinect Sensor

    Southwell, Benjamin John; Fang, Gu

    2013-01-01

    Human object recognition and tracking is important in robotics and automation. The Kinect sensor and its SDK have provided a reliable human tracking solution where a constant line of sight is maintained. However, if the human object is lost from sight during tracking, the existing method cannot recover and resume tracking the previous object correctly. In this paper, a human recognition method is developed based on the colour and depth information provided by any RGB-D sensor. In pa...

  3. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed, and the possibility of satisfying eye accommodation is tested. Multi-focus refers to the ability to provide monocular depth cues at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes, which can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular convergence 3D effect of the system are tested, and proof of the satisfaction of accommodation and experimental results of binocular 3D fusion are given using the proposed 3D display systems.

  4. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitation and assessment process for the gait abilities of children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D gait analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measure (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6-Minute Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics in the sagittal plane revealed a statistically significant increase in the range of motion during hip flexion-extension. Our data suggest that the combined RAGT+PT program induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D GA ensures a fully objective rehabilitative program. PMID:26737310

  5. Storing a 3d City Model, its Levels of Detail and the Correspondences Between Objects as a 4d Combinatorial Map

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2015-10-01

    3D city models of the same region at multiple LODs are encumbered by the lack of links between corresponding objects across LODs. In practice, this causes inconsistency during updates and maintenance problems. A radical solution to this problem is to model the LOD of a model as a dimension in the geometric sense, such that a set of connected polyhedra at a series of LODs is modelled as a single polychoron—the 4D analogue of a polyhedron. This approach is generally used only conceptually and then discarded at the implementation stage, losing many of its potential advantages in the process. This paper therefore shows that this approach can be instead directly realised using 4D combinatorial maps, making it possible to store all topological relationships between objects.

  6. Development of three types of multifocus 3D display

    Kim, Sung-Kyu; Kim, Dong Wook

    2011-06-01

    Three types of multi-focus (MF) 3D display are developed, and the possibility of providing monocular depth cues is tested. Multi-focus refers to the ability to provide monocular depth cues at various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye, which can satisfy accommodation to displayed virtual objects within a defined depth range. The first MF 3D display is developed via a laser scanning method, the second MF 3D display uses an LED array as the light source, and the third MF 3D display uses a slanted LED array for full-parallax monocular depth cues. The full-parallax MF 3D display system gives an omnidirectional focus effect. The proposed 3D display systems offer a possible solution to the eye fatigue problem that comes from the mismatch between the accommodation of each eye and the convergence of the two eyes. The monocular accommodation is tested, and proof of the satisfaction of full-parallax accommodation is given as a result of the proposed full-parallax MF 3D display system. We achieved the result that omnidirectional focus adjustment is possible via parallax images.

  7. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
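
As a small illustration of one of the two methods mentioned (depth-from-focus), the sketch below builds a synthetic focal stack, computes a per-pixel sharpness measure, and picks the focus setting that maximises it. The contrast-loss model and all sizes are invented.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

# Minimal depth-from-focus sketch: given a focal stack (one image per focus
# setting), pick for each pixel the focus position that maximises a local
# sharpness measure.  The stack here is synthetic; in practice it would come
# from a CCD camera while stepping the lens focus.
def focus_measure(img, window=9):
    """Local mean of the squared Laplacian: high where the image is sharp."""
    lap = laplace(img.astype(float))
    return uniform_filter(lap ** 2, size=window)

n_focus, h, w = 20, 64, 64
rng = np.random.default_rng(8)
scene = rng.random((h, w))                     # random texture standing in for the scene
true_depth = np.linspace(0, n_focus - 1, w)    # depth varies from left to right

stack = np.empty((n_focus, h, w))
for k in range(n_focus):
    blur = np.abs(k - true_depth)[None, :]     # defocus grows away from the in-focus plane
    stack[k] = scene / (1.0 + blur)            # crude contrast-loss model of defocus

sharpness = np.stack([focus_measure(s) for s in stack])
depth_index = np.argmax(sharpness, axis=0)     # per-pixel index of the sharpest focus setting
print("recovered depth indices for one row:", depth_index[32, ::8])
```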

  8. The practice of physics teaching to carry out the three-dimensional objectives (物理教学落实三维目标的实践研究)

    魏淑芳

    2015-01-01

    With the new curriculum reform, higher vocational physics education in China has paid markedly more attention to the three-dimensional objectives. However, scholars still hold many different views on how to understand and implement the three-dimensional objectives in higher vocational physics teaching. Many higher vocational colleges still emphasize physical skills and explicit knowledge, and have not yet given due weight to cultivating students' emotional attitudes and scientific attitudes in physics teaching, which to a large extent hinders students from developing scientific literacy. Starting from the theoretical basis of the three-dimensional objectives and the actual conditions of higher vocational physics teaching, this paper uses concrete cases to analyse how the three-dimensional objectives are implemented in higher vocational physics teaching, so as to promote the new curriculum reform.

  9. 3D printing for dummies

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  10. Experimenting with 3D vision on a robotic head

    Clergue, Emmanuelle; Vieville, Thierry

    1995-01-01

    We intend to build a vision system that will allow dynamic 3D-perception of objects of interest. More specifically, we discuss the idea of using 3D visual cues when tracking a visual target, in order to recover some of its 3D characteristics (depth, size, kinematic information). The basic requirements for such a 3D vision module to be embedded on a robotic head are discussed. The experimentation reported here corresponds to an implementation of these general ideas, considering a calibrated ro...

  11. 3D Projection Installations

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.

  12. ToF-SIMS depth profiling of cells: z-correction, 3D imaging, and sputter rate of individual NIH/3T3 fibroblasts.

    Robinson, Michael A; Graham, Daniel J; Castner, David G

    2012-06-01

    Proper display of three-dimensional time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data of complex, nonflat samples requires a correction of the data in the z-direction. Inaccuracies in displaying three-dimensional ToF-SIMS data arise from projecting data from a nonflat surface onto a 2D image plane, as well as from possible variations in the sputter rate of the sample being probed. The current study builds on previous studies by creating software written in Matlab, the ZCorrectorGUI (available at http://mvsa.nb.uw.edu/), to apply the z-correction to entire 3D data sets. Three-dimensional image data sets were acquired from NIH/3T3 fibroblasts by collecting ToF-SIMS images, using a dual beam approach (25 keV Bi3+ for analysis cycles and 20 keV C60(2+) for sputter cycles). The entire data cube was then corrected by using the new ZCorrectorGUI software, producing accurate chemical information from single cells in 3D. For the first time, a three-dimensionally corrected view of a lipid-rich subcellular region, possibly the nuclear membrane, is presented. Additionally, the key assumption of a constant sputter rate throughout the data acquisition was tested by using ToF-SIMS and atomic force microscopy (AFM) analysis of the same cells. For the dried NIH/3T3 fibroblasts examined in this study, the sputter rate was found not to change appreciably in x, y, or z, and the cellular material was sputtered at a rate of approximately 10 nm per 1.25 × 10^13 C60(2+) ions/cm^2. PMID:22530745
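
The z-correction idea can be sketched in a few lines: each (x, y) column of the 3D data cube is shifted along z according to the measured surface height, under the study's assumption of a constant sputter rate. The array sizes, the height map and the nanometres-per-layer value below are invented.

```python
import numpy as np

# Sketch of the z-correction: shift each (x, y) column of a 3D ToF-SIMS data
# cube along z according to the local surface height so that voxels line up at
# their true depth.  A constant sputter rate is assumed; all values are invented.
rng = np.random.default_rng(9)
nx, ny, nz = 64, 64, 40
cube = rng.random((nx, ny, nz))                       # one ion's intensity, acquisition order in z

xx = np.arange(nx)[:, None]
yy = np.arange(ny)[None, :]
surface_height_nm = 200.0 * np.exp(-(((xx - 32) ** 2 + (yy - 32) ** 2) / 300.0))  # cell-like bump
nm_per_layer = 10.0                                   # sputter depth per sputter cycle (assumed)

# Columns under lower surface regions are pushed down so all voxels line up in true z.
offset = np.round((surface_height_nm.max() - surface_height_nm) / nm_per_layer).astype(int)
corrected = np.full((nx, ny, nz + offset.max()), np.nan)
for i in range(nx):
    for j in range(ny):
        corrected[i, j, offset[i, j]:offset[i, j] + nz] = cube[i, j]

print("corrected cube shape:", corrected.shape)
```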

  13. Combining depth analysis with surface morphology analysis to analyse the prehistoric painted pottery from Majiayao Culture by confocal 3D-XRF

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Lin, Xue; Chen, Man; Peng, Shiqi; Yang, Kui; Wang, Jinbang

    2016-04-01

    The Majiayao Culture (3300 BC-2900 BC) formed one of the three painted pottery centres of the Yellow River basin, China, in prehistoric times. Painted pottery from this period is famous for its exquisite workmanship and meticulous painting. Studying the layer structure and element distribution of the paint on the pottery is conducive to investigating its workmanship, which is important for archaeological research. However, the most common analysis methods are destructive. To investigate the layers of paint on the pottery nondestructively, a confocal three-dimensional micro-X-ray fluorescence set-up combined with two individual polycapillary lenses has been used to analyse two painted pottery fragments. Nondestructive elemental depth analyses and surface topography analysis were performed. The elemental depth profiles of Mn, Fe and Ca obtained from these measurements were consistent with those obtained using an optical microscope. The depth profiles show that there are layer structures in the two samples. The images show that the distribution of Ca is approximately homogeneous in both painted and unpainted regions. In contrast, Mn appeared only in the painted regions. Meanwhile, the distributions of Fe in the painted and unpainted regions were not the same. The surface topography shows that the pigment of the dark-brown region was coated above the brown region. These conclusions allowed the painting process to be inferred.

  14. 3D Dental Scanner

    Kotek, L.

    2015-01-01

    This paper is about the 3D scanning of plaster dental casts. The main aim of the work is to propose the hardware and software of a 3D scanning system for dental casts. The scanning system uses a camera, a projector and a rotary table. Surface triangulation is used, taking advantage of projecting structured light onto the object being scanned. The rotary table is controlled by a PC, which also synchronizes the camera, projector and rotary table. Control of the stepper motor is prov...
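
    A rough sketch of how the rotary-table geometry is typically handled in such scanners (an illustrative assumption, not the authors' code): each structured-light profile measured in the fixed camera frame is rotated back by the current table angle before being merged into a single point cloud of the cast.

```python
import numpy as np

def merge_turntable_scans(profiles, angles_deg):
    """Merge structured-light profiles taken at different turntable angles.

    profiles   -- list of (N, 3) arrays of triangulated points, each measured
                  in the fixed camera frame with the table axis along z
    angles_deg -- turntable rotation angle (degrees) for each profile
    Returns a single (M, 3) point cloud in the object (cast) frame.
    """
    cloud = []
    for pts, a in zip(profiles, angles_deg):
        t = np.deg2rad(a)
        # Rotate the measured points back by the table angle about the z axis.
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        cloud.append(pts @ rz.T)
    return np.vstack(cloud)

# Hypothetical usage: 36 profiles, one every 10 degrees of table rotation.
profiles = [np.random.rand(500, 3) for _ in range(36)]
print(merge_turntable_scans(profiles, np.arange(0, 360, 10)).shape)
```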

  15. An approach based on defense-in-depth and diversity (3D) for the reliability assessment of digital instrument and control systems of nuclear power plants

    Silva, Paulo Adriano da; Saldanha, Pedro L.C., E-mail: pasilva@cnen.gov.b, E-mail: Saldanha@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coord. Geral de Reatores Nucleares; Melo, Paulo F. Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao em Engenharia. Programa de Engenharia Nuclear; Araujo, Ademir L. de [Associacao Brasileira de Ensino Universitario (UNIABEU), Angra dos Reis, RJ (Brazil)

    2011-07-01

    The adoption of digital instrumentation and control (I and C) technology has been slow in nuclear power plants. The reason has been the difficulty of obtaining evidence to prove that digital I and C systems can be used in nuclear safety systems, for example the Reactor Protection System (RPS), while ensuring the proper operation of all their functions. This technology offers a potential improvement in safety and reliability. However, there is still no consensus about the model to be adopted for digital system software in reliability studies. This paper presents the 3D methodology approach to assess digital I and C reliability. It is based on the study of operational events occurring in NPPs. It makes it easy to identify, in general, the level of I and C system reliability and its key vulnerabilities, enabling regulatory actions to be traced to minimize or avoid them. This approach makes it possible to identify the main types of digital I and C system failure, including those with the potential for common cause failures, as well as to evaluate the dominant failure modes. The MAFIC-D software was developed to assist in implementing the relationships between the reliability criteria, the analysis of those relationships and data collection. The results obtained through this tool proved to be satisfactory; they support the process of regulatory decision-making for licensing digital I and C in NPPs and can still be used to monitor the performance of digital I and C post-licensing during the lifetime of the system, providing the basis for the elaboration of checklists for regulatory inspections. (author)

  16. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and
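
    As an illustration of the weighted-sum scalarisation that such an interactive Pareto approach can build on (a sketch under assumed names, not the authors' planning system), non-negative beam-on times can be found by minimising a weighted combination of a tumour-dose objective and an organ-at-risk objective, where the dose-influence matrices map beam-on times to voxel doses.

```python
import numpy as np
from scipy.optimize import lsq_linear

def beam_on_times(dose_tumour, dose_oar, prescription, w_tumour, w_oar):
    """Weighted-sum sketch of inverse beam-on-time optimisation.

    dose_tumour  -- (n_tumour_voxels, n_beams) dose per unit beam-on time
    dose_oar     -- (n_oar_voxels, n_beams) dose per unit beam-on time
    prescription -- target dose for each tumour voxel
    w_tumour, w_oar -- operator-chosen weights trading off the objectives
    Returns non-negative beam-on times minimising the weighted residual.
    """
    a = np.vstack([np.sqrt(w_tumour) * dose_tumour,
                   np.sqrt(w_oar) * dose_oar])
    b = np.concatenate([np.sqrt(w_tumour) * prescription,
                        np.zeros(dose_oar.shape[0])])   # OAR target: zero dose
    return lsq_linear(a, b, bounds=(0.0, np.inf)).x

# Hypothetical toy problem: 8 beams, random dose-influence matrices.
rng = np.random.default_rng(0)
times = beam_on_times(rng.random((200, 8)), rng.random((150, 8)),
                      np.full(200, 2.0), w_tumour=1.0, w_oar=0.2)
print(times)
```

    Sweeping the weights traces out different Pareto optimal trade-offs, which is roughly the space the interactive planner navigates between.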

  17. Auto convergence for stereoscopic 3D cameras

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer generated content is typically viewed at a close distance which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It is tested using an OMAP4 embedded prototype stereo 3-D camera. It significantly improves 3-D viewing comfort.
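
    A minimal sketch of the projection-correlation step described above (illustrative only; the variable names and shift policy are assumptions, not the TI implementation): a global disparity is estimated by correlating the vertical projections of the two views, and both frames are then shifted horizontally toward the target disparity of the chosen convergence point.

```python
import numpy as np

def estimate_disparity(left, right, max_disp=64):
    """Estimate a global disparity by correlating vertical projections."""
    pl = left.mean(axis=0)    # vertical projection of the left view
    pr = right.mean(axis=0)   # vertical projection of the right view
    best, best_score = 0, -np.inf
    for d in range(-max_disp, max_disp + 1):
        score = np.dot(pl, np.roll(pr, d))   # simple correlation score
        if score > best_score:
            best, best_score = d, score
    return best

def converge(left, right, target_disp=0):
    """Shift both frames horizontally so the scene converges at target_disp."""
    shift = (estimate_disparity(left, right) - target_disp) // 2
    return np.roll(left, -shift, axis=1), np.roll(right, shift, axis=1)

# Toy example: the right view is the left view shifted by 10 pixels.
left = np.random.rand(240, 320)
right = np.roll(left, 10, axis=1)
print(estimate_disparity(left, right))   # -10 under this roll/sign convention
```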

  18. Choice-related Activity in the Anterior Intraparietal Area during 3-D Structure Categorization.

    Verhoef, Bram-Ernst; Michelet, Pascal; Vogels, Rufin; Janssen, Peter

    2015-06-01

    The anterior intraparietal area (AIP) of macaques contains neurons that signal the depth structure of disparity-defined 3-D shapes. Previous studies have suggested that AIP's depth information is used for sensorimotor transformations related to the efficient grasping of 3-D objects. We trained monkeys to categorize disparity-defined 3-D shapes and examined whether neuronal activity in AIP may also underlie pure perceptual categorization behavior. We first show that neurons with a similar 3-D shape preference cluster in AIP. We then demonstrate that the monkeys' 3-D shape discrimination performance depends on the position in depth of the stimulus and that this performance difference is reflected in the activity of AIP neurons. We further reveal correlations between the neuronal activity in AIP and the subject's subsequent choices and RTs during 3-D shape categorization. Our findings propose AIP as an important processing stage for 3-D shape perception. PMID:25514653

  19. CNS Orientations, Safety Objectives and Implementation of the Defence in Depth Concept

    Full text: The 6th Review Meeting of the Convention on Nuclear Safety (CNS) will be convened in Vienna next year for two weeks, from Monday March 24th to Friday April 4th, 2014. The consequences of and the lessons learnt from the accident that occurred at the Fukushima Daiichi nuclear power plant will be a major issue. The 2nd Extraordinary Meeting of the CNS in August 2012 was devoted entirely to the Fukushima Daiichi accident. One of its main conclusions was Conclusion 17 of the summary report, which says: ''Nuclear power plants should be designed, constructed and operated with the objectives of preventing accidents and, should an accident occur, mitigating its effects and avoiding off-site contamination. The Contracting Parties also noted that regulatory authorities should ensure that these objectives are applied in order to identify and implement appropriate safety improvements at existing plants''. The wording of the two sentences of Conclusion 17, the first dedicated to newly built reactors and the second to existing plants, can be improved and clarified. But obviously the issue of the off-site consequences of an accident is fundamental. So the in-depth question arises: what can and should be done to achieve these safety objectives? And, in particular, how can the definition and then the implementation of the Defence in Depth concept be improved? From my point of view, this is clearly the main issue of this Conference. (author)

  20. Fluid migration associated with allochthonous salt in the Northern Gulf of mexico: an analysis using 3D depth migrated seismic data

    House, William H.; Pritchett, John A. [Amoco Production Co. (United States)

    1995-12-31

    The emplacement of allochthonous salt bodies in the Northern Gulf of Mexico, and their subsequent deformation to form secondary salt features, involves the upward movement of salt along discrete feeder conduits. The detachment of allochthonous salt from a deeper source results in the collapse of these conduits. Structural disruption associated with this collapse creates a permeability pathway to allow enhanced fluid migration from depth into shallower section. Some of the high pressure fluids migrating upward along these permeability conduits will impinge on a permeability barrier created by the horizontal to sub-horizontal base of allochthonous salt sheets. Additional high pressure fluids associated with shale compaction and dewatering will also move upward to the base of salt permeability barrier. The constant influx of high pressure fluids into the zone immediately below salt prevents the shale in this zone from undergoing normal compaction, resulting in the formation of a lithologically distinct gumbo zone. This gumbo zone has been encountered in many of the subsalt wells drilled in the Gulf of Mexico. Abnormally high pore pressures are often associated with this gumbo zone beneath the salt sheets covering the southern shelf area, offshore Louisiana. Formation pressure gradients within this zone can be as much as 0.04 psi/ft (0.8 ppg) above the regional pressure gradient. (author). 4 refs., 1 fig

  1. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free floating experimental platform developed for the acquisition of long duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES Satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver and a combined rotation-translation maneuver. The effectiveness of each maneuver to generate slosh is determined by the deviation of the experiment's trajectory as compared to a dry mass trajectory. To fully capture the effect of liquid re-distribution on experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  2. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow, 2000 and McNeill, 1992, this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students in constructing the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that in the laboratory setting language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  3. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  4. Tangible 3D modeling of coherent and themed structures

    Walther, Jeppe Ullè; Bærentzen, J. Andreas; Aanæs, Henrik

    2016-01-01

    We present CubeBuilder, a system for interactive, tangible 3D shape modeling. CubeBuilder allows the user to create a digital 3D model by placing physical, non-interlocking cubic blocks. These blocks may be placed in a completely arbitrary fashion and combined with other objects. In effect, this turns the task of 3D modeling into a playful activity that hardly requires any learning on the part of the user. The blocks are registered using a depth camera and entered into the cube graph where each block is a node and adjacent blocks are connected by edges. From the cube graph, we transform the...
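
    The cube graph mentioned above can be illustrated with a small sketch (an assumed data structure for illustration, not the CubeBuilder source): each detected block becomes a node keyed by its integer grid position, and blocks whose positions differ by one along a single axis are connected by edges.

```python
from itertools import product

def build_cube_graph(block_positions):
    """Build an adjacency map for a set of axis-aligned unit cubes.

    block_positions -- iterable of (i, j, k) integer grid coordinates
    Returns {position: set of positions of face-adjacent blocks}.
    """
    nodes = set(map(tuple, block_positions))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    graph = {p: set() for p in nodes}
    for p in nodes:
        for dx, dy, dz in offsets:
            q = (p[0] + dx, p[1] + dy, p[2] + dz)
            if q in nodes:
                graph[p].add(q)   # face-adjacent blocks share an edge
    return graph

# A small 2 x 2 x 1 slab of blocks.
blocks = list(product(range(2), range(2), range(1)))
graph = build_cube_graph(blocks)
print({p: sorted(n) for p, n in graph.items()})
```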

  5. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD $5,000. This scanner uses visible light sensing to capture both structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  6. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…

  7. 3D video

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  8. 3D Animation Essentials

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  9. Automatic detection of artifacts in converted S3D video

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
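
    A rough sketch of the edge-sharpness comparison described above (the names and the sharpness measure are assumptions, not the authors' detector): boundary sharpness is approximated by the gradient magnitude sampled along an object edge in each view, and a large left/right ratio flags a potential conversion artifact.

```python
import numpy as np

def edge_sharpness(view, ys, xs):
    """Mean gradient magnitude sampled along an object boundary."""
    gy, gx = np.gradient(view.astype(float))
    return np.hypot(gx, gy)[ys, xs].mean()

def sharpness_mismatch(left, right, boundary_left, boundary_right):
    """Ratio of edge sharpness between corresponding boundaries in two views.

    boundary_left / boundary_right -- (ys, xs) index arrays of matched
    boundary pixels (e.g. obtained from an estimated disparity map).
    A ratio far from 1 suggests an edge-sharpness mismatch artifact.
    """
    s_left = edge_sharpness(left, *boundary_left)
    s_right = edge_sharpness(right, *boundary_right)
    return max(s_left, s_right) / (min(s_left, s_right) + 1e-9)

# Toy frames: a sharp edge in the left view, a blurred ramp in the right view.
left = np.zeros((100, 100))
left[:, 50:] = 1.0
right = np.clip((np.arange(100) - 45) / 10.0, 0, 1)[None, :].repeat(100, axis=0)
ys, xs = np.arange(100), np.full(100, 50)
print(sharpness_mismatch(left, right, (ys, xs), (ys, xs)))  # ~5: left edge is sharper
```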

  10. Calibrating a depth camera but ignoring it for SLAM

    Castro, Daniel Herrera

    2014-01-01

    Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Thus, accurate depth camera calibration is a very relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external ...

  11. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  12. Joint spatial-depth feature pooling for RGB-D object classification

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    RGB-D camera can provide effective support with additional depth cue for many RGB-D perception tasks beyond traditional RGB information. However, current feature representations based on RGB-D camera utilize depth information only to extract local features, without considering it for the improvem...

  13. Photopolymers in 3D printing applications

    Pandey, Ramji

    2014-01-01

    3D printing is an emerging technology with applications in several areas. The flexibility of the 3D printing system to use a variety of materials and create any object makes it an attractive technology. Photopolymers are one of the materials used in 3D printing, with the potential to make products with better properties. Due to the numerous applications of photopolymers and 3D printing technologies, this thesis is written to provide information about the various 3D printing technologies with particul...

  14. Natural fibre composites for 3D Printing

    Pandey, Kapil

    2015-01-01

    3D printing has been a common option for prototyping. Not all materials are suitable for 3D printing. Various studies have been done, and many are still ongoing, regarding the suitability of materials for 3D printing. This thesis work discloses the possibility of 3D printing certain polymer composite materials. The main objective of this thesis work was to study the possibility of 3D printing a polymer composite material composed of natural fibre composite and various different ...

  15. 3D modelling for multipurpose cadastre

    Abduhl Rahman, A.; Van Oosterom, P.J.M.; Hua, T.C.; Sharkawi, K.H.; Duncan, E.E.; Azri, N.; Hassan, M.I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D

  16. Labeling 3D scenes for Personal Assistant Robots

    Koppula, Hema Swetha; Anand, Abhishek; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a...

  17. Color 3D Reverse Engineering

    2002-01-01

    This paper presents a principle and a method of color 3D laser scanning measurement. Based on the fundamental monochrome 3D measurement study, color information capture, color texture mapping, coordinate computation and other techniques are performed to achieve color 3D measurement. The system is designed and composed of a line laser light emitter, one color CCD camera, a motor-driven rotary filter, a circuit card and a computer. Two steps in capturing object's images in the measurement process: Firs...

  18. Solid works 3D

    This book explains modeling with SolidWorks 3D and the application of 3D CAD/CAM. The contents of this book cover an outline of modeling (CAD, 2D and 3D), the composition of SolidWorks, sketching methods, entering and fixing dimensions, selecting projections, choosing constraint conditions, sketch practice, making parts, reworking parts, 3D modeling, revising 3D models, using pattern functions, modeling necessities, assembling, floor plans, 3D modeling methods, practice floor plans for industrial engineer data-aided manufacturing, and processing of the CAD/CAM interface.

  19. 3D PHOTOGRAPHS IN CULTURAL HERITAGE

    Schuhr, W.; J. D. Lee; Kiel, St.

    2013-01-01

    This paper on providing "oo-information" (= objective object-information) on cultural monuments and sites, based on 3D photographs, is also a contribution of CIPA task group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate interest in 3D photography among scientists as well as amateurs, 3D masterpieces are presented. It is shown exemplarily that, due to their high documentary value ("near reality"), 3D photographs support, e.g., the recording, the visualization, the interpret...

  20. 3D proton beam micromachining

    Focused high energy ion beam micromachining is the newest of the micromachining techniques. There are about 50 scanning proton microprobe facilities worldwide, but so far only a few of them have shown activity in this promising field. High energy ion beam micromachining using a direct-write scanning MeV ion beam is capable of producing 3D microstructures and components with well defined lateral and depth geometry. The technique has high potential in the manufacture of 3D molds, stamps, and masks for X-ray lithography (LIGA), and also in the rapid prototyping of microcomponents either for research purposes or for component testing prior to batch production. (R.P.)

  1. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex
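
    As a small illustration of one of the scalar quantities listed above, the sketch below computes vorticity magnitude from a velocity field by finite differences. It is a numpy sketch on a uniform Cartesian grid and is not PLOT3D code, which handles its own grid formats.

```python
import numpy as np

def vorticity_magnitude(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """|curl(u, v, w)| on a uniform Cartesian grid (arrays indexed [z, y, x])."""
    du_dz, du_dy, du_dx = np.gradient(u, dz, dy, dx)
    dv_dz, dv_dy, dv_dx = np.gradient(v, dz, dy, dx)
    dw_dz, dw_dy, dw_dx = np.gradient(w, dz, dy, dx)
    wx = dw_dy - dv_dz      # x component of vorticity
    wy = du_dz - dw_dx      # y component
    wz = dv_dx - du_dy      # z component
    return np.sqrt(wx**2 + wy**2 + wz**2)

# Toy solid-body rotation about z: u = -y, v = x, w = 0, so |curl| = 2.
z, y, x = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16),
                      np.linspace(0, 1, 16), indexing="ij")
print(vorticity_magnitude(-y, x, np.zeros_like(x),
                          dx=x[0, 0, 1] - x[0, 0, 0],
                          dy=y[0, 1, 0] - y[0, 0, 0],
                          dz=z[1, 0, 0] - z[0, 0, 0]).mean())
```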

  2. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  3. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  4. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  5. Open 3D Projects

    Felician ALECU

    2010-01-01

    Many professionals and 3D artists consider Blender to be the best open source solution for 3D computer graphics. Its main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles, and real-time 3D/game creation.

  6. Volumetric 3D Display System with Static Screen

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at the true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  7. Cardiac dosimetry for adjuvant left-sided breast radiotherapy; patterns with 2D- versus 3D-era planning and correlates of coronary dose with maximum depth of myocardial exposure

    The purpose of this study was to evaluate the cardiac dosimetry delivered before and after routine 3D CT whole-breast radiotherapy planning, including cardiac contouring and the relevance of a 15-mm maximum myocardial depth (MMD) planning tolerance threshold. The PULp FICTion study permitted cardiac dosimetry comparisons for 140 patients (70 in the 'before-contouring era' (BC) and 70 in the 'post-contouring era' (PC)). Comparisons were made of MMD and dosimetry for whole heart, anterior myocardium and left anterior descending (LAD)/coronary artery (overall, superior and inferior) by contouring era. The MMD mean was 15.6mm (range 1-40). If the internal mammary chain (IMC) was treated, the MMD increased from 15 to 27.7mm (P15mm, and the proportion of patients with a mean dose <40% of the prescribed breast dose fell from 48% to 8%. Changes in cardiac dosimetry associated with routine cardiac contouring have initially been minor and restricted to low-risk patients. A 15-mm MMD reasonably represents a transition from low mean distal LAD doses to substantial doses.

  8. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  9. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  10. 3D annotation and manipulation of medical anatomical structures

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar setting of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or it computes the depth information out of a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in the three-dimensional volume.
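
    The pose computation from the four known LED positions follows a standard perspective-n-point formulation; below is a hedged sketch using OpenCV's solvePnP, where the LED coordinates, camera intrinsics, and ground-truth pose are placeholders rather than values from the paper.

```python
import numpy as np
import cv2

# Known 3D positions (mm) of four non-coplanar infrared LEDs on the device.
led_points_3d = np.array([[0.0,   0.0,  0.0],
                          [80.0,  0.0,  0.0],
                          [0.0,  80.0,  0.0],
                          [40.0, 40.0, 30.0]], dtype=np.float64)

# Assumed pinhole intrinsics of the tracking camera.
camera_matrix = np.array([[1300.0,    0.0, 512.0],
                          [   0.0, 1300.0, 384.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)            # assume negligible lens distortion

# Synthesise image points from a known ground-truth pose for the demo.
rvec_true = np.array([0.1, -0.2, 0.05]).reshape(3, 1)
tvec_true = np.array([10.0, -20.0, 600.0]).reshape(3, 1)
led_points_2d, _ = cv2.projectPoints(led_points_3d, rvec_true, tvec_true,
                                     camera_matrix, dist_coeffs)

# Standard pose estimation from the four 3D-2D correspondences.
ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d.reshape(-1, 2),
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
rotation_matrix, _ = cv2.Rodrigues(rvec)
print(ok, tvec.ravel())              # should approximately recover tvec_true
```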

  11. 3d-3d correspondence revisited

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  12. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  13. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  14. IZDELAVA TISKALNIKA 3D

    Brdnik, Lovro

    2015-01-01

    This diploma thesis analyses the current state of 3D printers on the market. The development and the operating principles of 3D printers are presented, along with the types of 3D printers and their advantages and disadvantages. The structure and operation of stepper motors are described in more detail, and measurements of stepper motors were carried out. The software used to operate 3D printers and the components needed to build one are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ...

  15. Influence of object location in cone beam computed tomography (NewTom 5G and 3D Accuitomo 170) on gray value measurements at an implant site

    A. Parsa; N. Ibrahim; B. Hassan; P. van der Stelt; D. Wismeijer

    2014-01-01

    Objectives: The aim of this study was to determine the gray value variation at an implant site with different object location within the selected field of view (FOV) in two cone beam computed tomography (CBCT) scanners. Methods: A 1-cm-thick section from the edentulous region of a dry human mandible w

  16. Single-shot 3D motion picture camera with a dense point cloud

    Willomitzer, Florian

    2016-01-01

    We introduce a method and a 3D-camera for single-shot 3D shape measurement, with unprecedented features: The 3D-camera does not rely on pattern codification and acquires object surfaces at the theoretical limit of information efficiency: Up to 30% of the available camera pixels display independent (not interpolated) 3D points. The 3D-camera is based on triangulation with two properly positioned cameras and a projected multi-line pattern, in combination with algorithms that solve the ambiguity problem. The projected static line pattern enables 3D acquisition of fast processes and the capture of 3D motion pictures. The depth resolution is at its physical limit, defined by electronic noise and speckle noise. The requisite low-cost technology is simple.

  17. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
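
    Region growing, the first of the basic algorithms mentioned, can be sketched as follows (a generic illustration on a 3D class-label image, not the authors' implementation): starting from a seed voxel, 6-connected neighbours with the same class label are added until no new voxels qualify.

```python
import numpy as np
from collections import deque

def region_grow(class_image, seed):
    """Grow a 6-connected region of voxels sharing the seed's class label.

    class_image -- 3D integer array of tissue-class labels
    seed        -- (z, y, x) starting voxel
    Returns a boolean mask of the extracted object.
    """
    target = class_image[seed]
    mask = np.zeros(class_image.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < class_image.shape[i] for i in range(3)):
                if not mask[n] and class_image[n] == target:
                    mask[n] = True
                    queue.append(n)
    return mask

# Toy class image: label 2 forms a small cube inside background label 0.
labels = np.zeros((20, 20, 20), dtype=int)
labels[5:10, 5:10, 5:10] = 2
print(region_grow(labels, (7, 7, 7)).sum())   # 125 voxels
```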

  18. Remote 3D Medical Consultation

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  19. 3D Printing Making the Digital Real .

    Miss Prachi More

    2013-07-01

    Full Text Available 3D printing is a quickly expanding field, with the popularity and uses of 3D printers growing every day. 3D printing can be used to prototype, create replacement parts, and is even versatile enough to print prostheses and medical implants. It will have a growing impact on our world, as more and more people gain access to these amazing machines.[1] In this article, we attempt to give an introduction to the technology. 3D printing is a method of converting a virtual 3D model into a physical object. 3D printing is a category of rapid prototyping technology. 3D printers typically work by printing successive layers on top of the previous one to build up a three-dimensional object. 3D printing is a revolutionary method for creating 3D models with the use of inkjet technology.[7]

  20. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); the location of the jettisoned object was then calculated for only a few frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
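    As a rough illustration of the triangulation step described above, the sketch below (NumPy; the projection matrices and pixel coordinates are hypothetical placeholders rather than ISS data) recovers a 3D point from its images in two calibrated cameras with the standard linear (DLT) method; applying it per frame and differencing the positions over time gives a velocity estimate.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # homogeneous -> Euclidean

def velocity(points, dt):
    """Finite-difference velocity vectors from per-frame 3D positions."""
    return np.diff(np.asarray(points), axis=0) / dt

# Hypothetical example with one pair of synchronized frames:
# X0 = triangulate(P_cam1, P_cam2, (412.3, 220.1), (388.7, 231.4))
```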

  1. Crowded Field 3D Spectroscopy

    Becker, T; Roth, M M; Becker, Thomas; Fabrika, Sergei; Roth, Martin M.

    2003-01-01

    The quantitative spectroscopy of stellar objects in complex environments is mainly limited by the ability of separating the object from the background. Standard slit spectroscopy, restricting the field of view to one dimension, is obviously not the proper technique in general. The emerging Integral Field (3D) technique with spatially resolved spectra of a two-dimensional field of view provides a great potential for applying advanced subtraction methods. In this paper an image reconstruction algorithm to separate point sources and a smooth background is applied to 3D data. Several performance tests demonstrate the photometric quality of the method. The algorithm is applied to real 3D observations of a sample Planetary Nebula in M31, whose spectrum is contaminated by the bright and complex galaxy background. The ability of separating sources is also studied in a crowded stellar field in M33.

  2. 3D laptop for defense applications

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  3. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  4. 3D virtuel udstilling

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the Architecture School's courtyard with a virtual exhibition of graduation projects from the summer 2006 graduation. 10 pp.

  5. Underwater 3D filming

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable successes at the 3D movie box office due to the overall quality of its products. Special environments such as space ("Gravity") and the underwater realm look perfect to be reproduced in 3D. "Filming in space" was possible in "Gravity" using special effects and computer graphics. The underwater realm is still difficult to handle. Until not long ago, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team realized a perfect tool to film underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  6. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Abstract Background/Aims: The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (eg, 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  7. Sliding Adjustment for 3D Video Representation

    Galpin Franck

    2002-01-01

    Full Text Available This paper deals with video coding of static scenes viewed by a moving camera. We propose an automatic way to encode such video sequences using several 3D models. Contrary to prior art in model-based coding where the 3D models have to be known, the 3D models are automatically computed from the original video sequence. We show that several independent 3D models provide the same functionalities as one single 3D model, and avoid some drawbacks of the previous approaches. To achieve this goal we propose a novel sliding adjustment algorithm, which ensures consistency of successive 3D models. The paper presents a method to automatically extract the set of 3D models and associated camera positions. The obtained representation can be used for reconstructing the original sequence, or virtual ones. It also enables 3D functionalities such as synthetic object insertion, lighting modification, or stereoscopic visualization. Results on real video sequences are presented.

  8. Mobile 3D tomograph

    Mobile tomographs often have the problem that high spatial resolution is impossible owing to the position or setup of the tomograph. While the tree tomograph developed by Messrs. Isotopenforschung Dr. Sauerwein GmbH worked well in practice, it is no longer used as the spatial resolution and measuring time are insufficient for many modern applications. The paper shows that the mechanical base of the method is sufficient for 3D CT measurements with modern detectors and X-ray tubes. CT measurements with very good statistics take less than 10 min. This means that mobile systems can be used, e.g. in examinations of non-transportable cultural objects or monuments. Enhancement of the spatial resolution of mobile tomographs capable of measuring in any position is made difficult by the fact that the tomograph has moving parts and will therefore have weight shifts. With the aid of tomographies whose spatial resolution is far higher than the mechanical accuracy, a correction method is presented for direct integration of the Feldkamp algorithm.

  9. New portable FELIX 3D display

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. This system is compact, light, modular and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are investigated. The first one is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software is provided with a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast and high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer aided design, and science, as well as entertainment.

  10. User-centered 3D geovisualisation

    Nielsen, Anette Hougaard

    2004-01-01

    3D Geovisualisation is a multidisciplinary science mainly utilizing geographically related data, developing software systems for 3D visualisation and producing relevant models. In this paper the connection between geoinformation stored as 3D objects and the end user is of special interest. In a broader perspective, the overall aim is to develop a language in 3D Geovisualisation gained through usability projects and the development of a theoretical background. A conceptual level of user-centered 3D Geovisualisation is introduced by applying a categorisation originating from Virtual Reality. The conceptual level is used to structure and organise user-centered 3D Geovisualisation into four categories: representation, rendering, interface and interaction. The categories reflect a process of development of 3D Geovisualisation where objects can be represented verisimilar to the real world...

  11. Simultaneous Estimation of Material Properties and Pose for Deformable Objects from Depth and Color Images

    Fugl, Andreas Rune; Jordt, Andreas; Petersen, Henrik Gordon;

    2012-01-01

    In this paper we consider the problem of estimating the 6D pose and material properties of a deformable object grasped by a robot gripper. To estimate the parameters we minimize an error function incorporating visual and physical correctness. Through simulated and real-world experiments we demonstrate …

  12. FastScript3D - A Companion to Java 3D

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.

  13. Markerless 3D Face Tracking

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich;

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently combining local regressors using nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  14. Dimensional accuracy of 3D printed vertebra

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  15. Blender 3D cookbook

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it's also aimed at intermediate Blender users who simply want to go some steps further. It's taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and that you are familiar with basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  16. Increasing the depth of field in an LWIR system for improved object identification

    Kubala, Kenneth S.; Wach, Hans B.; Chumachenko, Vladislav V.; Dowski, Edward R., Jr.

    2005-05-01

    In a long wave infrared (LWIR) system there is the need to capture the maximum amount of information about objects over a broad volume for identification and classification by a human or machine observer. In a traditional imaging system the optics limit the capture of this information to a narrow object volume. This limitation can hinder the observer's ability to navigate and/or identify friend or foe in combat or civilian operations. By giving the observer a larger volume of clear imagery, their ability to perform will drastically improve. The system presented allows the efficient capture of object information over a broad volume and is enabled by a technology called Wavefront Coding. A Wavefront Coded system employs the joint optimization of the optics, detection and signal processing. Through a specialized design of the system's optical phase, the system becomes invariant to the aberrations that traditionally limit the effective volume of clear imagery. In the process of becoming invariant, the specialized phase creates a uniform blur across the detected image. Signal processing is applied to remove the blur, resulting in a high quality image. A device-specific noise model is presented that was developed for the optimization and accurate simulation of the system. Additionally, still images taken from a video feed from the as-built system are shown, allowing the side by side comparison of a Wavefront Coded and traditional imaging system.
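    The abstract states only that signal processing removes the known, uniform blur introduced by the coded phase; as a hedged illustration of that idea, the sketch below applies a generic frequency-domain Wiener-style deblurring (NumPy; the image, blur kernel and noise-to-signal ratio are hypothetical, and the fielded system's filter design is almost certainly more elaborate).

```python
import numpy as np

def wiener_deblur(image, psf, nsr=1e-2):
    """Remove a known, spatially uniform blur with a Wiener filter.
    image: blurred 2D array; psf: blur kernel (origin assumed at [0, 0]);
    nsr: assumed noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=image.shape)          # transfer function of the blur
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))

# Hypothetical usage on one detected LWIR frame:
# sharp = wiener_deblur(lwir_frame, coded_psf, nsr=5e-3)
```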

  17. 3-D Imaging Systems for Agricultural Applications—A Review

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  18. 3-D Imaging Systems for Agricultural Applications-A Review.

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  19. 3-D Imaging Systems for Agricultural Applications—A Review

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  20. 3D Digital Modelling

    Hundebøl, Jesper

    A wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions. Based on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to - Illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3D Modelling (in Denmark) stabilizes - Examine how 3D Modelling manifests itself in the early design phases of a construction project with a view to discussing the effects hereof for, i.a., the management of the building process. Structure. The paper introduces a few, basic methodological concepts...

  1. Optical tissue clearing improves usability of optical coherence tomography (OCT) for high-throughput analysis of the internal structure and 3D morphology of small biological objects such as vertebrate embryos

    Thrane, Lars; Jørgensen, Thomas Martini; Männer, Jörg

    2014-01-01

    sections through small biological objects at high resolutions. However, due to light scattering within biological tissues, the quality of OCT images drops significantly with increasing penetration depth of the light beam. We show that optical clearing of fixed embryonic organs with methyl benzoate can...

  2. Professional Papervision3D

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  3. Are 3-D Movies Bad for Your Eyes?

    Full Text Available ... to wonder what, if any, effect the technology has on your eyes. Is 3-D technology healthy ... 3-D, which may indicate that the viewer has a problem with focusing or depth perception. Also, ...

  4. A Stereo Vision Framework for 3-D Underwater Mosaicking

    Leone, A.; Diraco, G.; Distante, C.

    2008-01-01

    A framework for seabed 3-D mosaic reconstruction has been presented. The three most troublesome aspects discussed are asynchronous stereo acquisition, depth estimation, and 3-D mosaic registration. The use of an inexpensive asynchronous stereo sequence is explained,

  5. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T

    2014-12-01

    Full Text Available Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and in medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal, including MRI, CT scan, PET and SPECT images. Of these, MRI and CT scan images of an organ are available as a stack of 2D images, taken at different angles. This 2D stack of images is used to get a 3D view of the organ of interest, to aid doctors in easier diagnosis. Existing 3D reconstruction techniques are voxel-based techniques, which try to reconstruct the 3D view based on the intensity value stored at each voxel location. These techniques don’t make use of the shape/depth information available in the 2D image stack. In this work, a 3D reconstruction technique for MRI/CT 2D image stacks, based on Shapelets, has been proposed. Here, the shape/depth information available in each 2D image in the image stack is manipulated to get a 3D reconstruction, which gives a more accurate 3D view of the organ of interest. Experimental results demonstrate the efficiency of the proposed technique.

  6. From 3D view to 3D print

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing, which makes it possible to obtain a solid object from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, it doesn't need any particular workflow: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  7. 3D modelling for multipurpose cadastre

    Abduhl Rahman, A.; P. J. M. Van Oosterom; T. C. Hua; Sharkawi, K.H.; E. E. Duncan; Azri, N.; Hassan, M. I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of these objects for an MPC and its contribution to the knowledge of 3D cadastre, since more and more related agencies attempt to develop or embed 3D components into the MPC. We also intend to describe the initiative by the Malaysian national mapping and cadastral agency (...

  8. 3D-PRINTING OF BUILD OBJECTS

    M. V. Savytskyi; SHATOV S. V.; Ozhyshchenko, O. A.

    2016-01-01

    Raising of the problem. Today, in all spheres of our life, we can observe a constant search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand, more effective in conserving our planet's exhaustible resources and have minimal impact on the environment, and, on the other hand, to ensure a higher quality of the final product. Construction is no exception. One of the new promising technologies ...

  9. A Visual Similarity-Based 3D Search Engine

    Lmaati, Elmustapha Ait; Oirrak, Ahmed El; M.N. Kaddioui

    2009-01-01

    Retrieval systems for 3D objects are required because 3D databases used around the web are growing. In this paper, we propose a visual similarity based search engine for 3D objects. The system is based on a new representation of 3D objects given by a 3D closed curve that captures all information about the surface of the 3D object. We propose a new 3D descriptor, which is a combination of three signatures of this new representation, and we implement it in our interactive web based search engin...

  10. Assessing 3D scan quality through paired-comparisons psychophysics test

    Thorn, Jacob; Pizarro, Rodrigo; Spanlang, Bernhard; Bermell-Garcia, Pablo; Gonzalez-Franco, Mar

    2016-01-01

    Consumer 3D scanners and depth cameras are increasingly being used to generate content and avatars for Virtual Reality (VR) environments and avoid the inconveniences of hand modeling; however, it is sometimes difficult to evaluate quantitatively the mesh quality at which 3D scans should be exported, and whether the object perception might be affected by its shading. We propose using a paired-comparisons test based on psychophysics of perception to do that evaluation. As psychophysics is not s...

  11. 3D Spectroscopic Instrumentation

    Bershady, Matthew A

    2009-01-01

    In this Chapter we review the challenges of, and opportunities for, 3D spectroscopy, and how these have led to new and different approaches to sampling astronomical information. We describe and categorize existing instruments on 4m and 10m telescopes. Our primary focus is on grating-dispersed spectrographs. We discuss how to optimize dispersive elements, such as VPH gratings, to achieve adequate spectral resolution, high throughput, and efficient data packing to maximize spatial sampling for 3D spectroscopy. We review and compare the various coupling methods that make these spectrographs "3D," including fibers, lenslets, slicers, and filtered multi-slits. We also describe Fabry-Perot and spatial-heterodyne interferometers, pointing out their advantages as field-widened systems relative to conventional, grating-dispersed spectrographs. We explore the parameter space all these instruments sample, highlighting regimes open for exploitation. Present instruments provide a foil for future development. We give an...

  12. Herramientas SIG 3D

    Francisco R. Feito Higueruela

    2010-04-01

    Full Text Available Applications of Geographical Information Systems in several fields of archaeology have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for these systems, the GIS Tetrahedron, in which we define the fundamental elements of GIS in order to provide a better understanding of their capabilities. At the same time, the basic 3D characteristics of some commercial and open source software are described, as well as the application to some examples of archaeological research.

  13. Bootstrapping 3D fermions

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C T . We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  14. Interaktiv 3D design

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential for interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the aim of enabling a multitude of plan options and a multitude of facade and spatial configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to assemble and test the endless range of building types that the system was conceived and developed for.

  15. TOWARDS: 3D INTERNET

    Ms. Swapnali R. Ghadge

    2013-01-01

    In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot o...

  16. Spatial data modelling for 3D GIS

    Abdul-Rahman, Alias

    2007-01-01

    This book covers fundamental aspects of spatial data modelling, specifically the aspect of three-dimensional (3D) modelling and structuring. Realisation of a "true" 3D GIS spatial system needs a lot of effort, and the process is taking place in various research centres and universities in some countries. The development of spatial data modelling for 3D objects is the focus of this book.

  17. Tangible 3D Modelling

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through...

  18. Shaping 3-D boxes

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data...

  19. 3D Harmonic Echocardiography:

    M.M. Voormolen

    2007-01-01

    Three-dimensional (3D) echocardiography has recently developed from an experimental technique in the '90s towards an imaging modality for daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique

  20. Labeling 3D scenes for Personal Assistant Robots

    Koppula, Hema Swetha; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

    Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. We use this data to build 3D point clouds of a full scene. In this paper, we address the task of labeling objects in this 3D point cloud of a complete indoor scene such as an office. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. The model admits efficient approximate inference, and we train it using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views, having 2495 segments labeled with 27 object classes), we get a performance of 84.06% in labeling 17 object classes for offices, and 73.38% in labeling 17 object classe...

  1. Parametrizable cameras for 3D computational steering

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical man

  2. Can 3D Printing change your business?

    Unver, Ertu

    2013-01-01

    This presentation was given to businesses/companies with an interest in 3D Printing and Additive Manufacturing in West Yorkshire, UK. Organised by the Calderdale and Kirklees Manufacturing Alliance, http://www.ckma.co.uk/. By Dr Ertu Unver, Senior Lecturer / Product Design / MA 3D Digital Design / University of Huddersfield. Location: 3M BIC. Date: 11th April. Time: 5.30 – 8pm. Additive manufacturing or 3D printing is a process of making three-dimensional (3D) objects from...

  3. A perceptual preprocess method for 3D-HEVC

    Shi, Yawen; Wang, Yongfang; Wang, Yubing

    2015-08-01

    A perceptual preprocessing method for 3D-HEVC coding is proposed in this paper. First, we propose a new JND model, which accounts for the luminance contrast masking effect, the spatial masking effect, the temporal masking effect, and saliency characteristics as well as depth information. We utilize the spectral residual approach to obtain the saliency map and build a visual saliency factor based on it. In order to distinguish the sensitivity of objects at different depths, we segment each texture frame into foreground and background with an automatic threshold selection algorithm using the corresponding depth information, and then build a depth weighting factor. A JND modulation factor, built as a linear combination of the visual saliency factor and the depth weighting factor, adjusts the JND threshold. We then apply the proposed JND model to 3D-HEVC for residual filtering and distortion coefficient processing. In the filtering process, the residual value is set to zero if the JND threshold is greater than the residual value; otherwise, the JND threshold is subtracted from the residual value. Experimental results demonstrate that the proposed method achieves an average bit rate reduction of 15.11% compared to the original coding scheme with HTM 12.1, while maintaining the same subjective quality.
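    The residual filtering rule quoted above maps directly onto a few lines of array code. The sketch below (NumPy; the residual block and JND threshold map are hypothetical stand-ins for quantities produced inside the 3D-HEVC encoder, and the sign handling is an assumption) zeroes residuals within the JND threshold and shrinks the remaining ones by that threshold.

```python
import numpy as np

def jnd_filter_residual(residual, jnd):
    """Perceptual residual filtering: values within the JND threshold are set
    to zero; larger values are reduced by the threshold (sign preserved)."""
    mag = np.abs(residual)
    return np.where(mag <= jnd, 0.0, np.sign(residual) * (mag - jnd))

# Hypothetical 4x4 residual block and per-pixel JND threshold map:
# filtered = jnd_filter_residual(res_block, jnd_map)
```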

  4. 3D Printed Robotic Hand

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the appearance and mobility of a real human hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators, since they have more complex assembly processes.

  5. Forensic 3D Scene Reconstruction

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene

  6. Forensic 3D Scene Reconstruction

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  7. FROM 3D MODEL DATA TO SEMANTICS

    My Abdellah Kassimi

    2012-01-01

    Full Text Available Semantic-based 3D model retrieval systems have become necessary since the increase in 3D model databases. In this paper, we propose a new method for the mapping problem between 3D model data and semantic data involved in semantic-based retrieval for 3D models given by polygonal meshes. First, we focus on extracting invariant descriptors from the 3D models and analyzing them for efficient semantic annotation and to improve the retrieval accuracy. Selected shape descriptors provide a set of terms commonly used to describe a set of objects visually using linguistic terms, and are used as semantic concepts to label 3D models. Second, spatial relationships representing directional, topological and distance relationships are used to derive other high-level semantic features and to avoid the problem of automatic 3D model annotation. Based on the resulting semantic annotation and spatial concepts, an ontology for 3D model retrieval is constructed and other concepts can be inferred. This ontology is used to find similar 3D models for a given query model. We adopted the query-by-semantic-example approach, in which the annotation is performed mostly automatically. The proposed method is implemented in our 3D search engine (SB3DMR), tested using the Princeton Shape Benchmark Database.

  8. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that the literature reports an angle of view (AOV) of around 10° as the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to approximately 2 times, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the two-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.
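    A general calibration procedure of the kind mentioned above can be sketched with an open-source library as follows (Python/OpenCV; the image list, chessboard geometry and square size are hypothetical, and the paper's own target is photolithographic rather than the printed chessboard assumed here).

```python
import cv2
import numpy as np

def calibrate_pinhole(image_paths, pattern_size=(9, 6), square_size=0.5):
    """Estimate pinhole (close-range calibration model) parameters from
    images of a planar pattern. square_size sets the metric unit."""
    # Planar object points of the pattern, z = 0
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points, shape = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        shape = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, shape, None, None)
    return rms, K, dist

# Hypothetical usage:
# rms_error, camera_matrix, distortion = calibrate_pinhole(glob.glob("calib/*.png"))
```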

  9. 3D animace

    Klusoň, Jindřich

    2010-01-01

    Computer animation is of growing importance and application worldwide. As technology expands, the quality of the final animation increases, as does the number of 3D animation software packages. This thesis surveys current animation software for creating animation in film, the television industry and video games, together with typical user requirements. From these, the best package according to the chosen criteria was selected: Autodesk Maya 2011. This animation software stands out for its tools for creating special effects...

  10. Are 3-D Movies Bad for Your Eyes?

    Full Text Available ... viewer has a problem with focusing or depth perception. Also, the techniques used to create the 3- ... or other conditions that persistently inhibit focusing, depth perception or normal 3-D vision, would have difficulty ...

  11. Monocular 3D see-through head-mounted display via complex amplitude modulation.

    Gao, Qiankun; Liu, Juan; Han, Jian; Li, Xin

    2016-07-25

    The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and then double amplitude-only spatial light modulators (A-SLMs) are employed to reconstruct the 3D images in real time. Since the CAM technique can inherently present true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrated that the proposed system has continuous and wide depth cues, which keeps the observer free of the eye fatigue problem. The dynamic display ability was also tested in the experiments and the results showed the possibility of true 3D interactive display. PMID:27464184

  12. X3D Interoperability and X3D Progress, Common Problems versus Stable Growth [video

    Tourtelotte, Dale R.; Brutzman, Don

    2010-01-01

    In large measure, the vision of making it easier to create and use 3D spatial data has been achieved through The Extensible 3D (X3D) Earth project. This project created a standards-based 3D visualization infrastructure for visualizing all manner of real-world objects and information constructs in a geospatial context. The ability to archive models using stable commercial tools and noncommercial international standards ensures that 3D work can remain accessible and repeatable for many years to...

  13. Influence of hand position on the near-effect in 3D attention

    Pollux, Petra; Bourke, Patrick

    2008-01-01

    Voluntary reorienting of attention in real depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously this effect (initially referred to as the ‘near-effect’) was attributed to access of a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response-hand, ...

  14. Interobserver variation in measurements of Cesarean scar defect and myometrium with 3D ultrasonography

    Madsen, Lene Duch; Glavind, Julie; Uldbjerg, Niels; Dueholm, Margit

    Objectives: To evaluate the Cesarean scar defect depth and the residual myometrial thickness with 3-dimensional (3D) sonography concerning interobserver variation. Methods: Ten women were randomly selected from a larger cohort of Cesarean scar ultrasound evaluations. All women were examined 6-16 months after their first Cesarean section with 2D transvaginal sonography and had 3D volumes recorded. Two observers independently evaluated "off-line" each of the stored 3D volumes. Residual myometrial thickness (RMT) and Cesarean scar defect depth (D) were measured in the sagittal plane at intervals of 1 mm across the entire width of the endometrium. RMT was defined as the shortest distance from the scar defect to the uterine serosa among all RMT measures, and D was defined similarly as the largest depth of the scar defect extending from the uterine cavity. The median value for RMT and D for each...

  15. Automatic balancing of 3D models

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas

    2014-01-01

    3D printing technologies allow for more diverse shapes than are possible with molds, and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent … In these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required, but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet
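    A core ingredient of such balancing is testing whether the object's centre of mass projects into its support region. The sketch below (NumPy/SciPy; the mesh arrays and the base-detection tolerance are hypothetical, and the paper's actual method additionally modifies and rotates the shape) computes the centre of mass of a closed triangle mesh from signed tetrahedra and checks its horizontal projection against the convex hull of the lowest vertices.

```python
import numpy as np
from scipy.spatial import Delaunay

def center_of_mass(vertices, faces):
    """Centre of mass of a closed triangle mesh, assuming uniform density,
    from signed tetrahedra formed with the origin."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    signed_vol = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    centroids = (v0 + v1 + v2) / 4.0      # tetra centroid (4th vertex = origin)
    return (signed_vol[:, None] * centroids).sum(axis=0) / signed_vol.sum()

def is_balanced(vertices, faces, z_tol=1e-4):
    """True if the centre of mass projects inside the convex hull of the
    vertices touching the print bed (lowest z within z_tol).
    Assumes at least three non-collinear contact vertices."""
    com = center_of_mass(vertices, faces)
    base = vertices[vertices[:, 2] <= vertices[:, 2].min() + z_tol][:, :2]
    return Delaunay(base).find_simplex(com[:2]) >= 0

# Hypothetical usage with mesh arrays V (n, 3) and F (m, 3):
# print(is_balanced(V, F))
```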

  16. Massive 3D Supergravity

    Andringa, Roel; de Roo, Mees; Hohm, Olaf; Sezgin, Ergin; Townsend, Paul K

    2009-01-01

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered `massive 3D gravity'. Another is a `new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  17. Massive 3D supergravity

    Andringa, Roel; Bergshoeff, Eric A; De Roo, Mees; Hohm, Olaf [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Sezgin, Ergin [George and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: O.Hohm@rug.n, E-mail: sezgin@tamu.ed, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2010-01-21

    We construct the N=1 three-dimensional supergravity theory with cosmological, Einstein-Hilbert, Lorentz Chern-Simons, and general curvature squared terms. We determine the general supersymmetric configuration, and find a family of supersymmetric adS vacua with the supersymmetric Minkowski vacuum as a limiting case. Linearizing about the Minkowski vacuum, we find three classes of unitary theories; one is the supersymmetric extension of the recently discovered 'massive 3D gravity'. Another is a 'new topologically massive supergravity' (with no Einstein-Hilbert term) that propagates a single (2,3/2) helicity supermultiplet.

  18. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  19. TOWARDS: 3D INTERNET

    Ms. Swapnali R. Ghadge

    2013-08-01

    Full Text Available In today’s ever-shifting media landscape, it can be a complex task to find effective ways to reach your desired audience. As traditional media such as television continue to lose audience share, one venue in particular stands out for its ability to attract highly motivated audiences and for its tremendous growth potential: the 3D Internet. The concept of '3D Internet' has recently come into the spotlight in the R&D arena, catching the attention of many people, and leading to a lot of discussions. Basically, one can look into this matter from a few different perspectives: visualization and representation of information, and creation and transportation of information, among others. All of them still constitute research challenges, as no products or services are yet available or foreseen for the near future. Nevertheless, one can try to envisage the directions that can be taken towards achieving this goal. People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco, companies such as BMW, Toyota, Circuit City, Coca Cola, and Calvin Klein, and scores of universities, including Harvard, Stanford and Penn State.

  20. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  1. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
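
    The core SOFI computation described above reduces, at second order, to temporal auto- and cross-cumulants of the pixel time traces. The sketch below illustrates only that idea; the synthetic stacks, array sizes and the restriction to second order are assumptions for illustration, not the authors' processing pipeline.

```python
# Illustrative sketch of second-order SOFI cumulants on an image stack of
# blinking emitters. The stacks below are synthetic placeholders.
import numpy as np

def sofi2(stack):
    """Second-order auto-cumulant (temporal variance) from a (frames, H, W) stack."""
    return np.var(stack, axis=0)

def sofi2_cross(stack_a, stack_b):
    """Second-order cross-cumulant between two simultaneously acquired focal planes."""
    da = stack_a - stack_a.mean(axis=0)
    db = stack_b - stack_b.mean(axis=0)
    return (da * db).mean(axis=0)

plane1 = np.random.poisson(5.0, size=(200, 64, 64)).astype(float)
plane2 = np.random.poisson(5.0, size=(200, 64, 64)).astype(float)
print(sofi2(plane1).shape, sofi2_cross(plane1, plane2).shape)  # (64, 64) each
```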

  2. 3D Gravity Inversion using Tikhonov Regularization

    Toushmalani Reza

    2015-08-01

    Full Text Available Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for a gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of the salt dome in this region.
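
    As a rough illustration of the Tikhonov-regularized inversion mentioned above, the sketch below solves the damped least-squares problem min_m ||Gm - d||² + λ²||m||² for a synthetic kernel and data; the kernel, noise level and regularization parameter are placeholders, not the values used by the authors.

```python
# Minimal sketch of Tikhonov-regularized linear inversion with synthetic data.
import numpy as np

def tikhonov_inversion(G, d, lam):
    """Solve (G^T G + lam^2 I) m = G^T d for the regularized model estimate."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 30))                    # hypothetical sensitivity kernel
m_true = rng.normal(size=30)                     # hypothetical density model
d = G @ m_true + 0.05 * rng.normal(size=50)      # noisy synthetic gravity data
m_est = tikhonov_inversion(G, d, lam=0.5)
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```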

  3. 3D Video Compression and Transmission

    Zamarin, Marco; Forchhammer, Søren

    In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced as well...

  4. Improving depth maps with limited user input

    Vandewalle, Patrick; Klein Gunnewiek, René; Varekamp, Chris

    2010-02-01

    A vastly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear assisted viewing in a well defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we will elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.
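
    A minimal sketch of how the image-plus-depth format can be rendered is given below: each pixel is shifted by a disparity derived from its depth value, and the holes left behind are exactly the regions an occlusion layer would fill. The depth convention (larger value = nearer) and the disparity range are assumptions for illustration only.

```python
# Sketch of depth-image-based rendering from an image-plus-depth frame.
import numpy as np

def render_view(image, depth, max_disparity=20):
    """Warp an (H, W, 3) image using an (H, W) depth map in [0, 1] (1 = nearest)."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -1.0)                     # keep the nearest contribution
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    holes = zbuf < 0                                  # disocclusions to be filled
    return out, holes
```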

  5. Direct 3D Painting with a Metaball-Based Paintbrush

    WAN Huagen; JIN Xiaogang; BAO Hujun

    2000-01-01

    This paper presents a direct 3D painting algorithm for polygonal models in 3D object-space with a metaball-based paintbrush in a virtual environment. The user is allowed to directly manipulate the parameters used to shade the surface of the 3D shape by applying the pigment to its surface with direct 3D manipulation through a 3D flying mouse.

  6. Technical illustration based on 3D CSG models

    GENG Wei-dong; DING Lei; YU Hong-feng; PAN Yun-he

    2005-01-01

    This paper presents an automatic non-photorealistic rendering approach to generating technical illustrations from 3D models. It first decomposes the 3D object into a set of CSG primitives, and then performs hidden surface removal based on a prioritized list, in which the rendering order of the CSG primitives is sorted by depth. Then, each primitive is illustrated by a pre-defined empirical lighting model, and the system mimics stroke-drawing in a user-specified style. In order to modulate the illumination artistically and flexibly, the empirical lighting model is defined by three major components: parameters of multi-level lighting intensities, parametric spatial occupations for each lighting level, and an interpolation method that calculates the lighting units over the spatial occupation of the CSG primitives, instead of "pixel-by-pixel" painting. This region-by-region shading facilitates the simulation of illustration styles.

  7. RAG-3D: a search tool for RNA 3D substructures.

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-10-30

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D-a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool-designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  8. 3D Visualization Development of SIUE Campus

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of 3D campus for the Southern Illinois University Edwardsville is demonstrated.

  9. Matching Feature Points in 3D World

    Avdiu, Blerta

    2012-01-01

    This thesis work deals with the most actual topic in Computer Vision field which is scene understanding and this using matching of 3D feature point images. The objective is to make use of Saab’s latest breakthrough in extraction of 3D feature points, to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description and matching. The work continues with a brief description of the simu...

  10. Robot Arms with 3D Vision Capabilities

    Borangiu, Theodor; Alexandru DUMITRACHE

    2010-01-01

    This chapter presented two applications of 3D vision in industrial robotics. The first one allows 3D reconstruction of decorative objects using a laser-based profile scanner mounted on a 6-DOF industrial robot arm, while the scanned part is placed on a rotary table. The second application uses the same profile scanner for 3D robot guidance along a complex path, which is learned automatically using the laser sensor and then followed using a physical tool. While the laser sensor is an expensive...

  11. Computer Modelling of 3D Geological Surface

    Kodge B. G.

    2011-02-01

    Full Text Available Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geographical subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of the Latur district in the Maharashtra state of India. The study proceeds through several processes, discussed in this paper, to generate and visualize an automated 3D geological surface model of the projected area.

  12. Computer Modelling of 3D Geological Surface

    Kodge, B G

    2011-01-01

    Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geographical subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of the Latur district in the Maharashtra state of India. The study proceeds through several processes, discussed in this paper, to generate and visualize an automated 3D geological surface model of the projected area.

  13. Measuring Visual Closeness of 3-D Models

    Morales, Jose A.

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
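
    The weighted-metric idea can be sketched as follows: sample points with normals from both models, find nearest neighbours with a 3-D search structure, and mix positional distance with normal difference through a weight. The specific weighting, the sampling, and the parameter alpha below are assumptions for illustration, not the metric actually defined in the paper.

```python
# Hedged sketch of a weighted "visual closeness" measure between two sampled surfaces.
import numpy as np
from scipy.spatial import cKDTree

def visual_closeness(points_a, normals_a, points_b, normals_b, alpha=0.5):
    """Return (max, mean) one-sided distances from model A to model B."""
    tree = cKDTree(points_b)                          # 3-D correspondence search
    dist, idx = tree.query(points_a)
    normal_diff = np.linalg.norm(normals_a - normals_b[idx], axis=1)
    combined = (1.0 - alpha) * dist + alpha * normal_diff
    return combined.max(), combined.mean()            # Hausdorff-like and mean values
```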

  14. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume.

    Pizlo, Zygmunt; Sawada, Tadamasa; Li, Yunfeng; Kropatsch, Walter G; Steinman, Robert M

    2010-01-01

    This paper reviews recent progress towards understanding 3D shape perception made possible by appreciating the significant role that veridicality and complexity play in the natural visual environment. The ability to see objects as they really are "out there" is derived from the complexity inherent in the 3D object's shape. The importance of both veridicality and complexity was ignored in most prior research. Appreciating their importance made it possible to devise a computational model that recovers the 3D shape of an object from only one of its 2D images. This model uses a simplicity principle consisting of only four a priori constraints representing properties of 3D shapes, primarily their symmetry and volume. The model recovers 3D shapes from a single 2D image as well as, and sometimes even better than, a human being. In the rare recoveries in which errors are observed, the errors made by the model and human subjects are very similar. The model makes no use of depth, surfaces or learning. Recent elaborations of this model include: (i) the recovery of the shapes of natural objects, including human and animal bodies with limbs in varying positions; (ii) providing the model with two input images, which allowed it to achieve virtually perfect shape constancy from almost all viewing directions. The review concludes with a comparison of some of the highlights of our novel, successful approach to the recovery of 3D shape from a 2D image with prior, less successful approaches. PMID:19800910

  15. 3D printing: technology and processing

    Kurinov, Ilya

    2016-01-01

    The objective of the research was to improve the process of 3D printing on the laboratory machine. In the study processes of designing, printing and post-print-ing treatment were improved. The study was commissioned by Mikko Ruotsalainen, head of the laboratory. The data was collected during the test work. All the basic information about 3D printing was taken from the Internet or library. As the results of the project higher model accuracy, solutions for post-printing treatment, printin...

  16. 3D monitor

    Szkandera, Jan

    2009-01-01

    This bachelor's thesis deals with the design and implementation of a system that allows a scene displayed on a flat screen to be perceived spatially. Spatial perception of the 2D image information is enabled both by stereo projection and by changing the displayed image according to the observer's position. This work deals mainly with the second of these problems.

  17. Stereo vision calibration procedure for 3D surface measurements

    Vilaça, João L.; Fonseca, Jaime C.; Pinho, A. C. Marques de

    2006-01-01

    In reverse engineering, rapid prototyping or quality control with complex 3D object surfaces, there is often the need to scan a complete 3D model using laser digitizers. Those systems usually use one camera and one laser; using triangulation techniques, complex 3D objects can cause information gaps in the obtained model. To overcome this problem, another camera can be used. Traditional calibration procedures for those systems normally result in a full 3D camera calibration, involving indi...

  18. VIRTUAL 3D CITY MODELING: TECHNIQUES AND APPLICATIONS

    S. P. Singh; K. Jain; V. R. Mandla

    2013-01-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other manmade features belonging to an urban area. There are various terms used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally three main Geomatics approaches ...

  19. X3D: Extensible 3D Graphics Standard

    Daly, Leonard; Brutzman, Don

    2007-01-01

    The article of record as published may be located at http://dx.doi.org/10.1109/MSP.2007.905889 Extensible 3D (X3D) is the open standard for Web-delivered three-dimensional (3D) graphics. It specifies a declarative geometry definition language, a run-time engine, and an application program interface (API) that provide an interactive, animated, real-time environment for 3D graphics. The X3D specification documents are freely available, the standard can be used without paying any royalties,...

  20. 3D game environments create professional 3D game worlds

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  1. X3d2pov. Traductor of X3D to POV-Ray

    Andrea Castellanos Mendoza

    2011-01-01

    Full Text Available High-quality and low-quality interactive graphics represent two different approaches to computer graphics' 3D object representation. The former is mainly used to produce high computational cost movie animation. The latter is used for producing interactive scenes as part of virtual reality environments. Many file format specifications have appeared to satisfy underlying model needs; POV-Ray (persistence of vision) is an open source specification for rendering photorealistic images with the ray-tracing algorithm, and X3D (extensible 3D) is the VRML successor standard for producing web virtual-reality environments written in XML. X3D2POV has been introduced to render high-quality images from an X3D scene specification; it is a grammar translator tool from X3D code to POV-Ray code.

  2. 3D Reconstruction by Kinect Sensor:A Brief Review

    LI Shi-rui; TAO Ke-lu; WANG Si-yuan; LI Hai-yang; CAO Wei-guo; LI Hua

    2014-01-01

    While the Kinect was originally designed as a motion-sensing input device for the Microsoft Xbox 360 gaming console, its ease of use, low cost, reliability, and the speed and relatively high quality of its depth measurements make it usable for 3D reconstruction. It could make 3D scanning technology more accessible to everyday users and turn 3D reconstruction models into a much more widely used asset for many applications. In this paper, we focus on Kinect 3D reconstruction.

  3. 3D Printing an Octohedron

    Aboufadel, Edward F.

    2014-01-01

    The purpose of this short paper is to describe a project to manufacture a regular octohedron on a 3D printer. We assume that the reader is familiar with the basics of 3D printing. In the project, we use fundamental ideas to calculate the vertices and faces of an octohedron. Then, we utilize the OPENSCAD program to create a virtual 3D model and an STereoLithography (.stl) file that can be used by a 3D printer.
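
    In the same spirit as the project described above, the sketch below computes the six vertices and eight faces of a regular octahedron and writes them to an ASCII STL file; the file name, scale, and the use of Python instead of OPENSCAD are illustrative choices, not the authors' workflow.

```python
# Generate a unit regular octahedron and export it as ASCII STL.
import numpy as np

vertices = np.array([[ 1, 0, 0], [-1, 0, 0],
                     [ 0, 1, 0], [ 0, -1, 0],
                     [ 0, 0, 1], [ 0, 0, -1]], dtype=float)
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),    # upper four triangles
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]    # lower four triangles

with open("octahedron.stl", "w") as f:
    f.write("solid octahedron\n")
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)
        n /= np.linalg.norm(n)                           # outward facet normal
        f.write(f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}\n")
        f.write("    outer loop\n")
        for v in (a, b, c):
            f.write(f"      vertex {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid octahedron\n")
```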

  4. 3D modelling and recognition

    Rodrigues, Marcos; Robinson, Alan; Alboul, Lyuba; Brink, Willie

    2006-01-01

    3D face recognition is an open field. In this paper we present a method for 3D facial recognition based on Principal Components Analysis. The method uses a relatively large number of facial measurements and ratios and yields reliable recognition. We also highlight our approach to sensor development for fast 3D model acquisition and automatic facial feature extraction.

  5. 3D Human cartilage surface characterization by optical coherence tomography

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D

  6. Constraints on Moho Depth and Crustal Thickness in the Liguro-Provençal Basin from a 3d Gravity Inversion : Geodynamic Implications Contraintes sur la profondeur du moho et l'épaisseur crustale dans le bassin liguro-provençal à partir de l'inversion 3D de données gravimétriques : implications géodynamiques

    Gaulier J. M.

    2006-12-01

    Full Text Available 3D gravity modelling is combined with seismic refraction and reflection data to constrain a new Moho depth map in the Liguro-Provençal Basin (Western Mediterranean Sea). At seismically controlled points, the misfit between the gravimetric solution and the seismic data is about 2 km for a range of Moho depths between 12 km (deep basin) and 30 km (mainlands). The oceanic crust thickness in the deep basin (5 km) is smaller than the average oceanic crust thickness reported in open oceans (7 km), pointing to a potential mantle temperature 30°C to 50°C below normal and/or a very slow oceanic spreading rate. Oceanic crust thickness decreases towards the Ligurian Sea and towards the continent-ocean boundary to values as small as 2 km. Poor magma supply is a result of low potential mantle temperature at depth, lateral thermal conduction towards the unextended continental margin, and a decrease of the oceanic spreading rate close to the pole of opening in the Ligurian Sea. Re-examination of magnetic data (paleomagnetic data and magnetic lineations) indicates that opening of the Liguro-Provençal Basin may have ceased as late as the Late Burdigalian (16.5 Ma) or even later. The absence of a significant time gap between cessation of opening in the Liguro-Provençal Basin and rifting of the Tyrrhenian domain favours a continuous extension mechanism since the Upper Oligocene, driven by the African trench retreat. This report presents joint work with the Geodynamics Laboratory of the École Normale Supérieure (ENS). The work should be placed in its context: the regional study of the Gulf of Lion was made possible within the framework of the European project Integrated Basin Studies. The development of the 3D inversion code had been the subject of agreements with the ENS during the preceding years. Carrying out such an inversion is now possible at IFP. There is no interface for this computation code. The help of colleagues at the ENS is desirable for the

  7. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
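
    The "mapping algorithm" idea, remapping scene depth onto a bounded perceived-depth budget of the display, can be sketched as a simple linear transfer function. The comfort limits below are illustrative assumptions, not the values used in the study; a dynamic variant would recompute the scene depth range per shot instead of fixing it for the whole sequence.

```python
# Sketch of mapping scene depth onto a display's comfortable depth range.
def map_depth(scene_depth, scene_near, scene_far,
              display_near=-0.05, display_far=0.05):
    """Linearly remap a scene depth value into the display depth budget."""
    t = (scene_depth - scene_near) / (scene_far - scene_near)   # normalize to [0, 1]
    return display_near + t * (display_far - display_near)

# Fixed mapping uses one (scene_near, scene_far) pair for the whole sequence;
# a dynamic mapping would update the pair per shot or smoothly over time.
print(map_depth(3.0, scene_near=1.0, scene_far=10.0))
```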

  8. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method regarding the 3D spatial resolution of digital holography. PMID:27139648

  9. 3-D contextual Bayesian classifiers

    Larsen, Rasmus

    distribution for the pixel values as well as a prior distribution for the configuration of class variables within the cross that is made of a pixel and its four nearest neighbours. We will extend these algorithms to 3-D, i.e. we will specify a simultaneous Gaussian distribution for a pixel and its 6 nearest 3-D neighbours, and generalise the class variable configuration distributions within the 3-D cross given in 2-D algorithms. The new 3-D algorithms are tested on a synthetic 3-D multivariate dataset.

  10. Taming Supersymmetric Defects in 3d-3d Correspondence

    Gang, Dongmin; Romo, Mauricio; Yamazaki, Masahito

    2015-01-01

    We study knots in 3d Chern-Simons theory with complex gauge group $SL(N,\mathbb{C})$, in the context of its relation with 3d $\mathcal{N}=2$ theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d $(2,0)$ theory, which is compactified on a 3-manifold $\hat{M}$. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d $SL(N,\mathbb{C})$ Chern-Simons theory, in 3d $\mathcal{N}=2$ theory, in 5d $\mathcal{N}=2$ super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions at each of these theories. This Letter is a companion to a longer paper, which contains more details and more results.

  11. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and reconstruction of objects' surfaces. The main advantages of the method are: large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. Then the surface is evolved by the geometric heat differential equation toward the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability (convergence) condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
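
    The paper evolves a 3D surface under the geometric heat equation with a forward-difference scheme; the sketch below shows the analogous explicit update for the plain 2D heat equation u_t = Δu, to make the stencil and its stability constraint concrete. The grid size, time step and 2D simplification are assumptions, not the authors' Mathematica implementation.

```python
# Explicit forward-difference step for the 2D heat equation (periodic boundaries).
import numpy as np

def heat_step(u, dt=0.2, dx=1.0):
    """One explicit time step; stable for dt <= dx**2 / 4 in 2D."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * lap

u = np.zeros((64, 64))
u[24:40, 24:40] = 1.0              # hypothetical region to diffuse/smooth
for _ in range(100):
    u = heat_step(u)
```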

  12. Real time 3D scanner: investigations and results

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel interference optical fringes onto an object and then to record the object from two angles of view. With appropriate processing, one reconstructs the 3-D object even when the object has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation and research purposes.

  13. 3DSEM: A 3D microscopy dataset.

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  14. 3D-printed bioanalytical devices

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  15. Mobile 3D Viewer Supporting RFID System

    Kim, J. J.; Yang, S. W.; Choi, Y. [Chungang Univ., Seoul (Korea, Republic of)

    2007-07-01

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas.

  16. Mobile 3D Viewer Supporting RFID System

    As hardware capabilities of mobile devices are being rapidly enhanced, applications based upon mobile devices are also being developed in wider areas. In this paper, a prototype mobile 3D viewer with the object identification through RFID system is presented. To visualize 3D engineering data such as CAD data, we need a process to compute triangulated data from boundary based surface like B-rep solid or trimmed surfaces. Since existing rendering engines on mobile devices do not provide triangulation capability, mobile 3D programs have focused only on an efficient handling with pre-tessellated geometry. We have developed a light and fast triangulation process based on constrained Delaunay triangulation suitable for mobile devices in the previous research. This triangulation software is used as a core for the mobile 3D viewer on a PDA with RFID system that may have potentially wide applications in many areas

  17. 3D-printed bioanalytical devices.

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  18. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Cristina Losada

    2010-04-01

    Full Text Available This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space. The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  19. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  20. Deep Learning Representation using Autoencoder for 3D Shape Retrieval

    Zhu, Zhuotun; Wang, Xinggang; Bai, Song; Yao, Cong; Bai, Xiang

    2014-01-01

    We study the problem of how to build a deep learning representation for 3D shape. Deep learning has shown to be very effective in variety of visual applications, such as image classification and object detection. However, it has not been successfully applied to 3D shape recognition. This is because 3D shape has complex structure in 3D space and there are limited number of 3D shapes for feature learning. To address these problems, we project 3D shapes into 2D space and use autoencoder for feat...
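
    The retrieval idea, projecting 3D shapes to fixed-size 2D views and compressing them with an autoencoder whose code is used for matching, can be sketched as below (PyTorch assumed); the layer sizes, the random placeholder views and the training loop are illustrative, not the network proposed in the paper.

```python
# Toy autoencoder over projected 2D views of 3D shapes; codes serve as descriptors.
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    def __init__(self, n_pixels=32 * 32, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = ShapeAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
views = torch.rand(128, 32 * 32)            # placeholder for projected shape views
for _ in range(10):
    recon, _ = model(views)
    loss = nn.functional.mse_loss(recon, views)
    opt.zero_grad(); loss.backward(); opt.step()
# At retrieval time, shapes are ranked by distances between their learned codes.
```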

  1. Infra Red 3D Computer Mouse

    Harbo, Anders La-Cour; Stoustrup, Jakob

    2000-01-01

    The infra red 3D mouse is a three dimensional input device to a computer. It works by determining the position of an arbitrary object (like a hand) by emitting infra red signals from a number of locations and measuring the reflected intensities. To maximize stability, robustness, and use of bandwidth...

  2. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of 3D camera and multiple epipolar constraints. We assume that baselines between lenses in 3D camera and intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of 3D camera, are robustly tracked via Kanade-Lukas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from Fundamental matrix calculated using normalized 8-point algorithm with RANSAC scheme. Then, we determine scale factor of translation matrix by d-motion. This is required because the camera motion obtained from Essential matrix is up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D camera, and fine surveillance systems which not only need depth information, but also camera motion parameters in real-time.
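
    The pipeline above (feature tracking, essential-matrix estimation with RANSAC, pose recovery up to scale) maps closely onto standard OpenCV primitives; the sketch below uses them as a rough illustration, with placeholder images and intrinsics, and is not the authors' d-motion or multi-epipolar optimization.

```python
# Sketch of frame-to-frame pose recovery with KLT tracking and the essential matrix.
import cv2
import numpy as np

def track_pose(prev_gray, curr_gray, K):
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    E, mask = cv2.findEssentialMat(good_prev, good_curr, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # R, t are defined only up to scale; the paper fixes scale from disparity images.
    _, R, t, _ = cv2.recoverPose(E, good_prev, good_curr, K, mask=mask)
    return R, t
```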

  3. Surface 3-D reflection seismics - implementation at the Olkiluoto site

    Posiva Oy takes care of the final disposal of spent nuclear fuel in Finland. In 2001 Olkiluoto was selected as the site for final disposal. Construction of the underground research facility, ONKALO, is ongoing at the Olkiluoto site. The aim of this work was to study the possibilities for surface 3-D seismics and to review experiences for design before field work. The physical parameters and geometric properties of the site, as well as efficient survey layout and source arrangements, were considered in this work. Reflection seismics is the most widely used geophysical investigation method in oil exploration and earth studies in sedimentary environments. Recently the method has also been applied in crystalline bedrock for ore exploration and nuclear waste disposal site investigations. The advantage of the method is high accuracy combined with a large depth of investigation. The principles of seismic 2-D and 3-D soundings are well known and advanced. 3-D sounding is a straightforward expansion of 2-D line-based surveying. In the investigation of crystalline bedrock, high-frequency wave sources and receivers, their correct use in measurements, and a careful processing procedure (refraction static corrections in particular) are important. Using the site parameters in 2-D numerical modeling, two cases of a faulted thin layer at depths of 200, 400 and 600 meters were studied. The first case was a layer with a vertical dislocation (a ramp) and the other a layer whose dislocated part has limited width. Central frequencies were 100, 200, 400 and 700 Hz. Results indicate that a 10 - 20 m dislocation is recognizable, but for depths greater than 600 m, over 20 meters is required. The width of the dislocated part affects the detectability of the vertical displacement. At depths of 200 m and 400 m, 10 - 50 m wide parts appear as point-like scatterers, while wider areas show more continuity. Dislocations larger than 20 m can be seen. From a depth of 600 m, over 100 m wide parts are discernible; narrower ones are visible

  4. Application Experience of 3D Animation Design in College Students' Practice Innovation Training Projects: Taking the 3D Animation Demonstration Project of the "Dafu" Membrane Wastewater Advanced Treatment Process as an Example

    杨恒; 陈仲先

    2014-01-01

    College students' innovation training projects provide a platform for cultivating students' creative ability and comprehensively improving their overall quality. Drawing on the author's practical experience, as a teacher of 3D animation, in supervising student innovation projects, and taking the 3D animation demonstration project of the "Dafu" membrane wastewater advanced treatment process as an example, this paper presents the project's research objectives, research process, research results and lessons learned, as well as its innovative points and characteristics, in the hope of providing useful help for the continuous improvement and smooth implementation of future student innovation projects.

  5. Charge collection characterization of a 3D silicon radiation detector by using 3D simulations

    Kalliopuska, J; Orava, R

    2007-01-01

    In 3D detectors, the electrodes are processed within the bulk of the sensor material. Therefore, the signal charge is collected independently of the wafer thickness and the collection process is faster due to shorter distances between the charge collection electrodes as compared to a planar detector structure. In this paper, 3D simulations are used to assess the performance of a 3D detector structure in terms of charge sharing, efficiency and speed of charge collection, surface charge, location of the primary interaction and the bias voltage. The measured current pulse is proposed to be delayed due to the resistance–capacitance (RC) product induced by the variation of the serial resistance of the pixel electrode depending on the depth of the primary interaction. Extensive simulations are carried out to characterize the 3D detector structures and to verify the proposed explanation for the delay of the current pulse. A method for testing the hypothesis experimentally is suggested.

  6. Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    Niclass, Cristiano; Rochas, Alexis; Besse, Pierre-André; Charbon, Edoardo

    2005-01-01

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment are needed. Millimetric depth accuracies can b...
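
    Each rangefinding pixel converts a measured time of flight into depth as half the round-trip distance travelled by the light pulse; a tiny numerical illustration (with a made-up round-trip time) is given below.

```python
# Time-of-flight to depth: depth = c * t_round_trip / 2.
C = 299_792_458.0                       # speed of light in m/s

def tof_to_depth(round_trip_time_s):
    return 0.5 * C * round_trip_time_s

print(tof_to_depth(20e-9))              # ~3 m for a 20 ns round trip
```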

  7. STAR3D: a stack-based RNA 3D structural alignment tool.

    Ge, Ping; Zhang, Shaojie

    2015-11-16

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  8. Multiple footprint stereo algorithms for 3D display content generation

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
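
    The "multiple footprint" idea, several matching windows each proposing a depth candidate per pixel, can be sketched with simple SAD block matching; the window sizes, disparity range and naive per-pixel loop below are illustrative assumptions, and the surface filtering stage described above is omitted.

```python
# Sketch of per-pixel disparity candidates from several matching window sizes.
import numpy as np

def disparity_candidates(left, right, x, y, max_disp=32, windows=(3, 7, 15)):
    """Return {window_size: best_disparity} for one pixel of the left image."""
    candidates = {}
    for w in windows:
        r = w // 2
        patch = left[y - r:y + r + 1, x - r:x + r + 1]
        costs = []
        for d in range(max_disp):
            if x - d - r < 0:                           # stay inside the right image
                break
            ref = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
            costs.append(np.sum(np.abs(patch - ref)))   # SAD matching cost
        if costs:
            candidates[w] = int(np.argmin(costs))
    return candidates
```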

  9. 3D Printing Functional Nanocomposites

    Leong, Yew Juan

    2016-01-01

    3D printing presents the ability of rapid prototyping and rapid manufacturing. Techniques such as stereolithography (SLA) and fused deposition molding (FDM) have been developed and utilized since the inception of 3D printing. In such techniques, polymers represent the most commonly used material for 3D printing due to material properties such as thermo plasticity as well as its ability to be polymerized from monomers. Polymer nanocomposites are polymers with nanomaterials composited into the ...

  10. PRODUCTION WITH 3D PRINTERS IN TEXTILES [REVIEW

    KESKIN Reyhan; GOCEK Ikilem

    2015-01-01

    3D printers are gaining more attention and finding new applications, and 3D printing is being regarded as a 'revolution' of the 2010s for production. 3D printing is a production method that builds three-dimensional objects by combining very thin layers one over another, from digital models obtained with 3D scanners or created in software, either proprietary or open source. 3D printed materials find application in a large range of fields including aerospace, automotive, medicine and materials science. There are severa...

  11. 3D Scanning With a Mobile Phone and Other Methods

    Eklund, Andreas

    2016-01-01

    The aim of this thesis was to use a mobile phone for 3D scanning using an application called 123D Catch. Other 3D scanning methods were used to compare different types of 3D scanning. Common 3D scanning methods available and their uses are presented in this work. A professional 3D scanner was used to get precise scan data on an object which was then used as reference for the lower tech methods. Scanning with a mobile phone means taking 2D photographs of an object from different angles. T...

  12. Are 3-D Movies Bad for Your Eyes?

    Full Text Available ... be concerned that 3-D movies, TV or video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, which may indicate that the viewer has a problem with focusing or depth ...

  13. Are 3-D Movies Bad for Your Eyes?

    Full Text Available ... be concerned that 3-D movies, TV or video games will damage the eyes or visual system. Some people complain of headaches or motion sickness when viewing 3-D, which may indicate that the viewer has a problem with focusing or depth perception. Also, the techniques ...

  14. Binary pattern analysis for 3D facial action unit detection

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied, a

  15. 3D Elevation Program—Virtual USA in 3D

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  16. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  17. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  18. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals' - multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  19. Electrospun 3D Fibrous Scaffolds for Chronic Wound Repair

    Huizhi Chen

    2016-04-01

    Full Text Available Chronic wounds are difficult to heal spontaneously, largely due to the corrupted extracellular matrix (ECM) where cell ingrowth is obstructed. Thus, the objective of this study was to develop a three-dimensional (3D) biodegradable scaffold mimicking the native ECM to replace the missing or dysfunctional ECM, which may be an essential strategy for wound healing. The 3D fibrous scaffolds of poly(lactic acid-co-glycolic acid) (PLGA) were successfully fabricated by liquid-collecting electrospinning, with 5~20 µm interconnected pores. Surface modification with the native ECM component aims at providing biological recognition for cell growth. Human dermal fibroblasts (HDFs) successfully infiltrated into the scaffolds to a depth of ~1400 µm after seven days of culturing, and showed significant progressive proliferation on scaffolds immobilized with collagen type I. In vivo models showed that chronic wounds treated with the scaffolds had a faster healing rate. These results indicate that the 3D fibrous scaffolds may be a potential wound dressing for chronic wound repair.

  20. 3D for Graphic Designers

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever.2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani