WorldWideScience

Sample records for 3d object depth

  1. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    Science.gov (United States)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
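The mean-depth compensation idea described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the SAD cost measure are assumptions, not the actual JMVC 8.2 implementation:

```python
import numpy as np

def mean_depth_compensated_sad(cur_block, ref_block):
    """SAD cost after compensating the mean-depth difference between the
    current block and the reference block (illustrative sketch only; the
    name and cost measure are assumptions, not JMVC 8.2 code)."""
    offset = cur_block.mean() - ref_block.mean()    # object-adaptive mean-depth difference
    compensated = ref_block + offset                # depth-compensated predictor
    return float(np.abs(cur_block - compensated).sum())

cur = np.full((4, 4), 120.0)                        # current depth block
ref = np.full((4, 4), 100.0)                        # reference block, offset in depth
print(mean_depth_compensated_sad(cur, ref))         # -> 0.0 (a pure offset is fully compensated)
```

A plain SAD between these two blocks would be 320, so compensating the mean-depth difference lets the encoder pick this reference at near-zero residual cost.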

  2. Combining depth and gray images for fast 3D object recognition

    Science.gov (United States)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks; recognition of the object and its precise 6D pose are required. This paper addresses the challenge of detecting and positioning a textureless known object by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed that can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-of-Flight (TOF) and RGB, to segment the scene and extract objects, combining the depth image and the gray image to recognize instances of a 3D object in the world and estimate their 3D poses. The full pose estimation process is based on depth-image segmentation and efficient shape-based matching. First, the depth image is used to separate the supporting plane of the objects from the cluttered background; thus cluttered backgrounds are circumvented and the search space is greatly reduced. A hierarchical model, based on the geometry of an a priori CAD model of the object, is generated in an offline stage. Then, using the hierarchical model, we perform shape-based matching in the 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that utilizing depth and gray images together can meet the demands of a time-critical application and reduce the error rate of object recognition significantly.
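The supporting-plane separation step can be illustrated with a minimal plane fit. This sketch uses a plain least-squares fit for brevity; a real system would use a robust estimator such as RANSAC so the object points cannot bias the plane:

```python
import numpy as np

def remove_support_plane(points, dist_thresh=0.1):
    """Fit a plane z = a*x + b*y + c to the cloud by least squares, then
    drop points near it, keeping only object points (illustrative sketch;
    a robust RANSAC fit would be used in practice)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    dist = np.abs(A @ coeffs - points[:, 2])        # point-to-plane residuals
    return points[dist > dist_thresh]

# a flat 10x10 table at z=0 plus four object points 0.5 m above it
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
table = np.c_[xs.ravel(), ys.ravel(), np.zeros(100)]
obj = np.array([[0.25, 0.25, 0.5], [0.75, 0.25, 0.5],
                [0.25, 0.75, 0.5], [0.75, 0.75, 0.5]])
pts = np.vstack([table, obj])
print(len(remove_support_plane(pts)))  # -> 4 (only the object points survive)
```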

  3. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    Directory of Open Access Journals (Sweden)

    Ján VACHÁLEK

    2015-11-01

The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis is placed on the segmentation of a depth image and noise filtration. MS Kinect was used to evaluate the potential of object localization using the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object localization in its field of view. In our case, balls of a fixed diameter were used as the objects for 3D localization.

  4. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    OpenAIRE

    Ján VACHÁLEK; Marian GÉCI; Oliver ROVNÝ; Tomáš VOLENSKÝ

    2015-01-01

The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis is placed on the segmentation of a depth image and noise filtration. MS Kinect was used to evaluate the potential of object localization using the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object localization in its field of view. In our case, balls of a fixed diameter were used as the objects for 3D localization.

  5. Real-time depth map manipulation for 3D visualization

    Science.gov (United States)

    Ideses, Ianir; Fishbain, Barak; Yaroslavsky, Leonid

    2009-02-01

One of the key aspects of 3D visualization is the computation of depth maps. Depth maps enable synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct its artificial stereo pair. There are many known methods for computing the optical flow between adjacent video frames. The drawback of these methods is that they require extensive computation power and are not very well suited to high-quality real-time 3D rendering. One efficient method for computing depth maps is extraction of motion vector information from standard video encoders. In this paper we present methods to improve the quality of 3D visualization acquired from compression codecs by spatial/temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatio-temporal local order statistics, such as the median, and of local adaptive filtering in the 3D-DCT domain can substantially improve the quality of depth maps, and consequently of the 3D video, while retaining real-time rendering. Real-time performance is achieved by utilizing multi-core technology using standard parallelization algorithms and libraries (OpenMP, IPP).
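The spatio-temporal median filtering of depth maps mentioned above can be sketched directly. This is an illustrative brute-force version, not the authors' 3D-DCT-domain implementation:

```python
import numpy as np

def median_filter_depth(frames, t):
    """3x3x3 spatio-temporal median over a list of 2D depth maps, evaluated
    for frame t (illustrative sketch of local order-statistics filtering,
    not the authors' optimized 3D-DCT-domain implementation)."""
    stack = np.stack(frames[max(t - 1, 0):t + 2])            # temporal window
    pad = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode='edge')
    h, w = frames[t].shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(pad[:, y:y + 3, x:x + 3])  # local 3D median
    return out

noisy = [np.zeros((5, 5)) for _ in range(3)]
noisy[1][2, 2] = 100.0                       # single-pixel depth outlier
clean = median_filter_depth(noisy, 1)
print(clean[2, 2])                           # -> 0.0 (outlier removed by the median)
```

The median rejects isolated depth outliers without blurring large flat regions, which is why order statistics work well on blocky, noisy depth maps derived from motion vectors.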

  6. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging research topic.

  7. Advanced 3D Object Identification System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optra will build an Advanced 3D Object Identification System utilizing three or more high resolution imagers spaced around a launch platform. Data from each imager...

  8. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  9. Algorithms for 3D shape scanning with a depth camera.

    Science.gov (United States)

    Cui, Yan; Schuon, Sebastian; Thrun, Sebastian; Stricker, Didier; Theobalt, Christian

    2013-05-01

We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a Time-of-Flight (ToF) camera. These ToF cameras can measure depth scans at video rate. Due to their comparably simple technology, they bear potential for economical production in large volumes. Our easy-to-use, cost-effective scanning solution, which is based on such a sensor, could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can nonetheless be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.

  10. 3D-PRINTING OF BUILD OBJECTS

    Directory of Open Access Journals (Sweden)

    SAVYTSKYI M. V.

    2016-03-01

Raising of the problem. Today, in all spheres of our life we can observe a permanent search for new, modern methods and technologies that meet the principles of sustainable development. New approaches need to be, on the one hand, more effective in terms of conserving the exhaustible resources of our planet and have minimal impact on the environment, and on the other hand, they must ensure a higher quality of the final product. Construction is no exception. One promising new technology is the 3D printing of individual structures and of buildings in general. 3D printing is the process of recreating a real object from a 3D model. Unlike a conventional printer, which prints information on a sheet of paper, a 3D printer outputs three-dimensional information, i.e. it creates physical objects. Currently, 3D printers find application in many areas of production: machine-building elements, various layouts, interior elements, and other items. But because this technology is fairly new, it requires the development of detailed and accurate technologies, efficient equipment and materials, and a common vocabulary and regulatory framework in this field. Research aim. The analysis of existing methods of creating physical objects using 3D printing, and the improvement of technology and equipment for the printing of buildings and structures. Conclusion. Building 3D printers are a new generation of equipment for the construction of buildings, structures, and structural elements. The variety of building-printing techniques opens up a wide range of opportunities in the construction industry. At this stage, printer designs allow the creation of low-rise buildings of different configurations with different mortars. The scientific novelty of this work is the development of proposals to improve the thermal insulation properties of 3D-printed objects and of the technological equipment. A list of key terms and notions of construction 3D printing is also provided.

  11. Depth estimation from multiple coded apertures for 3D interaction

    Science.gov (United States)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras which consist of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of the modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by using the shifting and averaging approach for the captured coded images. And then, an initial depth map is obtained by applying a focus operator to a stack of the refocused images for each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system to capture the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer so as to act as a visible sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object including a human hand in front of the display by capturing multiple MURA coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
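The refocus-stack-plus-focus-operator pipeline described above can be sketched compactly. This is an illustrative toy (the sign convention, the Laplacian focus measure, and the 1/depth disparity scaling are assumptions, not the authors' exact pipeline):

```python
import numpy as np
from scipy.ndimage import shift, laplace

def depth_from_focus(views, offsets, depths):
    """Shift-and-average refocusing followed by a Laplacian focus operator:
    for each candidate depth, realign the views (disparity assumed to scale
    as offset/depth), average them, and keep the depth that maximises the
    per-pixel focus response. Illustrative sketch only."""
    best_focus, depth_map = None, None
    for d in depths:
        refocused = np.mean(
            [shift(v, -np.asarray(o) / d) for v, o in zip(views, offsets)], axis=0)
        focus = np.abs(laplace(refocused))           # per-pixel focus response
        if best_focus is None:
            best_focus = focus
            depth_map = np.full(focus.shape, d, float)
        else:
            better = focus > best_focus
            depth_map[better] = d
            best_focus = np.maximum(best_focus, focus)
    return depth_map

# a single bright point at true depth 2: the second view sees it displaced by offset/2
base = np.zeros((16, 16)); base[8, 8] = 1.0
views = [base, shift(base, (0, 2))]                  # disparity 4/2 = 2 pixels
dm = depth_from_focus(views, [(0, 0), (0, 4)], [1.0, 2.0, 4.0])
print(dm[8, 8])                                      # -> 2.0 (sharpest when refocused at d=2)
```

Only at the true depth do the shifted views align, so the averaged image is sharpest there and the focus operator peaks, which is the same principle the paper then refines with a parametric focus model.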

  12. Efficient streaming of stereoscopic depth-based 3D videos

    Science.gov (United States)

Temel, Dogancan; Aabed, Mohammed; Solh, Mashhour; AlRegib, Ghassan

    2013-02-01

In this paper, we propose a method to extract depth from motion, texture and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the colored reference video, using texture, motion, luminance and chrominance content, to extract the depth map. The processing of each channel in the YCrCb color space is conducted separately. We tested this approach on different video sequences with different monocular properties. The results of our simulations show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report objective results using 3VQM and subjective analysis via comparison of rendered images. Furthermore, we analyze the savings in bitrate resulting from eliminating the need for two video codecs, one for the reference color video and one for the depth map. In this case, only the depth cues are sent as side information alongside the color video.

  13. The Depth Map Construction from a 3D Point Cloud

    OpenAIRE

    Chmelar Pavel; Beran Ladislav; Rejfek Lubos

    2016-01-01

A depth map transforms 3D points into a 2D image and gives a different view of an observed scene. This paper deals with depth map construction. It describes the whole process of transforming any 3D point cloud into a 2D depth map. The described method uses 3D rotation matrices and the line equation. This process allows the desired view to be created from an arbitrary point and rotation in an exploration space. Using a depth map makes it possible to apply image processing methods on depth data to obtain additional information.
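The rotate-then-project construction of a depth map from a point cloud can be sketched as follows (a minimal pinhole-projection sketch; the camera model and parameter names are assumptions, not the paper's exact formulation):

```python
import numpy as np

def depth_map_from_cloud(points, R, t, f=100.0, size=(64, 64)):
    """Project a 3D point cloud into a 2D depth map seen from an arbitrary
    viewpoint (rotation R, translation t): transform each point into the
    camera frame, project it with a pinhole model, and keep the nearest
    depth per pixel. Illustrative sketch only."""
    cam = (points - t) @ R.T                      # transform into the camera frame
    h, w = size
    depth = np.full(size, np.inf)                 # 'no point' pixels stay at infinity
    for x, y, z in cam:
        if z <= 0:
            continue                              # point is behind the camera
        u = int(round(f * x / z + w / 2))         # pinhole projection
        v = int(round(f * y / z + h / 2))
        if 0 <= u < w and 0 <= v < h:
            depth[v, u] = min(depth[v, u], z)     # nearest point wins (occlusion)
    return depth

pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 3.0]])   # two points on the optical axis
dm = depth_map_from_cloud(pts, np.eye(3), np.zeros(3))
print(dm[32, 32])                                     # -> 3.0 (nearest depth wins)
```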

  14. Watermarking 3D Objects for Verification

    Science.gov (United States)

    1999-01-01

signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  15. Binding 3-D object perception in the human visual cortex.

    Science.gov (United States)

    Jiang, Yang; Boehler, C N; Nönnig, Nina; Düzel, Emrah; Hopf, Jens-Max; Heinze, Hans-Jochen; Schoenfeld, Mircea Ariel

    2008-04-01

How do visual luminance, shape, motion, and depth bind together in the brain to represent the coherent percept of a 3-D object within hundreds of milliseconds (msec)? We provide evidence from simultaneous magnetoencephalographic (MEG) and electroencephalographic (EEG) data that perception of 3-D objects defined by luminance or motion elicits sequential activity in human visual cortices within 500 msec. Following activation of the primary visual cortex around 100 msec, 3-D objects elicited sequential activity with only little overlap (dynamic 3-D shapes: MT-LO-Temp; stationary 3-D shapes: LO-Temp). A delay of 80 msec, both in MEG/EEG responses and in reaction times (RTs), was found when additional motion information was processed. We also found significant positive correlations between RT and MEG and EEG responses in the right temporal location. After about 400 msec, long-lasting activity was observed in the parietal cortex and concurrently in previously activated regions. Novel time-frequency analyses indicate that the activity in the lateral occipital (LO) complex is associated with an increase of induced power in the gamma band, a hallmark of binding. The close correspondence of an induced gamma response with concurrent sources located in the LO in both experimental conditions at different points in time (approximately 200 msec for luminance and approximately 300 msec for dynamic cues) strongly suggests that the LO is the key region for the assembly of object features. The assembly is fed forward to achieve coherent perception of a 3-D object within 500 msec.

  16. Parallel computing helps 3D depth imaging, processing

    Energy Technology Data Exchange (ETDEWEB)

    Nestvold, E. O. [IBM, Houston, TX (United States); Su, C. B. [IBM, Dallas, TX (United States); Black, J. L. [Landmark Graphics, Denver, CO (United States); Jack, I. G. [BP Exploration, London (United Kingdom)

    1996-10-28

    The significance of 3D seismic data in the petroleum industry during the past decade cannot be overstated. Having started as a technology too expensive to be utilized except by major oil companies, 3D technology is now routinely used by independent operators in the US and Canada. As with all emerging technologies, documentation of successes has been limited. There are some successes, however, that have been summarized in the literature in the recent past. Key technological developments contributing to this success have been major advances in RISC workstation technology, 3D depth imaging, and parallel computing. This article presents the basic concepts of parallel seismic computing, showing how it impacts both 3D depth imaging and more-conventional 3D seismic processing.

  17. Depth Map Calculation for Autostereoscopic 3D Display

    OpenAIRE

    IVANČÁK Peter; Hrozek, František

    2012-01-01

The creation of content for 3D displays is a very topical problem. This paper focuses on this problem and is divided into two parts. The first part presents various 3D displays and displaying technologies, especially stereoscopic displays: passive, active, and autostereoscopic. The second part presents an application that calculates a depth map from a stereoscopic image, developed at DCI FEEI TU of Košice (Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics, Technical Univ...

  18. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  19. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using a pinhole collimator geometry has been built and tested with small animals. The device is based on a depth-of-interaction measurement, using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small-animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{99m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  20. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

This thesis deals with 3D reconstruction from image data. Possibilities of and approaches to optical scanning are described. A handheld optical 3D scanner consists of a camera and a line-laser source mounted at a fixed angle with respect to the camera. A suitable pad with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, laser detection and the computation of points on the object's surface by triangulation are described.

  1. Object detection using categorised 3D edges

    DEFF Research Database (Denmark)

    Kiforenko, Lilita; Buch, Anders Glent; Bodenhagen, Leon

    2015-01-01

In this paper we present an object detection method that uses edge categorisation in combination with a local multi-modal histogram descriptor, all based on RGB-D data. Our target application is robust detection and pose estimation of known objects. We propose to apply a recently introduced edge categorisation algorithm for describing objects in terms of their different edge types. Relying on edge information allows our system to deal with objects with little or no texture or surface variation. We show that edge categorisation improves matching performance due to the higher level of discrimination, which is made possible by the explicit use of edge categories in the feature descriptor. We quantitatively compare our approach with the state-of-the-art template-based Linemod method, which also provides an effective way of dealing with texture-less objects; tests were performed on our own object dataset. Our...

  2. Advanced 3D Object Identification System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — During the Phase I effort, OPTRA developed object detection, tracking, and identification algorithms and successfully tested these algorithms on computer-generated...

  3. 3D Face Hallucination from a Single Depth Frame.

    Science.gov (United States)

    Liang, Shu; Kemelmacher-Shlizerman, Ira; Shapiro, Linda G

    2014-12-01

We present an algorithm that takes a single frame of a person's face from a depth camera, e.g., Kinect, and produces a high-resolution 3D mesh of the input face. We leverage a dataset of 3D face meshes of 1204 distinct individuals ranging from age 3 to 40, captured in a neutral expression. We divide the input depth frame into semantically significant regions (eyes, nose, mouth, cheeks) and search the database for the best matching shape per region. We further combine the input depth frame with the matched database shapes into a single mesh that results in a high-resolution shape of the input person. Our system is fully automatic and uses only depth data for matching, making it invariant to imaging conditions. We evaluate our results using ground truth shapes, as well as compare to state-of-the-art shape estimation methods. We demonstrate the robustness of our local matching approach with high-quality reconstruction of faces that fall outside of the dataset span, e.g., faces older than 40 years old, facial expressions, and different ethnicities.

  4. 3D hand tracking using Kalman filter in depth space

    Science.gov (United States)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
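A constant-velocity Kalman filter for a 3D hand position can be sketched as below. This is an illustrative sketch of the tracking stage only; the state layout and the noise levels q and r are assumptions, not values from the article:

```python
import numpy as np

class Kalman3D:
    """Constant-velocity Kalman filter for a 3D hand position (illustrative
    sketch; the noise levels q and r are assumptions, not the article's)."""
    def __init__(self, dt=1.0, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                               # state [px py pz vx vy vz]
        self.P = np.eye(6)                                 # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we measure position only
        self.Q = q * np.eye(6)                             # process noise
        self.R = r * np.eye(3)                             # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the detected 3D hand position z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

kf = Kalman3D()
for t in range(50):                                        # hand moving steadily along x
    est = kf.step(np.array([0.1 * t, 0.0, 0.0]))
```

Because the constant-velocity model matches a steadily moving hand, the filter's position estimate locks onto the trajectory after a short transient, and its prediction step bridges frames where detection fails.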

  5. 3D Image Synthesis for B—Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; 等

    1991-01-01

This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space into a 3D image in 3D discrete space.

  6. Several Strategies on 3D Modeling of Manmade Objects

    Institute of Scientific and Technical Information of China (English)

    SHAO Zhenfeng; LI Deren; CHENG Qimin

    2004-01-01

Several different strategies of 3D modeling are adopted for different kinds of manmade objects. Firstly, for manmade objects with a regular structure, if 2D information is available and elevation information can be obtained conveniently, 3D modeling can be executed directly. Secondly, for manmade objects with a comparatively complicated structure for which a related stereo image pair can be acquired, we accomplish 3D modeling, in the light of a topology-based 3D model, by integrating automatic and semi-automatic object extraction. Thirdly, for the most complicated objects, whose geometrical information cannot be obtained completely from a stereo image pair, we turn to a topological 3D model based on CAD.

  7. 2D but not 3D: pictorial-depth deficits in a case of visual agnosia.

    Science.gov (United States)

    Turnbull, Oliver H; Driver, Jon; McCarthy, Rosaleen A

    2004-01-01

Patients with visual agnosia exhibit acquired impairments in visual object recognition that may or may not involve deficits in low-level perceptual abilities. Here we report a case (patient DM) who presented with object-recognition deficits after a head injury. He still appears able to extract 2D information from the visual world in a relatively intact manner, but his ability to extract pictorial information about 3D object structure is greatly compromised. His copying of line drawings is relatively good, and he is accurate and shows apparently normal mental rotation when matching or judging objects tilted in the picture plane. But he performs poorly on a variety of tasks requiring 3D representations to be derived from 2D stimuli, including: performing mental rotation in depth, rather than in the picture plane; judging the relative depth of two regions depicted in line drawings of objects; and deciding whether a line drawing represents an object that is 'impossible' in 3D. Interestingly, DM failed to show several visual illusions experienced by normal observers (Müller-Lyer and Ponzo) that some authors have attributed to pictorial depth cues. Taken together, these findings indicate a deficit in achieving 3D interpretations of objects from 2D pictorial cues, which may contribute to the object-recognition problems in agnosia.

  8. DESIGN OF 3D TOPOLOGICAL DATA STRUCTURE FOR 3D CADASTRE OBJECTS

    Directory of Open Access Journals (Sweden)

    N. A. Zulkifli

    2016-09-01

This paper describes the design of a 3D modelling and topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. The Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.

  9. Design of 3d Topological Data Structure for 3d Cadastre Objects

    Science.gov (United States)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

This paper describes the design of a 3D modelling and topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. The Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.
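The five TEN classes named above can be sketched as plain data types. The attribute names below are illustrative assumptions, not the actual LADM schema; the point of a TEN is that volumes decompose into tetrahedra, so quantities such as volume follow directly:

```python
from dataclasses import dataclass

# A minimal sketch of the five LADM/TEN classes named above (point, boundary
# face string, boundary face, tetrahedron, spatial unit). Attribute names are
# illustrative assumptions, not the actual LADM schema.

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    z: float

@dataclass
class BoundaryFaceString:
    points: list          # ordered Points outlining part of a boundary

@dataclass
class BoundaryFace:
    strings: list         # BoundaryFaceStrings enclosing one planar face

@dataclass
class Tetrahedron:
    vertices: tuple       # exactly four Points

    def volume(self):
        # |det[b-a, c-a, d-a]| / 6
        a, b, c, d = self.vertices
        u = (b.x - a.x, b.y - a.y, b.z - a.z)
        v = (c.x - a.x, c.y - a.y, c.z - a.z)
        w = (d.x - a.x, d.y - a.y, d.z - a.z)
        det = (u[0] * (v[1] * w[2] - v[2] * w[1])
               - u[1] * (v[0] * w[2] - v[2] * w[0])
               + u[2] * (v[0] * w[1] - v[1] * w[0]))
        return abs(det) / 6.0

@dataclass
class SpatialUnit:
    tetrahedra: list      # a cadastral volume represented as a TEN

    def volume(self):
        return sum(t.volume() for t in self.tetrahedra)

unit = SpatialUnit([Tetrahedron((Point(0, 0, 0), Point(1, 0, 0),
                                 Point(0, 1, 0), Point(0, 0, 1)))])
print(unit.volume())  # -> 0.16666666666666666 (the unit tetrahedron, 1/6)
```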

  10. 3D imaging and wavefront sensing with a plenoptic objective

    Science.gov (United States)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to counteract the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, but also two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  11. Identifying positioning-based attacks against 3D printed objects and the 3D printing process

    Science.gov (United States)

    Straub, Jeremy

    2017-05-01

    Zeltmann et al. demonstrated that structural integrity and other quality damage to a 3D printed object can be caused by changing its position on the printer's build plate. On some printers, for example, object surfaces and support members may be stronger when oriented parallel to the X or Y axis. The challenge in assuring 3D printed object orientation is that it can be altered in numerous places throughout the system. This paper considers attack scenarios and discusses where attacks that change printing orientation can occur in the process. An imaging-based solution to combat this problem is presented.

  12. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two classes of methods for reconstructing high-resolution objects and images: passive methods and active methods. The choice depends on the type of information available as input for modeling the 3D object. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases, that combines active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  13. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
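    The matching stage described above, a degree of match computed over object orientations at each pixel, can be sketched with normalised cross-correlation over a square template's 90-degree rotations; the paper's 3D object models and orientation sampling are far richer, so every name and parameter here is illustrative.

```python
import numpy as np

def best_match(image, template):
    """Per-pixel maximum degree of match over orientations.

    Sketch: normalised cross-correlation of a *square* template against
    every window, maximised over the template's four 90-degree rotations
    (np.rot90 keeps the sketch dependency-free).
    """
    ih, iw = image.shape
    th, tw = template.shape
    out = np.full((ih - th + 1, iw - tw + 1), -np.inf)
    for k in range(4):
        tz = np.rot90(template, k)
        tz = tz - tz.mean()
        tn = np.linalg.norm(tz) + 1e-12
        for y in range(ih - th + 1):
            for x in range(iw - tw + 1):
                w = image[y:y + th, x:x + tw]
                wz = w - w.mean()
                c = (wz * tz).sum() / ((np.linalg.norm(wz) + 1e-12) * tn)
                out[y, x] = max(out[y, x], c)
    return out
```

Unambiguous local maxima of `out` would then seed the cueing stage's figure-of-merit ranking.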

  14. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
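    The robust plane-fitting step the framework builds on can be sketched with generic RANSAC; the paper's fitter adapts to the RGB-D sensor's noise profile, which is omitted here, and all parameter values are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a dominant plane to 3D points with RANSAC.

    Not the paper's noise-adaptive fitter -- a generic sketch of the
    "fit plane segments robustly" step. Returns (normal, d) with
    normal . p + d = 0, plus the boolean inlier mask.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_count, best_mask, plane = -1, None, None
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(a)
        mask = np.abs(points @ n + d) < tol
        if mask.sum() > best_count:
            best_count, best_mask, plane = mask.sum(), mask, (n, d)
    return plane[0], plane[1], best_mask
```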

  15. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    Science.gov (United States)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, the latter also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influence of external environmental factors. This continuum is evident in 3D plant visualisation, from plants rendered with photographed billboarded images to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model the physical reactions of plants to external factors, and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of currently available plant-based object simulation programs, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, their features and use, and the possible opportunities in deploying them for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  16. Efficient and high speed depth-based 2D to 3D video conversion

    Science.gov (United States)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing, with wide applications such as medicine, satellite imaging, and 3D television. Such stereo content can be generated directly using S3D cameras; however, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable alternative. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be applied to arbitrary videos without prior database training. It neither faces the limitations of a single monocular depth cue nor combines multiple depth cues, thus consuming less processing time without affecting the quality of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, producing the 3D video output without human intervention, and with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
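    The core depth-assignment heuristic, depth from the y coordinate of each object's bottom-most pixel, can be sketched as follows; the mask-based interface and the normalisation are assumptions, not the paper's exact scheme.

```python
import numpy as np

def assign_object_depths(height, object_masks):
    """Depth map from the y coordinate of each object's bottom-most pixel.

    Ground-plane heuristic: objects whose lowest pixel is nearer the
    bottom of the frame are assumed closer to the camera. Depth is
    normalised to [0, 1], with 0 nearest and background set to 1 (far);
    the normalisation is illustrative.
    """
    depth = np.ones((height, object_masks[0].shape[1]))
    for mask in object_masks:
        rows_with_object = np.nonzero(mask.any(axis=1))[0]
        bottom = rows_with_object.max()
        depth[mask] = 1.0 - (bottom + 1) / height
    return depth
```

Shifting each pixel horizontally in proportion to this depth map would then synthesise the second stereo view.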

  17. Encryption of 3D Point Cloud Object with Deformed Fringe

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2016-01-01

    Full Text Available A 3D point cloud object encryption method is proposed in this study. In the method, a mapping relationship between 3D coordinates is formulated and the Z coordinate is transformed into a deformed fringe by a phase coding method. The deformed fringe and the gray image are used for encryption and decryption with a simulated off-axis digital Fresnel hologram. Results indicate that the proposed method is able to accurately decrypt the coordinates and gray image of the 3D object. The method is also robust against occlusion attacks.
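    The phase-coding step, transforming the Z coordinate into a deformed fringe, can be sketched as a phase-modulated cosine carrier; the carrier period and phase gain chosen here are illustrative, not the paper's values.

```python
import numpy as np

def encode_depth_fringe(z, period=8.0):
    """Phase-code a depth map into a deformed cosine fringe.

    The idea from the abstract: Z modulates the phase of a carrier,
    I(x, y) = 0.5 + 0.5*cos(2*pi*x/period + k*z(x, y)).
    The gain k (here mapping the full depth range to one fringe period)
    is an illustrative choice.
    """
    h, w = z.shape
    x = np.arange(w)[None, :]
    k = 2 * np.pi / np.ptp(z) if np.ptp(z) > 0 else 1.0
    fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / period + k * z)
    return fringe, k
```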

  18. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact, and lacunarity degree.

  19. Object Recognition Using a 3D RFID System

    OpenAIRE

    Roh, Se-gon; Choi, Hyouk Ryeol

    2009-01-01

    Up to now, object recognition in robotics has typically been done by vision, ultrasonic sensors, laser range finders, etc. Recently, RFID has emerged as a promising technology that can strengthen object recognition. In this chapter, the 3D RFID system and the 3D tag are presented. The proposed RFID system can determine whether an object, as well as other tags, exists, and can also estimate the orientation and position of the object. This feature considerably reduces the dependence of the robot on o...

  20. Embedding objects during 3D printing to add new functionalities.

    Science.gov (United States)

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing allows objects to be integrated and embedded during printing, and FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion experiments with a blue food dye solution were used to visually confirm fluid flow and/or perfusion through the embedded porous membrane in the 3D printed fluidic devices. Like typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed; since optical transparency is highly desirable in fluidic devices, integrated glass cover slips or polystyrene films provide a perfectly transparent optical window for observation and visualization. In addition, they provide a compatible flat, smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous-perfusion cell culture or biocatalytic synthesis, without the need for any post-print device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optics. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  1. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    Science.gov (United States)

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  2. Depth-color fusion strategy for 3-D scene modeling with Kinect.

    Science.gov (United States)

    Camplani, Massimo; Mantecon, Tomas; Salgado, Luis

    2013-12-01

    Low-cost depth cameras, such as Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; objects' depth boundaries are refined; and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving-object silhouettes are generated.
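    The joint-bilateral idea the paper adapts can be sketched in its textbook form, where colour similarity in a single-channel guide image gates the spatial smoothing of depth; the paper's edge-uncertainty map and foreground handling are omitted, and all parameter values are illustrative.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint (cross) bilateral filter: smooth depth, weighting each
    neighbour by spatial distance and by similarity in the guide image.

    Brute-force sketch over a (2*radius+1)^2 window; guide is assumed
    single-channel, same shape as depth.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2)
                        / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = (wgt * depth[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```

Because the range weight comes from the guide rather than the depth itself, noisy depth is smoothed while colour edges (and hence depth boundaries aligned with them) are preserved.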

  3. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    Directory of Open Access Journals (Sweden)

    L. Gao

    2015-08-01

    Full Text Available With the development of the economy, fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, and spatial database updating. 3D GIS has attracted more and more attention from academia, industry, and government with the increasing requirements for interoperability and integration of different sources of data. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects created by integrating 2D road polygon data with LiDAR point clouds and other height information, such as spot height data, in Hong Kong Island. The quality of the created 3D road polygon data set is evaluated in terms of vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error, and completeness error, and the final results are presented.

  4. Vel-IO 3D: A tool for 3D velocity model construction, optimization and time-depth conversion in 3D geological modeling workflow

    Science.gov (United States)

    Maesano, Francesco E.; D'Ambrogi, Chiara

    2017-02-01

    We present Vel-IO 3D, a tool for 3D velocity model creation and time-depth conversion, as part of a workflow for 3D model building. The workflow addresses the management of large subsurface datasets, mainly seismic lines and well logs, and the construction of a 3D velocity model able to describe the variation of the velocity parameters related to strong facies and thickness variability and to high structural complexity. Although it is applicable in many geological contexts (e.g. foreland basins, large intermountain basins), it is particularly suitable in wide flat regions, where subsurface structures have no surface expression. The Vel-IO 3D tool is composed of three scripts, written in Python 2.7.11, that automate i) 3D instantaneous velocity model building, ii) velocity model optimization, and iii) time-depth conversion. They determine a 3D geological model that is consistent with the primary geological constraints (e.g. the depth of markers in wells). The proposed workflow and the Vel-IO 3D tool have been tested, during the EU-funded project GeoMol, through the construction of the 3D geological model of a flat region, 5700 km2 in area, located in the central part of the Po Plain. The final 3D model showed the efficiency of the workflow and of the Vel-IO 3D tool in the management of large amounts of data in both the time and depth domains. A 4-layer-cake velocity model has been applied to a succession several thousand metres thick (5000-13,000 m), with 15 horizons from the Triassic up to the Pleistocene, complicated by Mesozoic extensional tectonics and by buried thrusts related to the Southern Alps and Northern Apennines.
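    A minimal sketch of the kind of time-depth conversion such a layer-cake instantaneous velocity model performs, assuming a linear velocity law v(z) = v0 + k·z within a layer (a common choice, not necessarily Vel-IO 3D's actual code):

```python
import math

def twt_to_depth(twt_s, v0, k):
    """Time-depth conversion for a linear instantaneous velocity law
    v(z) = v0 + k*z, integrated over one-way time.

    twt_s : two-way travel time (s)
    v0    : velocity at the layer top (m/s)
    k     : vertical velocity gradient (1/s); k -> 0 reduces to v0*t
    """
    t_one_way = twt_s / 2.0
    if abs(k) < 1e-12:
        return v0 * t_one_way
    return (v0 / k) * (math.exp(k * t_one_way) - 1.0)
```

Applying this layer by layer, top-down, with each layer's own (v0, k), gives the multi-layer-cake conversion the abstract describes.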

  5. A recipe for consistent 3D management of velocity data and time-depth conversion using Vel-IO 3D

    Science.gov (United States)

    Maesano, Francesco E.; D'Ambrogi, Chiara

    2017-04-01

    3D geological model production and related basin analyses need large and consistent seismic datasets, and hopefully well logs, to support correlation and calibration; the workflow and tools used to manage and integrate different types of data control the soundness of the final 3D model. Even though seismic interpretation is a basic early step in such a workflow, the most critical steps in obtaining a comprehensive 3D model useful for further analyses are the construction of an effective 3D velocity model and a well-constrained time-depth conversion. We present a complex workflow that includes the comprehensive management of a large seismic dataset and velocity data, the construction of a 3D instantaneous multi-layer-cake velocity model, and the time-depth conversion of a highly heterogeneous geological framework, including both depositional and structural complexities. The core of the workflow is the construction of the 3D velocity model using the Vel-IO 3D tool (Maesano and D'Ambrogi, 2017; https://github.com/framae80/Vel-IO3D), which is composed of the following three scripts, written in Python 2.7.11 under the ArcGIS ArcPy environment: i) the 3D instantaneous velocity model builder creates a preliminary 3D instantaneous velocity model using key horizons in the time domain and velocity data obtained from the analysis of well and pseudo-well logs. The script applies spatial interpolation to the velocity parameters and calculates the depth of each point on each horizon bounding the layer-cake velocity model. ii) the velocity model optimizer improves the consistency of the velocity model by adding new velocity data indirectly derived from measured depths, thus reducing the geometrical uncertainties in areas located far from the original velocity data. 
iii) the time-depth converter runs the time-depth conversion of any object located inside the 3D velocity model. The Vel-IO 3D tool allows one to create 3D geological models consistent with the primary geological constraints (e

  6. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views; a total of approximately 3204 views. Each...... geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching...... performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat...

  7. Semantic 3D object maps for everyday robot manipulation

    CERN Document Server

    Rusu, Radu Bogdan

    2013-01-01

    The book written by Dr. Radu B. Rusu presents a detailed description of 3D Semantic Mapping in the context of mobile robot manipulation. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models that include the objects present in the world, together with their position, form, and other semantic aspects, as well as interpretations of these objects with respect to the robot tasks.   The book proposes novel 3D feature representations called Point Feature Histograms (PFH), as well as frameworks for the acquisition and processing of Semantic 3D Object Maps with contributions to robust registration, fast segmentation into regions, and reliable object detection, categorization, and reconstruction. These contributions have been fully implemented and empirically evaluated on different robotic systems, and have been the original kernel to the widely successful open-source project the Point Cloud Library (PCL) -- see http://poi...

  8. Automation of 3D micro object handling process

    DEFF Research Database (Denmark)

    Gegeckaite, Asta; Hansen, Hans Nørgaard

    2007-01-01

    Most of the micro objects in industrial production are handled with manual labour or in semiautomatic stations. Manual labour usually makes handling and assembly operations highly flexible, but slow, relatively imprecise and expensive. Handling of 3D micro objects poses special challenges due...... to the small absolute scale. In this article, the results of the pick-and-place operations of three different 3D micro objects were investigated. This study shows that depending on the correct gripping tool design as well as handling and assembly scenarios, a high success rate of up to 99% repeatability can...

  9. 3D Object Recognition Based on Linear Lie Algebra Model

    Institute of Scientific and Technical Information of China (English)

    LI Fang-xing; WU Ping-dong; SUN Hua-fei; PENG Lin-yu

    2009-01-01

    A surface model called the fibre bundle model and a 3D object model based on the linear Lie algebra model are proposed. Then an algorithm for 3D object recognition using the linear Lie algebra models is presented. It is a convenient recognition method for objects which are symmetric about some axis. By using the presented algorithm, the representation matrices of the fibre or the base curve can be obtained from only finitely many points of the linear Lie algebra model. Finally, some recognition results on real objects are given.

  10. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    Science.gov (United States)

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that the images are presented on a unique surface and specifying the relative depth of that surface. The focus of this study is on a second problem, arising when a 3D object's distance differs from the display distance. As binocular disparity must be scaled using an estimate of viewing distance, object depth can thus be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated at farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was bigger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open-loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by accommodation and vergence distances and that the accommodation-distance effect was reduced with a larger depth-of-focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.
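    The disparity-scaling relation the study builds on can be written down directly: perceived depth from a relative disparity grows with the square of the estimated viewing distance, so misestimating that distance distorts depth. The small-angle form and the 0.065 m interocular value below are standard textbook figures, not taken from the paper.

```python
def depth_from_disparity(disparity_rad, distance_m, iod_m=0.065):
    """Small-angle approximation linking relative binocular disparity
    (radians) to perceived depth interval:

        delta_d ~ disparity * D**2 / IOD

    D is the (estimated) viewing distance and IOD the interocular
    distance; doubling D quadruples the depth recovered from the same
    disparity, which is why distance misestimates matter.
    """
    return disparity_rad * distance_m ** 2 / iod_m
```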

  11. Content-adaptive pyramid representation for 3D object classification

    DEFF Research Database (Denmark)

    Kounalakis, Tsampikos; Boulgouris, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    In this paper we introduce a novel representation for the classification of 3D images. Unlike most current approaches, our representation is not based on a fixed pyramid but adapts to image content and uses image regions instead of rectangular pyramid scales. Image characteristics, such as depth ...... and color, are used for defining regions within images. Multiple region scales are formed in order to construct the proposed pyramid image representation. The proposed method achieves excellent results in comparison to conventional representations....

  12. 3-D Object Recognition from Point Cloud Data

    Science.gov (United States)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. 
Several case
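    The DSM/DEM step mentioned above can be sketched as a normalised-DSM threshold: subtracting the bare-earth DEM from the DSM leaves object heights above ground, and thresholding groups the 3D object cells. The 2 m default is illustrative.

```python
import numpy as np

def object_mask(dsm, dem, min_height=2.0):
    """Normalised DSM (nDSM = DSM - DEM): keep cells rising at least
    min_height above the bare-earth surface -- the first grouping step
    separating buildings/trees from terrain.
    """
    ndsm = dsm - dem
    return ndsm >= min_height
```

Connected regions of this mask would then feed the later steps (building/tree separation, boundary tracing, roof construction).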

  13. Object 3D surface reconstruction approach using portable laser scanner

    Science.gov (United States)

    Xu, Ning; Zhang, Wei; Zhu, Liye; Li, Changqing; Wang, Shifeng

    2017-06-01

    Environment perception plays a key role in a robot system. The 3D surfaces of objects can provide essential information for the robot to recognize them. This paper presents an approach to reconstructing objects' 3D surfaces using a portable laser scanner we designed, which consists of a single-layer laser scanner, an encoder, a motor, a power supply, and mechanical components. The captured point cloud data is processed to remove discrete outlier points, denoise, stitch, and register the scans. Then triangular mesh generation from the point cloud is accomplished using Gaussian bilateral filtering, real-time ICP registration, and the greedy projection triangulation algorithm. The experimental results show the feasibility of the designed device and the proposed algorithm.
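    The outlier-removal step of such a pipeline can be sketched with statistical outlier removal (the abstract does not name its exact method, so this is a stand-in); brute-force pairwise distances keep the sketch dependency-free at O(n²) cost.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the cloud-wide mean of that
    statistic by more than std_ratio standard deviations.

    points : (N, 3) array; returns the filtered (M, 3) array.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # column 0 of the sorted rows is the self-distance (0), so skip it
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```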

  14. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    Science.gov (United States)

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  15. Laser embedding electronics on 3D printed objects

    Science.gov (United States)

    Kirleis, Matthew A.; Simonson, Duane; Charipar, Nicholas A.; Kim, Heungsoo; Charipar, Kristin M.; Auyeung, Ray C. Y.; Mathews, Scott A.; Piqué, Alberto

    2014-03-01

    Additive manufacturing techniques such as 3D printing are able to generate reproductions of a part in free space without the use of molds; however, the objects produced lack electrical functionality from an applications perspective. At the same time, techniques such as inkjet and laser direct-write (LDW) can be used to print electronic components and connections onto already existing objects, but are not capable of generating a full object on their own. The approach missing to date is the combination of 3D printing processes with direct-write of electronic circuits. Among the numerous direct write techniques available, LDW offers unique advantages and capabilities given its compatibility with a wide range of materials, surface chemistries and surface morphologies. The Naval Research Laboratory (NRL) has developed various LDW processes ranging from the non-phase transformative direct printing of complex suspensions or inks to lase-and-place for embedding entire semiconductor devices. These processes have been demonstrated in digital manufacturing of a wide variety of microelectronic elements ranging from circuit components such as electrical interconnects and passives to antennas, sensors, actuators and power sources. At NRL we are investigating the combination of LDW with 3D printing to demonstrate the digital fabrication of functional parts, such as 3D circuits. Merging these techniques will make possible the development of a new generation of structures capable of detecting, processing, communicating and interacting with their surroundings in ways never imagined before. This paper shows the latest results achieved at NRL in this area, describing the various approaches developed for generating 3D printed electronics with LDW.

  16. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    Science.gov (United States)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision; the generated intensity map contains texture data with considerable noise. We used the intensity maps for extracting tiepoints, and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
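
The second step fits a 3D similarity transformation (scale, rotation, translation) to matched 3D tiepoints. A closed-form sketch using Umeyama's SVD solution, assuming numpy; the function name is hypothetical and the paper does not state which solver it uses.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form 3D similarity (s, R, t) with dst ~= s * R @ src + t (Umeyama)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                 # cross-covariance of tiepoints
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                 # keep R a proper rotation
    R = U @ D @ Vt
    var_s = (sc ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given noiseless correspondences this recovers the generating transform exactly; with noisy tiepoints it is the least-squares optimum.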

  17. EFFICIENT IMPLEMENTATION OF 3D FILTER FOR MOVING OBJECT EXTRACTION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the design and implementation of Multi-Dimensional (MD) filters, particularly 3-Dimensional (3D) filters, are presented. Digital (discrete-domain) filters applied to image and video signal processing, using novel 3D multirate algorithms for efficient implementation of moving object extraction, are engineered with an example. Multirate (decimation and/or interpolation) signal processing algorithms can achieve significant savings in computation and memory usage. The proposed algorithm uses the mapping relations of z-transfer functions between non-multirate and multirate mathematical expressions in terms of time-varying coefficients instead of the traditional polyphase-decomposition counterparts. These mapping properties can be readily used to efficiently analyze and synthesize MD multirate filters.

  18. Exploring local regularities for 3D object recognition

    Science.gov (United States)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when the two local regularities L-MSDA and L-MSDSM are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  19. Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map

    Directory of Open Access Journals (Sweden)

    Sen Han

    2016-06-01

    Full Text Available Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and the objects in the scene. Combining the depth map with an existing multiview image, images at any virtual viewpoint position can be synthesized efficiently, allowing more realistic 3D scenes to be displayed. Unlike a conventional 2D texture image, the depth map contains a lot of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when MD coding is used for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder a Lagrangian optimization approach is designed to determine how many high-frequency coefficients in the DCT domain to remove. To keep the computational complexity low, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding conventional scheme, the proposed method achieves better central and side rate-distortion performance.
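
An illustrative sketch of the core operation: dropping high-frequency 2D-DCT coefficients of a depth block and reconstructing by zero-padding. It assumes numpy, builds an orthonormal DCT-II basis directly, and uses a hand-picked keep threshold where the paper uses Lagrangian optimization.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

def truncate_high_freq(block, keep):
    """Zero all 2D-DCT coefficients with u + v >= keep, then reconstruct
    by (implicit) zero-padding with the inverse transform."""
    n = block.shape[0]
    C = dct_matrix(n)
    coef = C @ block @ C.T                     # forward 2D DCT
    u, v = np.indices((n, n))
    coef[u + v >= keep] = 0.0                  # drop high frequencies
    return C.T @ coef @ C                      # inverse DCT of the padded block
```

Smooth depth blocks concentrate energy in low frequencies, so aggressive truncation changes them little, which is why the redundancy can be removed before MD transmission.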

  20. Automatic 2D-to-3D video conversion by monocular depth cues fusion and utilizing human face landmarks

    Science.gov (United States)

    Fard, Mani B.; Bayazit, Ulug

    2013-12-01

    In this paper, we propose a hybrid 2D-to-3D video conversion system to recover the 3D structure of the scene. Depending on the scene characteristics, geometric or height depth information is adopted to form the initial depth map. This depth map is fused with color-based depth cues to construct the final depth map of the scene background. The depths of the foreground objects are estimated after their classification into human and non-human regions. Specifically, the depth of a non-human foreground object is directly calculated from the depth of the region behind it in the background. To acquire more accurate depth for the regions containing a human, the estimation of the distance between face landmarks is also taken into account. Finally, the computed depth information of the foreground regions is superimposed on the background depth map to generate the complete depth map of the scene, which is the main goal in the process of converting 2D video to 3D.
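
The final composition step can be sketched as follows, assuming numpy; the choice of the median as the "depth of the region behind" a non-human object is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def object_depth_from_background(bg_depth, obj_mask):
    """Depth for a non-human foreground object, taken from the background
    region it occludes (here: the median background depth under its mask)."""
    return np.median(bg_depth[obj_mask])

def compose_depth(bg_depth, object_masks):
    """Superimpose per-object depths on the background depth map."""
    depth = bg_depth.astype(float).copy()
    for mask in object_masks:
        depth[mask] = object_depth_from_background(bg_depth, mask)
    return depth
```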

  1. Lagrangian 3D tracking of fluorescent microscopic objects under flow

    CERN Document Server

    Darnige, T; Bohec, P; Lindner, A; Clément, E

    2016-01-01

    We detail the elaboration of a tracking device, mounted on an epifluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent micro-objects. The system is based on real-time image processing driving a mechanical X-Y stage displacement and a Z refocusing piezo mover so as to keep the designated object at a fixed position in a moving frame. Track coordinates with respect to the microfluidic device, as well as images of the object in the laboratory reference frame, are thus obtained at a frequency of several tens of Hertz. This device is particularly well adapted to following the trajectories of motile micro-organisms in microfluidic devices with or without flow.
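
The centering feedback can be sketched as a proportional controller: each frame, compute the object's intensity centroid, and command an X-Y stage move that cancels its offset from the image centre. The gain and pixel-to-micrometre scale below are hypothetical values, not the paper's.

```python
import numpy as np

def stage_correction(frame, px_to_um=0.5, gain=0.8):
    """Proportional X-Y correction (micrometres) that re-centres the
    intensity centroid of a grayscale frame."""
    h, w = frame.shape
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    cy = (ys * frame).sum() / total            # intensity-weighted centroid
    cx = (xs * frame).sum() / total
    # move the stage so the object returns to the image centre
    return gain * px_to_um * (cx - (w - 1) / 2), gain * px_to_um * (cy - (h - 1) / 2)
```

In the real instrument this runs in the acquisition loop, with an analogous focus metric driving the Z piezo.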

  2. Divided attention limits perception of 3-D object shapes.

    Science.gov (United States)

    Scharff, Alec; Palmer, John; Moore, Cathleen M

    2013-01-01

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.

  3. 3D Reconstruction of End-Effector in Autonomous Positioning Process Using Depth Imaging Device

    Directory of Open Access Journals (Sweden)

    Yanzhu Hu

    2016-01-01

    Full Text Available The real-time calculation of positioning error, error correction, and state analysis has always been a difficult challenge in the process of manipulator autonomous positioning. In order to solve this problem, a simple depth imaging device (Kinect) is used, and a Kalman filtering method based on three-frame subtraction to capture the end-effector motion is proposed in this paper. Moreover, a backpropagation (BP) neural network is adopted to recognize the target. At the same time, a batch point cloud model is proposed in accordance with the depth video stream to calculate the space coordinates of the end-effector and the target. Then, a 3D surface is fitted by using radial basis functions (RBF) and morphology. The experiments have demonstrated that the end-effector positioning error can be corrected in a short time. The prediction accuracies of both position and velocity have reached 99%, and a recognition rate of 99.8% has been achieved for a cylindrical object. Furthermore, the gradual convergence of the end-effector center (EEC) to the target center (TC) shows that the autonomous positioning is successful. Simultaneously, 3D reconstruction is also completed to analyze the positioning state. Hence, the proposed algorithm is competent for autonomous positioning of a manipulator, and its effectiveness is also validated by the 3D reconstruction. The computational ability is increased and system efficiency is greatly improved.
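
Three-frame subtraction, the motion-capture primitive named above, can be sketched in a few lines of numpy: a pixel is declared moving only if it changed both between frames 1 and 2 and between frames 2 and 3. The threshold value is illustrative; the Kalman and BP stages are omitted.

```python
import numpy as np

def three_frame_motion(f1, f2, f3, thresh=10):
    """Motion mask from three consecutive grayscale frames: a pixel is
    moving if it changed in f1->f2 AND in f2->f3 (suppresses ghosting
    that plain two-frame differencing leaves behind)."""
    d1 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d2 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d1 & d2
```

The AND keeps only the object's position in the middle frame, which is what a tracker then feeds to its state estimator.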

  4. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
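
The FFT evaluation of the phase similarity measure can be sketched by encoding each pixel's phase angle as a unit complex number; the real part of their cross-correlation sums cos(theta_img - theta_model) over the model support, for every translation at once. This is a simplified, translation-only sketch (the paper also searches orientation), assuming numpy.

```python
import numpy as np

def phase_similarity_map(image_phase, model_phase, model_mask):
    """Score at offset (dy, dx) = sum over model pixels of
    cos(theta_image - theta_model), computed for all offsets via FFT."""
    zi = np.exp(1j * image_phase)
    zm = np.exp(1j * model_phase) * model_mask      # zero outside model support
    F = np.fft.fft2(zi) * np.conj(np.fft.fft2(zm, s=image_phase.shape))
    return np.real(np.fft.ifft2(F))                 # circular cross-correlation
```

Peaks of the returned map are the candidate model positions that would be sorted and shown to analysts.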

  5. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a single-detector scanning imaging system. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
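
The range-from-IF relation underlying chirp (FMCW) radar is standard: for a linear chirp of bandwidth B swept in time T, a target at range R produces a beat frequency f_b = 2RS/c with slope S = B/T, so R = c f_b / (2S). A minimal sketch with illustrative numbers (the paper's actual chirp parameters are not given here):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_beat_hz, bandwidth_hz, sweep_time_s):
    """FMCW radar: target range from the IF (beat) frequency,
    R = c * f_b / (2 * S) with chirp slope S = B / T."""
    slope = bandwidth_hz / sweep_time_s
    return C * f_beat_hz / (2.0 * slope)
```

Reading the beat frequency independently at every FPA pixel is what turns the 2D intensity image into a 3D range image.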

  6. A new method to create depth information based on lighting analysis for 2D/3D conversion

    Institute of Scientific and Technical Information of China (English)

    Hyunho Han; Gangseong Lee; Jongyong Lee; Jinsoo Kim; Sanghun Lee

    2013-01-01

    A new method for creating depth information for 2D/3D conversion is proposed. The distance between objects is determined from the distances between the objects and the light source position, which is estimated by analysis of the image. The estimated lighting value is used to normalize the image. A threshold value is determined by a weighted operation between the original image and the normalized image. By applying the threshold value to the original image, the background area is removed. Depth information for the area of interest is then calculated from the lighting changes. The final 3D images converted with the proposed method are used to verify its effectiveness.

  7. Additive manufacturing. Continuous liquid interface production of 3D objects.

    Science.gov (United States)

    Tumbleston, John R; Shirvanyants, David; Ermoshkin, Nikita; Janusziewicz, Rima; Johnson, Ashley R; Kelly, David; Chen, Kai; Pinschmidt, Robert; Rolland, Jason P; Ermoshkin, Alexander; Samulski, Edward T; DeSimone, Joseph M

    2015-03-20

    Additive manufacturing processes such as 3D printing use time-consuming, stepwise layer-by-layer approaches to object fabrication. We demonstrate the continuous generation of monolithic polymeric parts up to tens of centimeters in size with feature resolution below 100 micrometers. Continuous liquid interface production is achieved with an oxygen-permeable window below the ultraviolet image projection plane, which creates a "dead zone" (persistent liquid interface) where photopolymerization is inhibited between the window and the polymerizing part. We delineate critical control parameters and show that complex solid parts can be drawn out of the resin at rates of hundreds of millimeters per hour. These print speeds allow parts to be produced in minutes instead of hours.

  8. Optical 3D sensor for large objects in industrial application

    Science.gov (United States)

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri 1500", is presented. It can be utilised to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for handling of objects. Automatic whole-body measurement is achieved by using sensor-head rotation and changeable object positions, fully computer-controlled. Multi-view measurement is realised using the concept of virtual reference points, so no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: a measurement volume extending from 400 mm up to 1500 mm maximum length, a measurement time between 2 min for 12 images and 20 min for 36 images, and a measurement accuracy below 50 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which are shown in the paper.

  9. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    Science.gov (United States)

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.

  10. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  11. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound, both as a means to enhance the impact of S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals, is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
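
Both cues studied above are straightforward to synthesize: attenuate level with distance and low-pass more strongly as distance grows. A toy sketch assuming numpy; the inverse-distance gain law, the cutoff formula and the one-pole filter are illustrative stand-ins, not the article's actual processing chain.

```python
import numpy as np

def apply_distance_cues(signal, distance_m, sr=44100, ref_m=1.0, absorb=0.05):
    """Simulate distance: level falls as 1/d (volume attenuation) and a
    one-pole low-pass with distance-dependent cutoff removes high end."""
    gain = ref_m / max(distance_m, ref_m)           # inverse-distance attenuation
    cutoff = 16000.0 / (1.0 + absorb * distance_m)  # Hz, toy air-absorption model
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
    out = np.empty(len(signal), dtype=float)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (gain * x - acc)             # y[i] = y[i-1] + a*(x - y[i-1])
        out[i] = acc
    return out
```

Rendering the same source at two distances and comparing levels and spectra is essentially what the listening experiments manipulate.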

  12. Lossy to lossless object-based coding of 3-D MRI data.

    Science.gov (United States)

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
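
The integer-to-integer lifting property that enables lossless coding can be seen in the simplest case, one level of the integer Haar transform: a predict step and an update step, each exactly invertible over the integers.

```python
def haar_lifting_forward(x):
    """One level of the integer Haar transform via lifting (reversible).
    Returns approximation and detail coefficient lists."""
    s, d = [], []
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]          # predict step
        approx = x[i] + (detail >> 1)     # update step (floor halving)
        d.append(detail)
        s.append(approx)
    return s, d

def haar_lifting_inverse(s, d):
    """Undo the lifting steps in reverse order: exact integer reconstruction."""
    x = []
    for approx, detail in zip(s, d):
        even = approx - (detail >> 1)
        x += [even, even + detail]
    return x
```

Because both directions use the identical floor operation, no rounding error can accumulate, which is the property the paper's 3-D wavelet relies on for up-to-lossless quality.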

  13. Modeling 3D Objects for Navigation Purposes Using Laser Scanning

    Directory of Open Access Journals (Sweden)

    Cezary Specht

    2016-07-01

    Full Text Available The paper discusses the creation of 3D models and their applications in navigation. It contains a review of available methods and geometric data sources, focusing mostly on terrestrial laser scanning. It presents a detailed description, from field survey to numerical elaboration, of how to construct an accurate model of a typical few-storey building as a hypothetical reference in complex building navigation. Finally, the paper presents fields where 3D models are being used and their potential new applications.

  14. A Prototypical 3D Graphical Visualizer for Object-Oriented Systems

    Institute of Scientific and Technical Information of China (English)

    1996-01-01

    This paper describes a framework for visualizing object-oriented systems within a 3D interactive environment. The 3D visualizer represents the structure of a program as a Cylinder Net that simultaneously depicts two relationships between objects within 3D virtual space. Additionally, it represents further relationships on demand when objects are moved into local focus. The 3D visualizer is implemented using a 3D graphics toolkit, TOAST, which provides 3D widgets to ease the programming task of 3D visualization.

  15. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    Science.gov (United States)

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  16. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  19. 3D HYBRID DEPTH MIGRATION AND FOUR-WAY SPLITTING SCHEMES

    Institute of Scientific and Technical Information of China (English)

    Wen-sheng Zhang; Guan-quan Zhang

    2006-01-01

The alternating direction implicit (ADI) scheme is usually used in 3D depth migration. It splits the 3D square-root operator along the crossline and inline directions alternately. In this paper, based on the idea of the data line, four-way splitting schemes and their splitting errors for the finite-difference (FD) method and the hybrid method are investigated. The wavefield extrapolation of the four-way splitting scheme is accomplished on a data line and is unconditionally stable. Numerical analysis of the splitting errors shows that two-way FD migration has visible numerical anisotropic errors, and that four-way FD migration has much smaller splitting errors than two-way FD migration. For the hybrid method, the differences in numerical anisotropic errors between the two-way and four-way schemes are small in the case of lower lateral velocity variations. The schemes presented in this paper can be used in 3D post-stack or prestack depth migration. Two numerical calculations of 3D depth migration are completed. One is four-way FD and hybrid 3D post-stack depth migration for an impulse response, which shows that the anisotropic errors can be eliminated effectively in the cases of constant and variable velocity variations. The other is 3D shot-profile prestack depth migration of the SEG/EAGE benchmark model with the two-way hybrid splitting scheme, which presents good imaging results. A Message Passing Interface (MPI) program based on shot number is adopted.
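The operator being split can be sketched in standard one-way wave-extrapolation notation (an illustrative reconstruction, not taken from the paper):

```latex
% One-way (square-root) downward-continuation operator:
\frac{\partial u}{\partial z}
  = \frac{i\omega}{v}\sqrt{1 + \frac{v^{2}}{\omega^{2}}
    \left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right)}\; u .
% Two-way (ADI) splitting replaces the 2D square root by sequential
% 1D operators along the inline (X) and crossline (Y) directions:
\sqrt{1 + X + Y} \;\approx\; \sqrt{1 + X} + \sqrt{1 + Y} - 1 .
```

The approximation is exact when X or Y vanishes but drops the cross terms, so the splitting error peaks near 45° azimuth; adding the two diagonal directions (four-way splitting) is what suppresses this numerical anisotropy.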

  20. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    Science.gov (United States)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
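The 9-intersection idea can be illustrated on rasterized planar regions: represent each region as a set of grid cells, split it into interior, boundary and exterior, and test the nine pairwise intersections. The grid representation and the specific regions below are illustrative assumptions, not the paper's B-Rep formulation:

```python
def parts(region, universe):
    """Split a rasterized region (set of (x, y) cells) into its
    interior, boundary and exterior using 4-connectivity."""
    def nbrs(c):
        x, y = c
        return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    boundary = {c for c in region if nbrs(c) - region}   # touches the outside
    return region - boundary, boundary, universe - region

def nine_intersection(a, b, universe):
    """3x3 Boolean matrix: does each part of A meet each part of B?
    Rows/columns are ordered interior, boundary, exterior."""
    return [[bool(x & y) for y in parts(b, universe)] for x in parts(a, universe)]

universe = {(x, y) for x in range(8) for y in range(8)}
a = {(x, y) for x in range(0, 4) for y in range(0, 4)}
b = {(x, y) for x in range(3, 7) for y in range(3, 7)}
m = nine_intersection(a, b, universe)
```

In this example A and B share only the boundary cell (3, 3), so the interior/interior entry is empty while the boundary/boundary entry is not: the configuration RCC classifies as meet rather than overlap.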

  1. Depth of field simulation for still digital images using a 3D camera

    Directory of Open Access Journals (Sweden)

    Omar Alejandro Rodríguez Rosas

    2016-11-01

In a world where digital photography is almost ubiquitous, the size of image-capturing devices and their lenses limits their ability to achieve shallower depths of field for aesthetic purposes. This work proposes a novel approach to simulate this effect using the color and depth images from a 3D camera. Comparative tests yielded results similar to those of a regular lens.
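A minimal sketch of such a simulation, assuming a grayscale image and an aligned depth map as NumPy arrays: quantize the depth map into layers, blur each layer in proportion to its distance from a chosen focal plane, and composite with coverage normalization. The layer count and blur radii are arbitrary illustrative choices:

```python
import numpy as np

def box_blur(img, r):
    """Separable box blur of radius r (in pixels)."""
    if r == 0:
        return img.astype(float).copy()
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img.astype(float))
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def simulate_dof(img, depth, focal_depth, max_radius=4, layers=5):
    """Quantize depth into layers, blur each layer by its distance
    from the focal plane, and composite with coverage normalization."""
    zmin, zmax = depth.min(), depth.max()
    edges = np.linspace(zmin, zmax + 1e-6, layers + 1)
    out = np.zeros(img.shape, dtype=float)
    acc = np.zeros(img.shape, dtype=float)     # accumulated coverage
    for i in range(layers):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        if not mask.any():
            continue
        z = 0.5 * (edges[i] + edges[i + 1])    # layer centre depth
        r = int(round(max_radius * abs(z - focal_depth) / (zmax - zmin + 1e-6)))
        out += box_blur(img * mask, r)
        acc += box_blur(mask.astype(float), r)
    return out / np.maximum(acc, 1e-6)

img = np.zeros((12, 12)); img[6, 2] = 1.0; img[6, 9] = 1.0   # two bright points
depth = np.zeros((12, 12)); depth[:, 6:] = 10.0              # near / far halves
dof = simulate_dof(img, depth, focal_depth=0.0, max_radius=4, layers=2)
```

The point in the far half is spread over a larger blur circle than the point near the focal plane, which is the qualitative behaviour of a shallow depth of field.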

  2. Depth and Intensity Gabor Features Based 3D Face Recognition Using Symbolic LDA and AdaBoost

    Directory of Open Access Journals (Sweden)

    P. S. Hiremath

    2013-11-01

In this paper, the objective is to investigate what contributions depth and intensity information make to the solution of the face recognition problem when expression and pose variations are taken into account, and a novel system is proposed for combining depth and intensity information in order to improve face recognition performance. In the proposed approach, local features based on Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. Then a novel hierarchical selection scheme embedded in symbolic linear discriminant analysis (Symbolic LDA) with AdaBoost learning is proposed to select the most effective and robust features and to construct a strong classifier. Experiments are performed on three datasets, namely the Texas 3D face database, the Bosphorus 3D face database and the CASIA 3D face database, which contain face images with complex variations, including expressions, poses and long time lapses between two scans. The experimental results demonstrate the improved performance of the proposed method. Since most of the design processes are performed automatically, the proposed approach leads to a potential prototype design of an automatic face recognition system based on the combination of depth and intensity information in face images.
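A toy version of the feature-extraction front end, assuming grayscale depth or intensity crops: build a small bank of Gabor kernels over a few scales and orientations and pool the response magnitudes. The real system uses far larger banks, with Symbolic LDA and AdaBoost on top; the kernel sizes and bank parameters here are invented:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel: a plane wave at orientation
    theta under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, scales=(4, 8), n_orient=4):
    """Mean response magnitude per filter of a small Gabor bank."""
    feats = []
    for wl in scales:
        for j in range(n_orient):
            kern = gabor_kernel(15, wl, np.pi * j / n_orient, wl / 2.0)
            win = sliding_window_view(img, kern.shape)    # valid-mode correlation
            feats.append(np.abs((win * kern).sum(axis=(-1, -2))).mean())
    return np.array(feats)

rng = np.random.default_rng(0)
feats = gabor_features(rng.random((32, 32)))   # 2 scales x 4 orientations
```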

  3. Toward Simultaneous Visual Comfort and Depth Sensation Optimization for Stereoscopic 3-D Experience.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Li, Zhutuan; Jiang, Gangyi; Dai, Qionghai

    2016-10-20

Visual comfort and depth sensation are two important, incongruent counterparts in determining the overall stereoscopic 3-D experience. In this paper, we propose a novel simultaneous visual comfort and depth sensation optimization approach for stereoscopic images. The main motivation of the proposed optimization approach is to enhance the overall stereoscopic 3-D experience. Toward this end, we propose a two-stage solution to address the optimization problem. In the first, layer-independent disparity adjustment process, we iteratively adjust the disparity range of each depth layer to satisfy visual comfort and depth sensation constraints simultaneously. In the subsequent layer-dependent disparity adjustment process, disparity adjustment is implemented based on a defined total energy function built from intra-layer data, inter-layer data and just-noticeable depth difference terms. Experimental results on perceptually uncomfortable and comfortable stereoscopic images demonstrate that, in comparison with existing methods, the proposed method can achieve a reasonable performance balance between visual comfort and depth sensation, leading to a promising overall stereoscopic 3-D experience.
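The layer-independent stage can be caricatured as follows: each depth layer carries a disparity range that is iteratively pulled toward a comfort zone, while a minimum per-layer extent preserves some depth sensation. The comfort bounds, shrink factor and minimum extent below are invented for illustration; the paper's actual method minimizes a defined energy function:

```python
def adjust_layers(layers, comfort=(-30.0, 30.0), min_extent=2.0, shrink=0.9, max_iter=100):
    """layers: list of (d_min, d_max) disparity ranges, one per depth
    layer (pixels). Pull each layer toward the comfort zone while
    never compressing a layer below min_extent, which stands in for
    the depth-sensation constraint."""
    layers = [list(l) for l in layers]
    for _ in range(max_iter):
        lo = min(l[0] for l in layers)
        hi = max(l[1] for l in layers)
        if comfort[0] <= lo and hi <= comfort[1]:
            break                                  # comfort constraint met
        for l in layers:
            centre = 0.5 * (l[0] + l[1]) * shrink  # pull toward zero disparity
            half = max(0.5 * (l[1] - l[0]) * shrink, 0.5 * min_extent)
            l[0], l[1] = centre - half, centre + half
    return [tuple(l) for l in layers]

adjusted = adjust_layers([(-50.0, -20.0), (10.0, 45.0)])
```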

  4. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent

    2014-01-01

    Object detection, recognition and pose estimation in 3D images have gained momentum due to availability of 3D sensors (RGB-D) and increase of large scale 3D data, such as city maps. The most popular approach is to extract and match 3D shape descriptors that encode local scene structure, but omits...

  5. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Science.gov (United States)

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable for extracting depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, and additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy for extracting depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D object recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006
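The depth cue available from such maneuvers is easy to quantify: under a small lateral translation, the bearing of a near surface changes more than that of a far surface, and the difference is the motion parallax. A sketch with made-up distances:

```python
import math

def angular_shift(depth, translation):
    """Change in bearing (radians) of a point initially straight
    ahead at `depth` after a sideways translation of the observer."""
    return math.atan2(translation, depth)

near, far = 0.05, 0.10   # m: raised vs. recessed surface of a stimulus (invented)
t = 0.01                 # m: one lateral translation of the bee (invented)
parallax = angular_shift(near, t) - angular_shift(far, t)   # the depth signal
```

Halving the distance roughly doubles the angular shift, so even millimetre-scale translations yield parallax differences of several degrees at these ranges.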

  6. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    Science.gov (United States)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

Because aberrations severely affect the display performance of the auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution, and the display depth is obtained through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

  7. Depth map resolution enhancement for 2D/3D imaging system via compressive sensing

    Science.gov (United States)

    Han, Juanjuan; Loffeld, Otmar; Hartmann, Klaus

    2011-08-01

This paper introduces a novel approach for post-processing of depth maps which enhances depth map resolution in order to achieve visually pleasing 3D models from a new monocular 2D/3D imaging system consisting of a Photonic Mixer Device (PMD) range camera and a standard color camera. The proposed method adopts the inversion-theory framework called Compressive Sensing (CS). The low-resolution depth map is considered the result of applying blurring and down-sampling to the high-resolution one. Based on the underlying assumption that the high-resolution depth map is compressible in the frequency domain, and on recent theoretical work on CS, the high-resolution version can be estimated and reconstructed by solving a non-linear optimization problem. The improved depth map reconstruction therefore helps to build an improved 3D model of a scene. Experimental results on real data are presented. Meanwhile, the proposed scheme opens new possibilities for applying CS to a multitude of potential applications in multimodal data analysis and processing.

  8. Affordance-based 3D feature for generic object recognition

    Science.gov (United States)

    Iizuka, M.; Akizuki, S.; Hashimoto, M.

    2017-03-01

Techniques for generic object recognition, which targets everyday objects such as cups and spoons, and techniques for approach vector estimation (e.g., estimating the grasp position), which are needed for carrying out tasks involving everyday objects, are considered necessary for the perceptual system of service robots. In this research, we design features for generic object recognition so that they can also be applied to approach vector estimation. To carry out tasks involving everyday objects, estimating the function of the target object is critical. Moreover, a function is shared within each type (class) of everyday objects, just as the function of holding liquid is found in all cups. We thus propose a generic object recognition method that can estimate the approach vector by expressing an object's function as a feature. In a test of generic object recognition of everyday objects, we confirmed that our proposed method achieved a 92% recognition rate, 11% higher than the mainstream generic object recognition approach of using a convolutional neural network (CNN).

  9. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images, and we provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically oriented ring-like structure. We also include this correction method in a registration procedure for dual-color 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
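The correction itself reduces to subtracting a calibrated lateral shift as a function of axial position. A sketch assuming the calibration curves come from bead scans at known z; array names, units and values are illustrative:

```python
import numpy as np

def correct_wobble(xyz, z_cal, dx_cal, dy_cal):
    """Subtract the calibrated depth-dependent lateral shift from
    localization coordinates. xyz: (N, 3) array of localizations (nm);
    z_cal, dx_cal, dy_cal: wobble calibration curves from bead scans."""
    xyz = np.array(xyz, dtype=float)
    xyz[:, 0] -= np.interp(xyz[:, 2], z_cal, dx_cal)   # x correction at this z
    xyz[:, 1] -= np.interp(xyz[:, 2], z_cal, dy_cal)   # y correction at this z
    return xyz

z_cal = np.array([-500.0, 0.0, 500.0])    # nm, axial calibration points
dx_cal = np.array([-40.0, 0.0, 40.0])     # nm, measured lateral shift in x
dy_cal = np.array([20.0, 0.0, -20.0])     # nm, measured lateral shift in y
pts = correct_wobble([[100.0, 100.0, 500.0], [0.0, 0.0, -250.0]], z_cal, dx_cal, dy_cal)
```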

  10. Depth cues versus the simplicity principle in 3D shape perception.

    Science.gov (United States)

    Li, Yunfeng; Pizlo, Zygmunt

    2011-10-01

    Two experiments were performed to explore the mechanisms of human 3D shape perception. In Experiment 1, the subjects' performance in a shape constancy task in the presence of several cues (edges, binocular disparity, shading and texture) was tested. The results show that edges and binocular disparity, but not shading or texture, are important in 3D shape perception. Experiment 2 tested the effect of several simplicity constraints, such as symmetry and planarity on subjects' performance in a shape constancy task. The 3D shapes were represented by edges or vertices only. The results show that performance with or without binocular disparity is at chance level, unless the 3D shape is symmetric and/or its faces are planar. In both experiments, there was a correlation between the subjects' performance with and without binocular disparity. Our study suggests that simplicity constraints, not depth cues, play the primary role in both monocular and binocular 3D shape perception. These results are consistent with our computational model of 3D shape recovery. Copyright © 2011 Cognitive Science Society, Inc.

  11. Spectral transform approaches of 3D coordinates for object classification

    OpenAIRE

    Semenov, N.; Leontiev, A.

    2008-01-01

This article describes one method of processing data for subsequent classification: spectral processing of three-dimensional data. Using a minimal amount of computation, this processing makes it possible to translate an object's coordinates to the origin, rotate the object around any axis, and normalize its size.
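The normalization steps described (translate to a reference point, rotate about an axis, normalize size) might look as follows; the unit-RMS-radius convention is an assumption of this sketch:

```python
import numpy as np

def normalize_object(points, angle=0.0, axis='z'):
    """Translate a 3D point set to its centroid, scale to unit RMS
    radius, then rotate about one coordinate axis by `angle`."""
    p = np.asarray(points, dtype=float)
    p = p - p.mean(axis=0)                          # move to the origin
    p = p / np.sqrt((p ** 2).sum(axis=1).mean())    # normalize size
    c, s = np.cos(angle), np.sin(angle)
    rot = {'z': np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
           'y': np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
           'x': np.array([[1, 0, 0], [0, c, -s], [0, s, c]])}[axis]
    return p @ rot.T

square = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [2.0, 2.0, 0.0]]
out = normalize_object(square)   # centred on the origin, unit RMS radius
```

Because translation, rotation and uniform scaling are removed, any descriptor computed on the output is invariant to the object's original pose and size.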

  12. 3D object detection from roadside data using laser scanners

    Science.gov (United States)

    Tang, Jimmy; Zakhor, Avideh

    2011-03-01

The detection of objects on a given road path by vehicles equipped with range measurement devices is important to many civilian and military applications, such as obstacle avoidance in autonomous navigation systems. In this thesis, we develop a method to detect objects of a specific size lying on a road using an acquisition vehicle equipped with forward-looking Light Detection And Ranging (LiDAR) sensors and an inertial navigation system. We use GPS data to accurately place the LiDAR points in a world map, extract point cloud clusters protruding from the road, and detect objects of interest using weighted random forest trees. We show that our proposed method is effective in identifying objects for several road datasets collected with various object locations and vehicle speeds.

  13. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    Science.gov (United States)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

This paper describes a novel approach of using depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools that were developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results conducted under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of coded video data by 15% in terms of average delta bit rate reduction, which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based coding of video presented in this paper has been further developed by MPEG 3DV and JCT-3V, and this work resulted in even higher compression efficiency, bringing about 20% total delta bit rate reduction for coded MVD data over the reference MVC+D coding. Considering these significant gains, the coding approach proposed in this paper can be beneficial for the development of new 3D video coding standards.

  14. Estimation of foot pressure from human footprint depths using 3D scanner

    Science.gov (United States)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the z_max coordinate minus the z_min coordinate, and the average foot pressure is calculated as the GRF divided by the foot contact area, taken to correspond to the average footprint depth. The footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value using Microsoft Excel to render the footprint depth in different colors. This is only a qualitative study, because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsal and hallux.
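The two estimates described above are simple arithmetic; a sketch with invented GRF and contact-area values:

```python
def footprint_stats(z_coords, grf, contact_area):
    """Deepest point of the footprint (z-range, following the
    z_max - z_min rule above) and mean pressure as GRF over the
    contact area (N / cm^2)."""
    depth = max(z_coords) - min(z_coords)
    return depth, grf / contact_area

# invented example values: GRF of 700 N over a 190 cm^2 contact patch
depth, mean_p = footprint_stats([10.0, 8.5, 7.2], grf=700.0, contact_area=190.0)
```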

  15. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  16. Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering

    CERN Document Server

    Hervier, Thibault; Goulette, François

    2012-01-01

This paper investigates the use of depth images as localisation sensors for 3D map building. The localisation information is derived from the 3D data thanks to the ICP (Iterative Closest Point) algorithm. The covariance of the ICP, and thus of the localisation error, is analysed and described by a Fisher Information Matrix. It is advocated that this error can be much reduced if the data is fused with measurements from other motion sensors, or even with prior knowledge on the motion. The data fusion is performed by a recently introduced specific extended Kalman filter, the so-called Invariant EKF, and is directly based on the estimated covariance of the ICP. The resulting filter is very natural and is proved to possess strong properties. Experiments with a Kinect sensor and a three-axis gyroscope prove a clear improvement in the accuracy of the localisation, and thus in the accuracy of the built 3D map.
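The core fusion step can be reduced to a scalar Kalman update in which the ICP covariance sets the measurement noise; the Invariant EKF of the paper generalizes this to poses on a Lie group. The numbers below are illustrative:

```python
def kalman_fuse(x_pred, p_pred, z_icp, r_icp):
    """One scalar Kalman update: fuse a motion-sensor prediction
    (x_pred, variance p_pred) with an ICP pose measurement
    (z_icp, variance r_icp taken from the ICP covariance estimate)."""
    gain = p_pred / (p_pred + r_icp)        # trust ICP more when it is precise
    x = x_pred + gain * (z_icp - x_pred)
    p = (1.0 - gain) * p_pred               # fused variance shrinks
    return x, p

# gyro-integrated angle (noisy) fused with a sharper ICP estimate
x, p = kalman_fuse(x_pred=0.30, p_pred=0.04, z_icp=0.25, r_icp=0.01)
```

The fused variance is smaller than either input variance, which is exactly the accuracy gain the paper reports from combining ICP with the gyroscope.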

  17. A robotic assembly procedure using 3D object reconstruction

    DEFF Research Database (Denmark)

    Chrysostomou, Dimitrios; Bitzidou, Malamati; Gasteratos, Antonios

The use of robotic systems for rapid manufacturing and intelligent automation has attracted growing interest in recent years. Specifically, the generation and planning of an object assembly sequence is becoming crucial, as it can significantly reduce production costs and accelerate full-scale product delivery. This work lies within the category of intelligent assembly path planning methods, and an object assembly sequence is planned to incorporate the production of an object's volumetric model by a multi-camera system, its three-dimensional representation with octrees, and its construction implemented by a 5-d.o.f. robot arm and a gripper. The final goal is to plan a path for the robot arm, consisting of predetermined paths and motions for the automatic assembly of ordinary objects.

  18. Extraction of depth information for 3D imaging using pixel aperture technique

    Science.gov (United States)

    Choi, Byoung-Soo; Bae, Myunghan; Kim, Sang-Hwan; Lee, Jimin; Oh, Chang-Woo; Chang, Seunghyuk; Park, JongHo; Lee, Sang-Jin; Shin, Jang-Kyoo

    2017-02-01

Three-dimensional (3D) imaging is an important area that can be applied to face detection, gesture recognition, and 3D reconstruction. In this paper, extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In conventional camera systems using a complementary metal-oxide-semiconductor (CMOS) image sensor, the aperture is located behind the camera lens. In our proposed camera system, however, the aperture, implemented in a metal layer of the CMOS process, is located on the White (W) pixel, i.e., a pixel without any color filter on top. Four types of pixels, Red (R), Green (G), Blue (B), and White (W), were used for the pixel aperture technique. The RGB pixels produce a defocused image with blur, while the W pixels produce a focused image. The focused image is used as a reference image to extract the depth information for 3D imaging and is compared with the defocused image from the RGB pixels. Therefore, depth information can be extracted by comparing the defocused image with the focused image using the depth-from-defocus (DFD) method. The pixel size of the 4-tr APS is 2.8 μm × 2.8 μm, and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. Optical performance of the pixel aperture technique was evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and electrical performance was evaluated using TCAD.
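The DFD step can be sketched with a thin-lens model: predict the blur-circle diameter as a function of object distance, then invert it by matching the measured blur against candidate depths. Focal length, aperture and candidate values below are invented for illustration:

```python
def blur_diameter(s, s_focus, f, aperture):
    """Thin-lens blur-circle diameter for an object at distance s when
    the camera (focal length f, aperture diameter `aperture`) is
    focused at s_focus; all lengths in metres."""
    v = f * s / (s - f)                  # image distance of the object
    v_f = f * s_focus / (s_focus - f)    # sensor plane (focused at s_focus)
    return aperture * abs(v - v_f) / v

def depth_from_defocus(c, s_focus, f, aperture, candidates):
    """Pick the candidate depth whose predicted blur best matches the
    measured blur c (the near/far ambiguity is sidestepped here by
    offering only far-side candidates)."""
    return min(candidates, key=lambda s: abs(blur_diameter(s, s_focus, f, aperture) - c))

c = blur_diameter(3.0, 2.0, 0.05, 0.025)                  # "measured" blur
est = depth_from_defocus(c, 2.0, 0.05, 0.025, [2.5, 3.0, 3.5, 4.0])
```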

  19. Depth Acuity Methodology for Electronic 3D Displays: eJames (eJ)

    Science.gov (United States)

    2016-07-01

A similar measure of depth needs to be matured for electronic 3D displays.

  20. Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering

    Science.gov (United States)

    Ramachandra, Vikas; Nash, James; Atanassov, Kalin; Goma, Sergio

    2013-03-01

A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector, which projects an illumination pattern on the scene (e.g., a mask with vertical stripes), and a camera that captures the illuminated scene. Based on the received patterns, the depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light-pattern image reflected by the scene itself. The processing steps run in real time. This post-processing stage, in the form of depth map enhancement, can be used for better hand gesture recognition, as is illustrated in this paper.

  1. Precision depth measurement of through silicon vias (TSVs) on 3D semiconductor packaging process.

    Science.gov (United States)

    Jin, Jonghan; Kim, Jae Wan; Kang, Chu-Shik; Kim, Jong-Ahn; Lee, Sunghun

    2012-02-27

We have proposed and demonstrated a novel method to measure the depths of through silicon vias (TSVs) at high speed. TSVs are fine, deep holes fabricated in silicon wafers for 3D semiconductors; they are used for electrical connections between vertically stacked wafers. Because the high-aspect-ratio hole of a TSV makes it difficult for light to reach the bottom surface, conventional optical methods using visible light cannot determine the depth. By adopting an optical comb of a femtosecond pulse laser in the infrared range as a light source, the depths of TSVs having an aspect ratio of about 7 were measured. The measurement was performed at high speed based on spectrally resolved interferometry. The proposed method is expected to be an alternative method for the depth inspection of TSVs.
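The principle of spectrally resolved interferometry can be demonstrated numerically: the interference spectrum over wavenumber k modulates as cos(2kd), so a Fourier transform of I(k) peaks at a frequency proportional to the depth d. The wavenumber range below is an arbitrary stand-in for the frequency-comb source:

```python
import numpy as np

def depth_from_spectrum(intensity, k):
    """Depth from a spectrally resolved interferogram: I(k) modulates
    as cos(2*k*d), i.e. at d/pi cycles per unit wavenumber, so the
    peak of the Fourier spectrum over k gives the depth."""
    dk = k[1] - k[0]
    spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
    freqs = np.fft.rfftfreq(len(k), d=dk)    # cycles per unit wavenumber
    return np.pi * freqs[np.argmax(spec)]

# synthetic interferogram for a 50 um deep via (wavenumber range invented)
k = np.linspace(7.0e6, 9.0e6, 4096)          # 1/m
d_true = 50e-6                               # m
intensity = 1.0 + np.cos(2 * k * d_true)
d_est = depth_from_spectrum(intensity, k)
```

The depth resolution of this estimate is set by the spectral bandwidth (here a few hundred nanometres per FFT bin), which is why a broadband comb source is attractive for TSV inspection.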

  2. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Directory of Open Access Journals (Sweden)

    Dennis Edler

Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  3. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    Science.gov (United States)

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  4. ROI-preserving 3D video compression method utilizing depth information

    Science.gov (United States)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and remains compatible with transmission over existing channels, but also provides technical support for advanced 3D video compression to some extent. This paper proposes an ROI-preserving compression scheme to further improve visual quality at a limited bit rate. Based on the connection between the focus of the Human Visual System (HVS) and depth information, regions of interest (ROI) can be selected automatically via depth map processing. The main improvement over common methods is that a mean-shift based segmentation is applied to the depth map before foreground ROI selection, in order to keep the scene segments intact. In addition, the sensitive areas along edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is more complete and more consistent with subjective perception, and that the proposed method preserves the key high-frequency information more effectively while the bit rate is reduced.
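
    The depth-based foreground selection described above can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes a simple 1D mean-shift mode search over depth values for the full mean-shift image segmentation, and it assumes smaller depth values mean "nearer to the camera"; all function names and thresholds are illustrative.

```python
import numpy as np

def mean_shift_modes(depths, bandwidth=10.0, n_iter=20):
    """1D mean-shift: repeatedly move each seed to the mean of the
    depth values inside its bandwidth window, then merge seeds."""
    seeds = np.linspace(depths.min(), depths.max(), 8)
    for _ in range(n_iter):
        shifted = []
        for s in seeds:
            window = depths[np.abs(depths - s) < bandwidth]
            shifted.append(window.mean() if window.size else s)
        seeds = np.array(shifted)
    return np.unique(np.round(seeds, 1))

def foreground_roi(depth_map, bandwidth=10.0):
    """Pick the nearest depth mode as the foreground ROI (assumes
    smaller depth = nearer), keeping the whole segment intact."""
    modes = mean_shift_modes(depth_map.ravel(), bandwidth)
    fg_mode = modes.min()                    # nearest-to-camera mode
    return np.abs(depth_map - fg_mode) < bandwidth

# toy depth map: a near object (depth ~20) on a far background (~80)
depth = np.full((16, 16), 80.0)
depth[4:12, 4:12] = 20.0
mask = foreground_roi(depth)
```

    Non-ROI pixels (where the mask is False) would then receive the stronger spatio-temporal filtering before encoding.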

  5. RECONSTRUCCIÓN DE OBJETO 3D A PARTIR DE IMÁGENES CALIBRADAS 3D OBJECT RECONSTRUCTION WITH CALIBRATED IMAGES

    Directory of Open Access Journals (Sweden)

    Natividad Grandón-Pastén

    2007-08-01

    Full Text Available This paper presents a 3D object reconstruction system based on a collection of calibrated views. The system consists of two main modules. The first performs image processing, whose objective is to determine the depth map for a pair of views; each pair of successive views goes through a sequence of phases: interest-point detection, point correspondence and point reconstruction. The reconstruction step estimates the parameters that describe the motion (the rotation matrix R and the translation vector T) between the two views. This sequence of steps is repeated for all pairs of successive views in the set. The second module builds the 3D model of the object: at each iteration of the first module it accumulates the reconstructed 3D points into a total depth map, and once the complete map is obtained it generates the 3D mesh by applying Delaunay triangulation [28]. The reconstruction results are rendered in a VRML virtual environment to obtain a more realistic visualization of the object.
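
    To accumulate the per-pair reconstructions into one model, the pairwise motions (R, T) must be chained into a common reference frame. A minimal sketch of that bookkeeping, under the assumed convention x_prev = R · x_curr + T (the paper does not state its convention, so this is illustrative):

```python
import numpy as np

def chain_poses(pairwise):
    """Accumulate per-pair motions (R_i, T_i) into global poses.
    Assumed convention: x_prev = R_i @ x_curr + T_i, hence
    Rg_i = Rg_{i-1} @ R_i and Tg_i = Rg_{i-1} @ T_i + Tg_{i-1}."""
    Rg, Tg = np.eye(3), np.zeros(3)
    poses = [(Rg.copy(), Tg.copy())]
    for R, T in pairwise:
        Tg = Rg @ T + Tg
        Rg = Rg @ R
        poses.append((Rg, Tg))
    return poses

# two successive 90-degree rotations about z, each with a unit x-translation
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T = np.array([1., 0., 0.])
poses = chain_poses([(Rz90, T), (Rz90, T)])
Rg, Tg = poses[-1]
x0 = Rg @ np.array([1., 0., 0.]) + Tg   # map a view-2 point into view 0
```

    With all points expressed in the first view's frame, the total point cloud can then be triangulated into the mesh.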

  6. A primitive-based 3D object recognition system

    Science.gov (United States)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  7. 3D Spectroscopy of Herbig-Haro objects

    CERN Document Server

    López, R.; Exter, K. M.; García-Lorenzo, B.; Gómez, G.; Riera, A.; Sánchez, S. F. (Departament d'Astronomia i Meteorologia)

    2005-01-01

    HH 110 and HH 262 are two Herbig-Haro jets with rather peculiar, chaotic morphology. In both cases, no source suitable to power the jet has been detected along the outflow at optical or radio wavelengths. Both previous data and theoretical models suggest that these objects trace an early stage of an HH jet/dense cloud interaction. We present the first results of the integral field spectroscopy observations of these two turbulent jets made with the PMAS spectrophotometer (in the PPAK configuration). New data on the kinematics in several characteristic HH emission lines are shown. In addition, line-ratio maps have been made, suitable for exploring the spatial excitation and density conditions of the jets as a function of their kinematics.

  8. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays come in two presentation types: systems that use special glasses and monitor systems that require none. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. The conventional display can thus show only one screen and cannot enlarge the screen area, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to see a virtual image plane and doubles the screen area. In the developed display unit, we used an image-separating technique based on polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area; meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  9. DiAna, an ImageJ tool for object-based 3D co-localization and distance analysis

    OpenAIRE

    2016-01-01

    International audience; We present a new plugin for ImageJ called DiAna, for Distance Analysis, which comes with a user-friendly interface. DiAna proposes robust and accurate 3D segmentation for object extraction. The plugin performs automated object-based co-localization and distance analysis. DiAna offers an in-depth analysis of co-localization between objects and retrieves 3D measurements including co-localizing volumes and surfaces of contact. It also computes the distribution of distance...

  10. OB3D, a new set of 3D Objects available for research: a web-based study

    Directory of Open Access Journals (Sweden)

    Stéphane eBuffat

    2014-10-01

    Full Text Available Studying object recognition is central to fundamental and clinical research on cognitive functions, but it suffers from the limitations of the available stimulus sets, which cannot always be modified and adapted to meet the specific goals of each study. Here we present OB3D, a new set of 3D scans of real objects available online as ASCII files. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allows simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc.
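
    Since the objects are plain ASCII lists of dots, working with them is straightforward. The sketch below assumes a hypothetical file layout of six whitespace-separated numbers per line (x y z nx ny nz); the actual OB3D layout may differ, and `load_ob3d`/`subsample` are illustrative names. Subsampling is the kind of manipulation the web-based dot-threshold experiment relies on.

```python
import io
import numpy as np

def load_ob3d(stream):
    """Parse an OB3D-style ASCII object: one dot per line,
    'x y z nx ny nz' (layout assumed for illustration)."""
    data = np.loadtxt(stream)
    return data[:, :3], data[:, 3:6]     # points, normals

def subsample(points, normals, n, seed=0):
    """Keep n random dots, e.g. to probe the minimal number of dots
    needed for denomination/categorization."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n, replace=False)
    return points[idx], normals[idx]

text = "0 0 0 0 0 1\n1 0 0 0 0 1\n0 1 0 0 0 1\n1 1 0 0 0 1\n"
pts, nrm = load_ob3d(io.StringIO(text))
p2, n2 = subsample(pts, nrm, 2)
```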

  11. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography.

    Science.gov (United States)

    Hongen Liao; Dohi, Takeyoshi; Nomura, Keisuke

    2011-11-01

    We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG rendering is used to generate the IP/IV elemental images, which can be viewed from any viewpoint within a referential viewing area; the elemental images are reconstructed from the rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen placed at the same referential viewing distance from the lens array as in the image rendering, and photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images, with an image depth of several meters in front of and behind the display, that appear three-dimensional even when viewed from a distance.

  12. Depth propagation for semi-automatic 2D to 3D conversion

    Science.gov (United States)

    Tolstaya, Ekaterina; Pohl, Petr; Rychagov, Michael

    2015-03-01

    In this paper, we present a method for temporal propagation of depth data that is available for so-called key-frames through a video sequence. Our method requires that full-frame depth information be assigned to the key-frames, and it uses the nearest preceding and nearest following key-frames with known depth. Propagating depth information from both sides is essential, as it allows most occlusion problems to be solved correctly. Image matching is based on the coherency sensitive hashing (CSH) method and is done using image pyramids. The results are compared with temporal interpolation based on motion vectors from an optical flow algorithm. The proposed algorithm keeps sharp depth edges of objects even in situations with fast motion or occlusions, and it also handles many situations in which the depth edges do not perfectly correspond to the true edges of objects.
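
    The two-sided propagation idea can be sketched in a few lines. This is a heavy simplification of the paper's method: a per-pixel blend of the two key-frame depths weighted by temporal distance, with a crude occlusion test that falls back to one side when the other side's matching cost is much worse. The cost arrays stand in for the CSH matching scores, which is an assumption.

```python
import numpy as np

def propagate_depth(d_prev, d_next, t, t_prev, t_next, cost_prev, cost_next):
    """Blend key-frame depths by temporal distance; where one side's
    match cost is much worse (likely occlusion), use the other side."""
    w = (t - t_prev) / (t_next - t_prev)       # 0 at prev key-frame, 1 at next
    blended = (1 - w) * d_prev + w * d_next
    out = np.where(cost_prev > 2 * cost_next, d_next, blended)
    out = np.where(cost_next > 2 * cost_prev, d_prev, out)
    return out

d_prev = np.array([10., 10.])
d_next = np.array([20., 20.])
cost_prev = np.array([1., 10.])   # pixel 1 is badly matched in the prev frame
cost_next = np.array([1., 1.])
d_mid = propagate_depth(d_prev, d_next, t=5, t_prev=0, t_next=10,
                        cost_prev=cost_prev, cost_next=cost_next)
```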

  13. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach, which uses a pixelated detector array, single-pixel imaging acquires information by projecting a sampling basis of patterns, such as Hadamard patterns, onto the scene. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
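
    The two principles at work, sampling with an orthogonal Hadamard basis and ranging from pulse round-trip time, can each be shown in a toy numeric sketch (this is the textbook version, not the paper's reconstruction pipeline; the scene and timing values are made up):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# single-pixel measurement: one photodiode value per projected pattern
scene = np.arange(16.0)             # a 4x4 scene flattened to 16 pixels
H = hadamard(16)
measurements = H @ scene
# orthogonality (H @ H.T = n*I) lets us invert by a transpose
recovered = (H.T @ measurements) / 16

# time-of-flight ranging: depth from the pulse round-trip time, d = c*t/2
c = 3e8                             # speed of light, m/s
t_round = 33.3e-9                   # ~33 ns round trip (illustrative)
depth = c * t_round / 2             # ~5 m, the range scale reported above
```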

  14. SEE-THROUGH IMAGING OF LASER-SCANNED 3D CULTURAL HERITAGE OBJECTS BASED ON STOCHASTIC RENDERING OF LARGE-SCALE POINT CLOUDS

    OpenAIRE

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; R. Umegaki; Wang, S; M. Uemura(Hiroshima Astrophysical Science Center, Hiroshima University); Okamoto, A; Koyamada, K.

    2016-01-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminatin...

  15. Multi-layer 3D imaging using a few viewpoint images and depth map

    Science.gov (United States)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images, for displaying a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced into gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are sufficient to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer maintains a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so that motion parallax can be generated at the same time.

  16. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  17. An object-oriented 3D integral data model for digital city and digital mine

    Science.gov (United States)

    Wu, Lixin; Wang, Yanbing; Che, Defu; Xu, Lei; Chen, Xuexi; Jiang, Yun; Shi, Wenzhong

    2005-10-01

    With the rapid development of cities, urban space has extended from the surface to the subsurface. As an important data source for the representation of city spatial information, 3D city spatial data are characterized by multiple objects, heterogeneity and multiple structures. Referring to the ground surface, they can be classified into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems is divided naturally into two different branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes the 3D visualization of buildings and the city terrain, while the latter emphasizes the visualization of geological bodies and structures. It is extremely important for city planning and construction to integrate all city spatial information, including above-surface, surface and subsurface objects, to conduct integral analysis and spatial manipulation; however, neither 3D CGIS nor 3DGM can currently realize such information integration, integral analysis and spatial manipulation. Considering 3D spatial modeling theory and methodologies, an object-oriented 3D integral spatial data model (OO3D-ISDM) is presented and realized in software. The model integrates geographical objects, surface buildings and geological objects seamlessly, with TIN as its coupling interface. This paper introduces the conceptual model of OO3D-ISDM, which is comprised of 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any of the represented objects, no matter surface buildings, terrain or subsurface objects, can be described with the basic geometry element, i.e. the triangle. So the 3D spatial objects, surface buildings, terrain and geological objects can be

  18. 3D depth image analysis for indoor fall detection of elderly people

    Directory of Open Access Journals (Sweden)

    Lei Yang

    2016-02-01

    Full Text Available This paper presents a new method for detecting falls of elderly people in a room environment, based on shape analysis of 3D depth images captured by a Kinect sensor. The depth images are pre-processed with a median filter for both background and target. The silhouette of the moving individual in the depth images is obtained by subtracting background frames. The depth images are converted to a disparity map, from which horizontal and vertical projection histogram statistics are computed. Initial floor plane information is obtained from the V-disparity map, and the floor plane equation is estimated by the least squares method. The shape of the human subject in the depth images is analyzed with a set of moment functions: ellipse coefficients are calculated to determine the orientation of the individual, the centroid of the human body is computed, and the angle between the body and the floor plane is derived. When both the distance from the body centroid to the floor plane and the angle between the body and the floor plane fall below given thresholds, a fall incident is detected. Experiments with different falling directions were performed, and the results show that the proposed method can detect fall incidents effectively.
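
    The geometric core of the detector, a least-squares floor plane plus a body-vs-floor angle test, can be sketched as follows. This is a simplification, not the paper's pipeline: the body's orientation is taken from its principal axis via SVD rather than from ellipse moments, the floor is assumed roughly horizontal so centroid height stands in for point-to-plane distance, and the thresholds are invented.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c; returns unit normal and c."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n), c

def body_floor_angle(body_pts, floor_normal):
    """Angle (degrees) between the body's principal axis and the floor."""
    centered = body_pts - body_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    axis = vt[0]                                  # first principal component
    return np.degrees(np.arcsin(np.clip(abs(axis @ floor_normal), 0, 1)))

def is_fall(body_pts, floor_normal, floor_c, height_thresh=0.4, angle_thresh=30.0):
    # centroid height above a near-horizontal floor (assumption for brevity)
    centroid_height = body_pts.mean(axis=0)[2] - floor_c
    angle = body_floor_angle(body_pts, floor_normal)
    return centroid_height < height_thresh and angle < angle_thresh

floor = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
n, c = fit_plane(floor)
standing = np.c_[np.zeros(10), np.zeros(10), np.linspace(0, 1.7, 10)]
lying = np.c_[np.linspace(0, 1.7, 10), np.zeros(10), np.full(10, 0.1)]
```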

  19. Estimation of calcaneal loading during standing from human footprint depths using 3D scanner

    Science.gov (United States)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Widodo, Achmad; Rahayu, Sri Puji

    2017-01-01

    This research studies the relationship between footprint depth and the load in the calcaneal area when a human stands in an upright posture. Footprint depths are the deformations in the calcaneal area obtained from the z-value extraction of a Boolean operation between an unloaded foot scan acquired with a 3D scanner and a loaded foot scan acquired with a foot plantar scanner. To validate the peak loads estimated from the maximum footprint depth, a force sensing resistor (FSR) sensor was attached over a shoe insole with zero heel height in the calcaneal area. Twenty participants were selected from students of the Mechanical Engineering Department of Diponegoro University, with an average age of 19.5 years and an average body weight of 55.27 kg. The calcaneal loading estimated from footprint depth was found to be relatively accurate: the curves and data distributions are in good agreement with the sensor measurements. Significant differences in the estimated calcaneal loading are mainly caused by the plantar foot positions of the research subjects not being perpendicular to the foot ankle and hallux; in addition, plantar foot positions that bend to the front, back or side affect the resulting footprint depths.

  20. A novel 2D-to-3D conversion technique based on relative height-depth cue

    Science.gov (United States)

    Jung, Yong Ju; Baik, Aron; Kim, Jiwon; Park, Dusik

    2009-02-01

    We present a simple depth estimation framework for 2D-to-3D media conversion. The perceptual depth information in a monocular image is estimated by optimal use of the relative height cue, one of the well-known depth recovery cues, which is very common in photographic images. We propose a novel line tracing method and a depth refinement filter as the core of our depth estimation framework. The line tracing algorithm traces strong edge positions to generate an initial staircase depth map, which is then improved by a recursive depth refinement filter. We present visual results of the depth estimation and stereo image generation.
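
    The relative-height cue assigns greater depth to higher image rows. A toy sketch of an initial staircase depth map in that spirit: per column, take the strongest vertical gradient as the traced edge, mark everything above it as far, and add a global height ramp. This is a stand-in for the paper's line tracing and recursive refinement filter, whose details are not given here.

```python
import numpy as np

def staircase_depth(gray):
    """Depth from relative height: rows above the strongest horizontal
    edge in each column are 'far', plus a top-of-image-is-farther ramp."""
    g = np.abs(np.diff(gray, axis=0))          # vertical gradient magnitude
    edge_row = g.argmax(axis=0)                # strongest edge per column
    h, w = gray.shape
    rows = np.arange(h)[:, None]
    depth = np.where(rows < edge_row[None, :], 1.0, 0.0)  # staircase step
    depth += np.linspace(1.0, 0.0, h)[:, None]            # global height ramp
    return depth / depth.max()                 # normalize to [0, 1]

# toy image: bright "sky" over a dark "ground", horizon between rows 3 and 4
gray = np.vstack([np.full((4, 6), 0.9), np.full((4, 6), 0.1)])
depth = staircase_depth(gray)
```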

  1. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    Science.gov (United States)

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  2. Gravito-Turbulent Disks in 3D: Turbulent Velocities vs. Depth

    CERN Document Server

    Shi, Ji-Ming

    2014-01-01

    Characterizing turbulence in protoplanetary disks is crucial for understanding how they accrete and spawn planets. Recent measurements of spectral line broadening promise to diagnose turbulence, with different lines probing different depths. We use 3D local hydrodynamic simulations of cooling, self-gravitating disks to resolve how motions driven by "gravito-turbulence" vary with height. We find that gravito-turbulence is practically as vigorous at altitude as at depth: even though gas at altitude is much too rarefied to be itself self-gravitating, it is strongly forced by self-gravitating overdensities at the midplane. The long-range nature of gravity means that turbulent velocities are nearly uniform vertically, increasing by just a factor of 2 from midplane to surface, even as the density ranges over nearly three orders of magnitude. The insensitivity of gravito-turbulence to height contrasts with the behavior of disks afflicted by the magnetorotational instability (MRI); in the latter case, non-circular ve...

  3. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  4. Constructing Isosurfaces from 3D Data Sets Taking Account of Depth Sorting of Polyhedra

    Institute of Scientific and Technical Information of China (English)

    周勇; 唐泽圣

    1994-01-01

    Creating and rendering intermediate geometric primitives is one of the approaches to visualize data sets in 3D space. Several algorithms have been developed to construct isosurfaces from uniformly distributed 3D data sets. These algorithms assume that the function value varies linearly along the edges of each cell, but for irregular 3D data sets this assumption is inapplicable. Moreover, the depth sorting of cells is more complicated for irregular data sets, and it is indispensable for generating isosurface images or semitransparent isosurface images if the Z-buffer method is not adopted. In this paper, isosurface models based on the assumption that the function value has a nonlinear distribution within a tetrahedron are proposed. A depth sorting algorithm and data structures are developed for irregular data sets in which cells may be subdivided into tetrahedra. Implementation issues of this algorithm are discussed and experimental results are shown to illustrate the potential of this technique.
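
    For back-to-front compositing of semitransparent isosurfaces without a Z-buffer, the tetrahedral cells must be rendered in depth order. A minimal sketch sorts tetrahedra by the view-axis depth of their centroids; note this is a common approximation and not the paper's algorithm, since centroid sorting can fail for cyclically overlapping cells where an exact visibility ordering is needed.

```python
import numpy as np

def depth_sort_tetrahedra(vertices, tets, view_dir):
    """Return tetrahedra sorted back-to-front along view_dir using
    centroid depth (approximate; exact orderings are more involved)."""
    centroids = vertices[tets].mean(axis=1)    # (n_tets, 3)
    depth = centroids @ view_dir               # projection onto view axis
    return tets[np.argsort(depth)[::-1]]       # farthest first

vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                     [0., 0., 5.], [1., 0., 5.], [0., 1., 5.], [0., 0., 6.]])
tets = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])  # near tet, far tet
ordered = depth_sort_tetrahedra(vertices, tets, np.array([0., 0., 1.]))
```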

  5. Neural Network Based Reconstruction of a 3D Object from a 2D Wireframe

    CERN Document Server

    Johnson, Kyle; Lipson, Hod

    2010-01-01

    We propose a new approach for constructing a 3D representation from a 2D wireframe drawing. A drawing is simply a parallel projection of a 3D object onto a 2D surface; humans are able to recreate mental 3D models from 2D representations very easily, yet the process is very difficult to emulate computationally. We hypothesize that our ability to perform this construction relies on the angles in the 2D scene, among other geometric properties. Being able to reproduce this reconstruction process automatically would allow for efficient and robust 3D sketch interfaces. Our research focuses on the relationship between 2D geometry observable in the sketch and 3D geometry derived from a potential 3D construction. We present a fully automated system that constructs 3D representations from 2D wireframes using a neural network in conjunction with a genetic search algorithm.

  6. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    Science.gov (United States)

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called the spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient, since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public multiclass depth object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements over state-of-the-art approaches. Moreover, the effectiveness of the proposal is also demonstrated for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human-computer interaction scenarios.

  7. Optimizing penetration depth, contrast, and resolution in 3D dermatologic OCT

    Science.gov (United States)

    Aneesh, Alex; Považay, Boris; Hofer, Bernd; Zhang, Edward Z.; Kendall, Catherine; Laufer, Jan; Popov, Sergei; Glittenberg, Carl; Binder, Susanne; Stone, Nicholas; Beard, Paul C.; Drexler, Wolfgang

    2010-02-01

    High-speed, three-dimensional optical coherence tomography (3D OCT) at 800 nm, 1060 nm and 1300 nm, with approximately 4 μm, 7 μm and 6 μm axial and less than 15 μm transverse resolution, is demonstrated to investigate the optimum wavelength region for in vivo human skin imaging in terms of contrast, dynamic range and penetration depth. 3D OCT at 1300 nm provides deeper penetration, while images obtained at 800 nm are better in terms of contrast and speckle noise; the 1060 nm region is a compromise between the two in terms of penetration depth and image contrast. Optimizing sensitivity, penetration and contrast enabled unprecedented visualization of micro-structural morphology underneath the glabrous skin, hairy skin and in scar tissue. The higher contrast obtained at 800 nm appears to be critical in the in vitro tumor study. A multimodal approach combining OCT and PA helped to obtain morphological as well as vascular information from deeper regions of the skin.

  8. IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER

    Directory of Open Access Journals (Sweden)

    S. Kanai

    2015-05-01

    Full Text Available An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the most promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and although a few 3D MCL approaches have recently been proposed, their localization accuracy and efficiency remain at an unsatisfactory level (errors of a few hundred millimetres at up to a few FPS) or have not been fully verified against precise ground truth. The purpose of this study is therefore to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. First, a terrestrial laser scanner is used to create a precise 3D mesh model as the environment map, and a professional-level depth camera is installed as the outer sensor. GPU scene simulation is also introduced to speed up the prediction phase of MCL. For further improvement, GPGPU programming is implemented to accelerate the likelihood estimation phase, and anisotropic particle propagation based on the observations from an inertial sensor is introduced into MCL. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertial sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that our proposed 3D MCL method outperforms the previous one in both accuracy and efficiency.
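
    One MCL iteration (predict, weight, resample) can be sketched in a few lines. This is the generic textbook filter, not the paper's GPGPU implementation: poses are reduced to 2D (x, y, theta), a scalar depth reading stands in for the depth camera, the motion noise is isotropic (the paper makes it anisotropic using inertial readings), and all names are illustrative.

```python
import numpy as np

def mcl_step(particles, control, observe, z_meas, sigma=0.05, rng=None):
    """One Monte Carlo Localization iteration: noisy motion prediction,
    Gaussian depth-observation likelihood weighting, systematic resampling."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # predict: apply the control input plus motion noise
    particles = particles + control + rng.normal(0.0, sigma, particles.shape)
    # update: likelihood of the measured depth given each particle pose
    z_pred = np.array([observe(p) for p in particles])
    w = np.exp(-0.5 * ((z_pred - z_meas) / sigma) ** 2)
    w = w / w.sum()
    # systematic resampling
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
    return particles[idx]

# toy world: robot moves to x=1, a wall sits at x=2, the sensor reads depth 1.0
rng = np.random.default_rng(42)
particles = np.zeros((300, 3))                 # (x, y, theta), all at origin
observe = lambda p: 2.0 - p[0]                 # simulated depth to the wall
particles = mcl_step(particles, np.array([1.0, 0.0, 0.0]), observe, 1.0, rng=rng)
```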

  9. 3D video analysis of the novel object recognition test in rats.

    Science.gov (United States)

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  10. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    Science.gov (United States)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various depth cues, such as accommodation, binocular parallax, convergence and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays rely on binocular disparity alone among the 3D depth cues. However, this method causes visual fatigue and headaches due to the accommodation conflict and distorted depth perception. Holographic and volumetric displays are therefore expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels, which occupy physical volume; however, large amounts of data are required to represent depth information with voxels. To encode 3D information simply, a compact depth-fused 3D (DFD) display is introduced, which can create a polarization-distributed depth map (PDDM) image carrying both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is demonstrated using PDDM images controlled by a polarization controller. To obtain the PDDM image, the polarization state of light passing through the spatial light modulator (SLM) was analyzed with Stokes parameters as a function of gray level. Based on this analysis, the polarization controller was designed to convert PDDM images into sectioned depth images. After synchronizing the PDDM images with the active screens, the reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.
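The Stokes-parameter analysis of the SLM output mentioned above can be illustrated with the textbook six-intensity formulation; the measurement values here are idealized examples, not data from the paper:

```python
import numpy as np

def stokes_from_intensities(I0, I90, I45, I135, Ircp, Ilcp):
    # Standard Stokes parameters from six analyzer intensity measurements.
    S0 = I0 + I90      # total intensity
    S1 = I0 - I90      # horizontal vs. vertical linear polarization
    S2 = I45 - I135    # +45 vs. -45 degree linear polarization
    S3 = Ircp - Ilcp   # right vs. left circular polarization
    return np.array([S0, S1, S2, S3])

# Example: ideal horizontally polarized light of unit intensity.
S = stokes_from_intensities(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
degree_of_polarization = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]
print(S, degree_of_polarization)
```

In the paper's setting, these parameters would be tabulated per gray level of the SLM to design the polarization controller.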

  11. Towards Reliable Stereoscopic 3D Quality Evaluation: Subjective Assessment and Objective Metrics

    OpenAIRE

    Xing, Liyuan

    2013-01-01

Stereoscopic three-dimensional (3D) services have recently become more popular amid the promise of providing an immersive quality of experience (QoE) to end-users with the help of binocular depth. However, various artifacts arising in the stereoscopic 3D processing chain might cause discomfort and severely degrade the QoE. Unfortunately, although the causes and nature of these artifacts are already clearly understood, it is impossible to eliminate them under the limitations of current stereoscopic...

  12. An efficient 3D traveltime calculation using coarse-grid mesh for shallow-depth source

    Science.gov (United States)

    Son, Woohyun; Pyun, Sukjoon; Lee, Ho-Young; Koo, Nam-Hyung; Shin, Changsoo

    2016-10-01

3D Kirchhoff pre-stack depth migration requires an efficient algorithm for computing first-arrival traveltimes. In this paper, we exploit a wave-equation-based traveltime calculation algorithm called the suppressed wave equation estimation of traveltime (SWEET), together with the equivalent source distribution (ESD) algorithm. The motivation for using the SWEET algorithm is to solve the Laplace-domain wave equation with coarse grid spacing to calculate first-arrival traveltimes. However, if the real source is located at a shallow depth close to the free surface, the wavefield cannot be calculated accurately with coarse grid spacing, so an additional algorithm is needed to correctly simulate a shallow source even on a coarse grid. The ESD algorithm defines a set of distributed nodal sources that approximate a point source at an inter-nodal location in a velocity model with large grid spacing. Thanks to the ESD algorithm, we can efficiently calculate the first-arrival traveltimes of waves emitted from a shallow source point even when solving the Laplace-domain wave equation on a coarse-grid mesh. The proposed algorithm is applied to the SEG/EAGE 3D salt model. The results show that the combination of the SWEET and ESD algorithms can be used successfully for traveltime calculation with a shallow-depth source. We also confirmed that our algorithm on a coarse-grid mesh requires less computational time than the conventional SWEET algorithm on a relatively fine-grid mesh.
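The SWEET solver itself is specialized, but the role a first-arrival traveltime table plays in Kirchhoff migration can be sketched with a generic computation on a coarse grid. Below, Dijkstra's algorithm over an 8-connected grid stands in for the wave-equation-based solver (a rough, hypothetical substitute, not the paper's method):

```python
import heapq
import numpy as np

def first_arrival_times(slowness, src):
    # Dijkstra over an 8-connected grid: a crude stand-in for an
    # eikonal or Laplace-domain solver, enough to show how first-arrival
    # traveltimes are tabulated for Kirchhoff migration (grid spacing 1).
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        tt, (i, j) = heapq.heappop(pq)
        if tt > t[i, j]:
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    step = np.hypot(di, dj)
                    cand = tt + step * 0.5 * (slowness[i, j] + slowness[ni, nj])
                    if cand < t[ni, nj]:
                        t[ni, nj] = cand
                        heapq.heappush(pq, (cand, (ni, nj)))
    return t

# Homogeneous model: velocity 2 km/s -> slowness 0.5 s/km, 1 km spacing.
times = first_arrival_times(np.full((21, 21), 0.5), (0, 0))
print(times[0, 10])  # a 10 km straight path at 2 km/s takes 5 s
```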

  13. Imaging the Juan de Fuca subduction plate using 3D Kirchhoff Prestack Depth Migration

    Science.gov (United States)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

We propose a new receiver-function migration method to image the subducting plate in the western US, utilizing USArray and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and post-stack depth-mapping approaches implement ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well for mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat-layer assumption with a 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With the traveltime information stored, our Kirchhoff migration distributes the amplitude of the receiver function at a given time over all possible conversion points (i.e., along a semi-ellipse) on the output migrated depth section. Migrated reflectors appear where the semicircles interfere constructively, whereas destructive interference cancels out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives results similar to CCP, but without spurious multiples, as this energy is stacked destructively and cancels out. For 45-degree and 60-degree dipping discontinuities, it also performs better in imaging the correct boundary position and dip angle. This is especially useful in the western US, beneath which the Juan de Fuca plate has subducted to ~450 km with a dip angle that may exceed 50 degrees.
While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without
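The Kirchhoff smearing step described in this record, in which each receiver-function sample is distributed over every conversion point with a matching traveltime, can be sketched as follows. The geometry, grid, and spike traces are toy values chosen so that a point scatterer is recoverable:

```python
import numpy as np

def kirchhoff_migrate(traces, traveltime, dt):
    # traveltime[r, iz, ix]: predicted delay from conversion point (iz, ix)
    # to receiver r. The amplitude at the matching sample of each trace is
    # smeared over every point that could have produced it; the image
    # builds up where the isochrons constructively interfere.
    image = np.zeros(traveltime.shape[1:])
    for r in range(traveltime.shape[0]):
        idx = np.rint(traveltime[r] / dt).astype(int)
        valid = (idx >= 0) & (idx < traces.shape[1])
        image[valid] += traces[r, idx[valid]]
    return image

# Tiny synthetic: 5 receivers above a point scatterer at (iz=8, ix=10).
dt, nr, nz, nx = 0.1, 5, 16, 21
rx = np.linspace(0, 20, nr)
zz, xx = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
tt = np.stack([np.hypot(zz, xx - x0) * 0.1 for x0 in rx])  # 0.1 s per cell
traces = np.zeros((nr, 400))
for r in range(nr):
    traces[r, int(round(tt[r, 8, 10] / dt))] = 1.0  # spike at scatterer time

image = kirchhoff_migrate(traces, tt, dt)
print(np.unravel_index(image.argmax(), image.shape))
```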

  14. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have sought to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention motivated involuntarily by the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency: more attentional resources are allocated to affective pictures with high valence and arousal levels than to ordinary visual stimuli such as black-and-white oscillating squares and checkerboards. Among the representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture the user's intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  15. Phase unwrapping for large depth-of-field 3D laser holographic interferometry measurement of laterally discontinuous surfaces

    Science.gov (United States)

    Huang, Zhenhua; Shih, Albert J.; Ni, Jun

    2006-11-01

A phase unwrapping method is developed to mathematically increase the depth-of-field for 3D optical measurement of objects with laterally discontinuous surfaces, which contain disconnected high-aspect-ratio regions. The method is applied to precision measurement by laser holographic interferometry. Phase-wrap identification at boundary pixels, masking and recovery, dynamic segmentation, and phase adjustment are developed to overcome the divergence problem in phase unwrapping of laterally discontinuous surfaces. An automotive automatic transmission valve body is used as an example to demonstrate the developed method. Experimental results demonstrate that the proposed methods can efficiently unwrap the phase to increase the depth-of-field for laterally discontinuous surfaces. The effects of segment size and the width of overlapped regions on computational efficiency are also investigated.
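The basic wrap-removal operation underlying such methods can be shown on a synthetic 1D fringe; the masking, segmentation, and phase-adjustment machinery needed for laterally discontinuous surfaces is beyond this sketch:

```python
import numpy as np

# Synthetic ramp whose true phase exceeds 2*pi: the wrapped version
# jumps by 2*pi wherever the optical path crosses a fringe boundary.
true_phase = np.linspace(0.0, 6 * np.pi, 500)
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]

# Classic line-by-line unwrapping: add or subtract 2*pi at each jump
# larger than pi. This diverges at lateral discontinuities, which is
# exactly the failure mode the paper's segmentation steps address.
unwrapped = np.unwrap(wrapped)

print(np.max(np.abs(unwrapped - true_phase)))
```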

  16. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    Science.gov (United States)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

A scheme for simultaneously measuring the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. The scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge simultaneously to the objects' contour edges in the two CCD images to perform stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation, and the true length of each spatial contour is computed as the boundary perimeter of the corresponding 3D object. An experiment measuring the bent-surface perimeters of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.
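The final step, measuring a reconstructed spatial contour's length via cubic B-spline interpolation, can be sketched as below. A closed uniform cubic B-spline is assumed (the record does not specify the parameterization), and the result is checked against the known perimeter of a unit circle:

```python
import numpy as np

def bspline_perimeter(ctrl, samples_per_seg=50):
    # Closed uniform cubic B-spline over the reconstructed contour points;
    # perimeter = arc length of a dense polyline sampled along the curve.
    n = len(ctrl)
    t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
    # Cubic B-spline basis functions on [0, 1).
    b = np.stack([(1 - t) ** 3,
                  3 * t ** 3 - 6 * t ** 2 + 4,
                  -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                  t ** 3]) / 6.0
    pts = []
    for i in range(n):
        seg = np.stack([ctrl[(i - 1) % n], ctrl[i],
                        ctrl[(i + 1) % n], ctrl[(i + 2) % n]])
        pts.append(b.T @ seg)
    pts = np.vstack(pts + [pts[0][:1]])   # close the loop
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

# Sanity check: 40 contour points on a unit circle give a perimeter
# close to 2*pi (slightly less, since a B-spline approximates its
# control polygon rather than interpolating it).
a = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.column_stack([np.cos(a), np.sin(a), np.zeros_like(a)])
print(bspline_perimeter(circle))
```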

  17. 3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone

    Science.gov (United States)

    Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.

    2016-12-01

We have developed a highly computationally efficient algorithm for applying 3D Kirchhoff depth migration to teleseismic receiver function data. By combining the primary PS arrival with later multiple arrivals, we are able to reveal better knowledge of the Earth's discontinuity structure (transmission and reflection). This method is highly useful compared with the traditional CCP method when dipping structures, such as a subducting slab, are encountered during imaging. We apply our method to receiver function data from the regional Cascadia subduction zone and obtain a high-resolution 3D migration image for both primaries and multiples. The image shows a clear slab weak zone (slab hole) in the upper plate boundary under northern California and the whole of Oregon. Compared with previous 2D receiver function images from 2D arrays (CAFE and CASC93), the position of the weak zone shows interesting coherency. The weak zone is also coherent with missing local seismicity and rising heat, which leads us to consider the oceanic plate structure and the hydraulic fluid processes at work during the formation and migration of the subducting slab.

  18. Plasma penetration depth and mechanical properties of atmospheric plasma-treated 3D aramid woven composites

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Yao, L.; Xue, J.; Zhao, D.; Lan, Y.; Qian, X. [Key Laboratory of Textile Science and Technology, Donghua University, Ministry of Education (China); Department of Textile Materials Science and Product Design, College of Textiles, Donghua University, Shanghai 201620 (China); Wang, C.X. [Key Laboratory of Textile Science and Technology, Donghua University, Ministry of Education (China); Department of Textile Materials Science and Product Design, College of Textiles, Donghua University, Shanghai 201620 (China); College of Textiles and Clothing, Yancheng Institute of Technology, Jiangsu 224003 (China); Qiu, Y. [Key Laboratory of Textile Science and Technology, Donghua University, Ministry of Education (China); Department of Textile Materials Science and Product Design, College of Textiles, Donghua University, Shanghai 201620 (China)], E-mail: ypqiu@dhu.edu.cn

    2008-12-30

Three-dimensional aramid woven fabrics were treated with atmospheric pressure plasmas on one side or both sides to determine the plasma penetration depth into the 3D fabrics and the influence on the final composite mechanical properties. The properties of fibers from different layers of the single-side-treated fabrics, including surface morphology, chemical composition, wettability and adhesion, were investigated using scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), contact angle measurement and microbond tests. Meanwhile, the flexural properties of composites reinforced with untreated fabrics and fabrics treated on both sides were compared using three-point bending tests. The results showed that fibers from the outermost surface layer of the fabric had significantly improved surface roughness, chemical bonding, wettability and adhesion after plasma treatment; the treatment effect gradually diminished for fibers in the inner layers, and in the third layer the fiber properties remained approximately the same as those of the control. In addition, three-point bending tests indicated that the 3D aramid composite showed an increase of 11% in flexural strength and 12% in flexural modulus after plasma treatment. These results indicate that composite mechanical properties can be improved by direct fabric treatment with plasmas, instead of fiber treatment, if the fabric is less than four layers thick.

  19. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    Science.gov (United States)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

A new method of synthesizing computer-generated holograms (CGHs) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, the spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is extracted in a double-circle or four-circle shape to enhance the utilization of the projection spectra. The spectral information from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension; the hologram thus includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD, and by illuminating the LCD with a reference beam from a laser source, the amplitude and phase information included in the CGH is reconstructed through diffraction of the light modulated by the LCD.
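The conjugate-symmetric extension mentioned above can be illustrated directly: averaging a spectrum with its conjugate-reflected copy yields a Hermitian spectrum whose inverse FFT has no imaginary part, so the result can be written to an amplitude-type modulator. The random spectrum below is only a stand-in for the projection-derived spectra in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-in for the spectrum gathered from projection images.
S = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))

def hermitian_extend(F):
    # Average F with its conjugate-reflected copy so that F[-k] = conj(F[k])
    # (indices taken modulo the array size). The inverse FFT of a Hermitian
    # spectrum is purely real, which is what lets the hologram drive an
    # amplitude-type LCD spatial light modulator.
    rows = (-np.arange(F.shape[0])) % F.shape[0]
    cols = (-np.arange(F.shape[1])) % F.shape[1]
    return 0.5 * (F + np.conj(F[np.ix_(rows, cols)]))

hologram = np.fft.ifft2(hermitian_extend(S)).real
residual = np.abs(np.fft.ifft2(hermitian_extend(S)).imag).max()
print(residual)   # effectively zero: the hologram is real-valued
```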

  20. Novel 3-D Object Recognition Methodology Employing a Curvature-Based Histogram

    Directory of Open Access Journals (Sweden)

    Liang-Chia Chen

    2013-07-01

Full Text Available In this paper, a new object recognition algorithm employing a curvature-based histogram is presented. Recognition of three-dimensional (3-D) objects from range images remains one of the most challenging problems in 3-D computer vision due to noisy and cluttered scene characteristics. The key breakthroughs for this problem lie mainly in defining unique features that distinguish the similarity among various 3-D objects. In our approach, an object detection scheme is developed to identify targets through an automated search of the range images, using an initial object segmentation process to subdivide all possible objects in the scene and then applying an object recognition process based on geometric constraints and a curvature-based histogram. The developed method has been verified through experimental tests confirming its feasibility.
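A minimal version of a curvature-based histogram descriptor might look like the following, assuming mean curvature estimated from a range image by finite differences (the paper's exact feature definition may differ):

```python
import numpy as np

def curvature_histogram(z, dx, bins=16, value_range=(-0.5, 0.5)):
    # Mean curvature of a range image z(x, y) from finite differences,
    # binned into a normalized histogram that serves as the descriptor.
    zy, zx = np.gradient(z, dx)
    zxy, zxx = np.gradient(zx, dx)
    zyy, _ = np.gradient(zy, dx)
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) \
        / (2 * (1 + zx**2 + zy**2) ** 1.5)
    hist, _ = np.histogram(H, bins=bins, range=value_range)
    return hist / hist.sum()

x = np.linspace(-1.0, 1.0, 64)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
plane = 0.2 * X + 0.1 * Y                  # zero curvature everywhere
dome = np.sqrt(9.0 - X**2 - Y**2)          # spherical cap, |H| = 1/3

# Candidate objects are compared via the distance between histograms.
d = np.linalg.norm(curvature_histogram(plane, dx) - curvature_histogram(dome, dx))
print(d)
```

Because the histogram discards spatial layout, it is cheap to match yet still separates flat from curved surfaces, which is the intuition behind curvature-based recognition.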

  1. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, the BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.

  2. From reality to virtual reality: 3D object imaging techniques and algorithms

    Science.gov (United States)

    Sitnik, Robert; Kujawinska, Malgorzata

    2003-10-01

A general concept of a 3D data processing path is presented, which enables information about the shape and texture of real 3D objects to be introduced into complex virtual worlds. Minimal requirements for the input data, which in the most common case come in the form of a cloud of (x,y,z) coordinate points from 3D shape measurement systems, are specified, with special emphasis on the implementation of multidirectional data and texture information. Algorithms for data pre-processing, such as filtering, smoothing and simplification, are introduced, along with techniques for merging directional data into a single virtual object. An algorithm for triangulating the merged cloud of points to form a virtual object accepted by multimedia environments is presented, as are various techniques of texture creation and mapping. All steps are illustrated by the measurement and processing of a representative 3D object for art applications.

  3. A 3D scanner prototype utilizing object profile imaging with a line laser and Octave software

    Science.gov (United States)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

A three-dimensional scanner, or 3D scanner, is a device for reconstructing a real object in digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, where current 3D scanner devices are advanced versions with very expensive prices. This study is essentially a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (pivot). Scanning is performed by imaging the object profile with the line laser, which is then captured by the camera and processed by a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that for one full turn multiple images covering all sides of the object are finally obtained. The profiles of all the images are then extracted in order to obtain the digital object dimensions, which are calibrated against a length standard called a gauge block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstructed scanned object against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
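The core reconstruction step, converting per-angle laser profiles into a cylindrical-coordinate point cloud, can be sketched as follows (the prototype uses Octave; Python is used here for consistency with the other sketches, and the dimensions are invented):

```python
import numpy as np

def profiles_to_pointcloud(radii, angles_deg):
    # radii[k, i]: distance from the rotation axis at height index i,
    # extracted from the laser line in the image taken at turntable
    # angle angles_deg[k]. Each profile becomes a vertical strip of
    # points in Cartesian coordinates.
    points = []
    for r_col, a in zip(radii, np.radians(angles_deg)):
        z = np.arange(len(r_col))
        points.append(np.column_stack([r_col * np.cos(a),
                                       r_col * np.sin(a), z]))
    return np.vstack(points)

# Hypothetical scan of a cylinder of radius 20 mm, 10 degrees per step,
# 50 height samples per profile.
angles = np.arange(0, 360, 10)
radii = np.full((len(angles), 50), 20.0)
cloud = profiles_to_pointcloud(radii, angles)
print(cloud.shape)
```

Calibration against the gauge block would then amount to scaling the pixel-derived radii and heights into millimetres.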

  4. Superquadric Similarity Measure with Spherical Harmonics in 3D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    XINGWeiwei; LIUWeibin; YUANBaozong

    2005-01-01

This paper proposes a novel approach to superquadric similarity measurement in 3D object recognition. The 3D objects are represented by a composite volumetric representation of Superquadric (SQ)-based geons, which are powerful volumetric models well suited to 3D recognition. The proposed approach proceeds in three stages: first, a novel sampling algorithm is designed for searching Chebyshev nodes on the superquadric surface to construct the discrete spherical function representing the superquadric 3D shape; second, the fast Spherical Harmonic Transform is performed on the discrete spherical function to obtain a rotation-invariant descriptor of the superquadric; third, the similarity of superquadrics is measured by computing the L2 difference between two such descriptors. In addition, an integrated processing framework is presented for 3D object recognition with SQ-based geons from real 3D data, which implements the proposed approach for shape similarity measurement between SQ-based geons. Evaluation experiments demonstrate that the proposed approach is very efficient and robust for similarity measurement of superquadric models. The research lays a foundation for developing SQ-based 3D object recognition systems.
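A full spherical-harmonic transform is involved in the paper; as a simplified, numpy-only stand-in, the sketch below builds a descriptor from per-ring azimuthal FFT magnitudes, which is invariant only to rotations about the polar axis (a weaker property than the full rotation invariance described above):

```python
import numpy as np

def ring_fft_descriptor(f):
    # f[i, j]: a spherical function sampled on a (polar, azimuth) grid.
    # Per-ring FFT magnitudes are invariant to rotations about the polar
    # axis -- a simplified analogue of the per-degree energies of
    # spherical-harmonic coefficients used as the paper's descriptor.
    return np.abs(np.fft.fft(f, axis=1))

rng = np.random.default_rng(2)
samples = rng.random((32, 64))          # hypothetical SQ surface samples
rotated = np.roll(samples, 17, axis=1)  # the same shape, rotated in azimuth

d1 = ring_fft_descriptor(samples)
d2 = ring_fft_descriptor(rotated)
# The L2 difference between descriptors serves as the similarity measure.
print(np.linalg.norm(d1 - d2))
```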

  5. Rotational Subgroup Voting and Pose Clustering for Robust 3D Object Recognition

    DEFF Research Database (Denmark)

    Buch, Anders Glent; Kiforenko, Lilita; Kraft, Dirk

    2017-01-01

    It is possible to associate a highly constrained subset of relative 6 DoF poses between two 3D shapes, as long as the local surface orientation, the normal vector, is available at every surface point. Local shape features can be used to find putative point correspondences between the models due...... estimation. We then apply our method to four state of the art data sets for 3D object recognition that contain occluded and cluttered scenes. Our method achieves perfect recall on two LIDAR data sets and outperforms competing methods on two RGB-D data sets, thus setting a new standard for general 3D object...

  6. Intuitiveness 3D objects Interaction in Augmented Reality Using S-PI Algorithm

    Directory of Open Access Journals (Sweden)

    Ajune Wanis Ismail

    2013-07-01

Full Text Available A number of researchers have developed interaction techniques for Augmented Reality (AR) applications. Some have proposed new techniques for user interaction with different types of interfaces, which hold great promise for intuitive, natural user interaction with 3D data. This paper explores 3D object manipulation performed with the single-point interaction (S-PI) technique in an AR environment. The new interaction algorithm, the S-PI technique, uses point-based intersection designed to detect interaction behaviors such as translation, rotation and cloning, and to support intuitive 3D object handling. The S-PI technique is combined with marker-based tracking in order to improve the trade-off between accuracy and speed when manipulating 3D objects in real time. The method must be robust to ensure that the real and virtual elements can be combined correctly relative to the user's viewpoint and to reduce system lag.

  7. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    Science.gov (United States)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

In recent years, there has been demand for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method for recording 3D objects is introduced. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and gonio-photometric properties.
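The Phong part of the reflection analysis can be illustrated with the standard diffuse-plus-specular intensity model; the coefficients below are arbitrary examples, not fitted gonio-photometric values:

```python
import numpy as np

def phong_intensity(normal, light, view, kd=0.7, ks=0.3, shininess=20):
    # Dichromatic-style split: a diffuse (body) term plus a Phong
    # specular (interface) lobe, for unit normal/light/view directions.
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    diffuse = max(n @ l, 0.0)
    r = 2.0 * (n @ l) * n - l          # mirror reflection of the light
    specular = max(r @ v, 0.0) ** shininess
    return kd * diffuse + ks * specular

# Gloss peaks in the mirror direction: with light at 45 degrees, only
# the view at the matching specular angle sees the highlight.
n = np.array([0.0, 0.0, 1.0])
l = np.array([1.0, 0.0, 1.0])
v_spec = np.array([-1.0, 0.0, 1.0])
v_off = np.array([0.0, 1.0, 1.0])
print(phong_intensity(n, l, v_spec), phong_intensity(n, l, v_off))
```

Fitting kd, ks and the shininess exponent per pixel from the multi-angle image sequence is, in essence, what extracting the gonio-photometric property amounts to.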

  8. Comparison of active SIFT-based 3D object recognition algorithms

    CSIR Research Space (South Africa)

    Keaikitse, M

    2013-09-01

Full Text Available Active object recognition aims to manipulate the sensor and its parameters, and to interact with the environment and/or the object of interest, in order to gather more information to complete the 3D object recognition task as quickly and accurately...

  9. 3D shape measurement of macroscopic objects in digital off-axis holography using structured illumination.

    Science.gov (United States)

    Grosse, Marcus; Buehl, Johannes; Babovsky, Holger; Kiessling, Armin; Kowarschik, Richard

    2010-04-15

    We propose what we believe to be a novel approach to measure the 3D shape of arbitrary diffuse-reflecting macroscopic objects in holographic setups. Using a standard holographic setup, a second CCD and a liquid-crystal-on-silicon spatial light modulator to modulate the object wave, the method yields a dense 3D point cloud of an object or a scene. The calibration process is presented, and first quantitative results of a shape measurement are shown and discussed. Furthermore, a shape measurement of a complex object is displayed to demonstrate its universal use.

  10. The role of the foreshortening cue in the perception of 3D object slant.

    Science.gov (United States)

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (a change in the ratio of width to length) can induce slant perception for single shapes in both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone-isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performance similar to that obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and for the cortical organization used in 3D object perception. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. 3D integrated modeling approach to geo-engineering objects of hydraulic and hydroelectric projects

    Institute of Scientific and Technical Information of China (English)

    ZHONG DengHua; LI MingChao; LIU Jie

    2007-01-01

To address the 3D modeling and analysis problems of hydraulic and hydroelectric engineering geology, a complete solution scheme is presented. Its first basis is the NURBS-TIN-BRep hybrid data structure. Then, following the classification principle of object-oriented techniques, different 3D models of geological and engineering objects were realized on this data structure, including terrain, strata, fault, and limit classes, with selectable modeling mechanisms. Finally, the 3D integrated model was established by Boolean operations between the 3D geological and engineering objects. On the basis of the 3D model, a series of applied analysis techniques for hydraulic and hydroelectric engineering geology is illustrated, including visual modeling of rock-mass quality classification, arbitrary slicing analysis of the 3D model, and geological analysis of the dam and underground engineering. These techniques provide powerful theoretical principles and technical measures for analyzing the geological problems encountered in hydraulic and hydroelectric engineering under complex geological conditions.


  13. A standardized set of 3-D objects for virtual reality research and applications.

    Science.gov (United States)

    Peeters, David

    2017-06-23

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  14. Liquid Phase 3D Printing for Quickly Manufacturing Metal Objects with Low Melting Point Alloy Ink

    CERN Document Server

    Wang, Lei

    2014-01-01

    Conventional 3D printing is generally time-consuming, and printable metal inks are rather limited. As an alternative, we propose liquid phase 3D printing for quickly making metal objects. By introducing metal alloys whose melting point is slightly above room temperature as printing inks, several representative structures, spanning from one, two and three dimensions to more complex patterns, were demonstrated to be quickly fabricated. Compared with the air cooling in conventional 3D printing, liquid-phase manufacturing offers a much higher cooling rate and thus significantly improves the speed of fabricating metal objects. This unique strategy also efficiently prevents the liquid metal inks from air oxidation, which is otherwise hard to avoid in ordinary 3D printing. Several key physical factors (such as the properties of the cooling fluid, the injection speed and needle diameter, and the type and properties of the printing ink) that evidently affect the printing quality were disclosed. In addit...

  15. The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking

    CERN Document Server

    Quesada, Luis

    2011-01-01

    3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on 3D motion tracking. 3D motion tracking systems that require no knowledge of the target object and run on a single low-budget camera must estimate the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and present a novel 3D motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, and enough contrasting with t...

  16. Web based Interactive 3D Learning Objects for Learning Management Systems

    Directory of Open Access Journals (Sweden)

    Stefan Hesse

    2012-02-01

    Full Text Available In this paper, we present an approach to create and integrate high-quality interactive 3D learning objects for higher education into a learning management system. The use of these resources makes it possible to visualize topics such as electro-technical and physical processes in the interior of complex devices. This paper addresses the challenge of combining rich interactivity and adequate realism with 3D exercise material for distance e-learning.

  17. Intelligent multisensor concept for image-guided 3D object measurement with scanning laser radar

    Science.gov (United States)

    Weber, Juergen

    1995-08-01

    This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure and recognize 3D objects at distances of up to 10 m with high flexibility. Such applications include the surveillance of security areas or container storages, as well as navigation and collision avoidance for autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar ranger mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray-scale intensity data as well as absolute 3D information. To improve system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. The camera guidance of the laser scanner is useful because the acquisition of spatial information is relatively slow compared to the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting the edges of objects using various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks and the multisensor integration. The paper details the multisensor concept, describes the process of sensor guidance and 3D measurement, and presents some practical results of our research.
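
    The camera-guided focusing step described above, in which image edges select the areas worth covering with the slower laser range scan, can be sketched as follows. This is a minimal illustration with NumPy; the gradient-magnitude edge detector, threshold, padding and the synthetic test frame are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def edge_roi(gray, thresh=0.2, pad=2):
    """Bounding box around strong intensity edges; the (slow) laser
    range scan can then be restricted to this region."""
    gy, gx = np.gradient(gray.astype(float))   # derivatives along rows (y), cols (x)
    mag = np.hypot(gx, gy)                     # edge strength
    mask = mag > thresh * mag.max()
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                            # featureless frame: nothing to scan
    y0 = max(ys.min() - pad, 0)
    y1 = min(ys.max() + pad, gray.shape[0] - 1)
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + pad, gray.shape[1] - 1)
    return (y0, x0, y1, x1)

# Synthetic camera frame: a bright rectangular object on a dark background.
img = np.zeros((40, 40))
img[10:25, 15:30] = 1.0
roi = edge_roi(img)
```

Restricting the ranger to `roi` trades a full raster scan for a small window around detected objects, which is the point of the camera guidance when one range sample is far slower than a 40 ms image frame.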

  18. 3D Imaging of Dielectric Objects Buried under a Rough Surface by Using CSI

    Directory of Open Access Journals (Sweden)

    Evrim Tetik

    2015-01-01

    Full Text Available A 3D scalar electromagnetic imaging of dielectric objects buried under a rough surface is presented. The problem has been treated as a 3D scalar problem for computational simplicity, as a first step toward the 3D vector problem. The complexity of the background in which the object is buried is reduced by obtaining the Green's function of the background, which consists of two homogeneous half-spaces and a rough interface between them, using the Buried Object Approach (BOA). The Green's function of the two-part space with a planar interface is obtained for use in this process. Reconstruction of the location, shape, and constitutive parameters of the objects is achieved by the Contrast Source Inversion (CSI) method with conjugate gradients. The scattered field data used in the inverse problem are obtained via both the Method of Moments (MoM) and a Comsol Multiphysics pressure acoustics model.

  19. 3D Projection on Physical Objects: Design Insights from Five Real Life Cases

    DEFF Research Database (Denmark)

    Dalsgaard, Peter; Halskov, Kim

    2011-01-01

    3D projection on physical objects is a particular kind of Augmented Reality that augments a physical object by projecting digital content directly onto it, rather than by using a mediating device such as a mobile phone or a head-mounted display. In this paper, we present five cases in which we have developed installations that employ 3D projection on physical objects. The installations have been developed in collaboration with external partners and have been put into use in real-life settings such as museums, exhibitions and interaction design laboratories. On the basis of these cases, we present and discuss three central design insights concerning new potentials for well-known 3D effects, dynamics between the digital world and the physical world, and relations between object, content and context.

  20. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.

  2. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Science.gov (United States)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been demand for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and for the Internet, i.e., a virtual museum on the World Wide Web. To achieve our goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model acquired by a 3D digitizer are also presented.
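
    As a toy illustration of the dichromatic/Phong analysis mentioned above, the reflection at a surface point can be modeled as a Lambertian (body) term plus a Phong specular (interface) lobe. The coefficients and exponent below are illustrative assumptions, not values from the paper.

```python
import math

def phong_intensity(n, l, v, kd=0.7, ks=0.3, shininess=20.0):
    """Dichromatic-style reflection: Lambertian diffuse term plus a Phong
    specular lobe, for unit surface normal n, light direction l and view
    direction v (all 3-vectors pointing away from the surface)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diff = max(dot(n, l), 0.0)
    # Mirror reflection of l about n: r = 2(n.l)n - l
    r = [2.0 * dot(n, l) * ni - li for ni, li in zip(n, l)]
    spec = max(dot(r, v), 0.0) ** shininess
    return kd * diff + ks * spec
```

Sweeping `l` over the seven illumination angles while holding `v` fixed produces the kind of per-angle intensity predictions that the five-band analysis fits to separate body color from gloss.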

  3. Assessing nest-building behavior of mice using a 3D depth camera.

    Science.gov (United States)

    Okayama, Tsuyoshi; Goto, Tatsuhiko; Toyoda, Atsushi

    2015-08-15

    We developed a novel method to evaluate the nest-building behavior of mice using an inexpensive depth camera. The depth camera clearly captured nest-building behavior. Using three-dimensional information from the depth camera, we obtained objective features for assessing nest-building behavior, including "volume," "radius," and "mean height". The "volume" represents the change in volume of the nesting material, a pressed cotton square that a mouse shreds and untangles in order to build its nest. During the nest-building process, the total volume of cotton fragments increases. The "radius" refers to the radius of the circle enclosing the fragments of cotton. It describes the extent of nesting material dispersion. The "radius" averaged approximately 60 mm when a nest was built. The "mean height" represents the change in the mean height of objects. If the nest walls were high, the "mean height" was also high. These features provided us with useful information for the assessment of nest-building behavior, similar to conventional methods for the assessment of nest building. However, using the novel method, we found that JF1 mice built nests with higher walls than B6 mice, and B6 mice built nests faster than JF1 mice. Thus, our novel method can evaluate differences in nest-building behavior that cannot be detected or quantified by conventional methods. In future studies, we will evaluate the nest-building behaviors of genetically modified mice, as well as several inbred strains, with several nesting materials.
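
    The three depth-map features can be computed directly from a height image. The sketch below (NumPy) uses an illustrative occupancy threshold and pixel area, which are assumptions rather than the authors' calibration.

```python
import numpy as np

def nest_features(height, min_h=1.0, px_area=1.0):
    """'volume', 'radius' and 'mean height' of nesting material from a
    height map (heights above the cage floor, e.g. in mm). min_h and
    px_area are illustrative, not the authors' calibration."""
    mask = height > min_h                  # pixels covered by cotton fragments
    if not mask.any():
        return 0.0, 0.0, 0.0
    volume = float(height[mask].sum() * px_area)   # column sum over occupied pixels
    mean_height = float(height[mask].mean())
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # Radius of the circle (about the material centroid) enclosing all fragments.
    radius = float(np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max())
    return volume, radius, mean_height
```

Tracking these three numbers frame by frame gives the time courses (material spread, wall height) used to compare strains such as JF1 and B6.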

  4. Ultrasonic cleaning of 3D printed objects and Cleaning Challenge Devices

    NARCIS (Netherlands)

    Verhaagen, Bram; Zanderink, Thijs; Fernandez Rivas, David

    2016-01-01

    We report our experiences in the evaluation of ultrasonic cleaning processes of objects made with additive manufacturing techniques, specifically three-dimensional (3D) printers. These objects need to be cleaned of support material added during the printing process. The support material can be remov

  7. The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

    Science.gov (United States)

    Weigelt, K.; Wiemeyer, J.

    2014-03-01

    This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. Under consideration of depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. The dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. In addition for soccer, the field of view (small vs. large) was examined. Moreover, the direction of motion (horizontal vs. depth) was considered for BMX-Miniramp. Subjective assessments, behavioural tests and qualitative interviews were implemented. The results confirm a strong effect of 3D on both depth perception and spatial presence experience as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how to use various 3D sports ideally as content for TV broadcasts.

  8. 3D terahertz synthetic aperture imaging of objects with arbitrary boundaries

    Science.gov (United States)

    Kniffin, G. P.; Zurk, L. M.; Schecklman, S.; Henry, S. C.

    2013-09-01

    Terahertz (THz) imaging has shown promise for nondestructive evaluation (NDE) of a wide variety of manufactured products including integrated circuits and pharmaceutical tablets. Its ability to penetrate many non-polar dielectrics allows tomographic imaging of an object's 3D structure. In NDE applications, the material properties of the target(s) and background media are often well-known a priori and the objective is to identify the presence and/or 3D location of structures or defects within. The authors' earlier work demonstrated the ability to produce accurate 3D images of conductive targets embedded within a high-density polyethylene (HDPE) background. That work assumed a priori knowledge of the refractive index of the HDPE as well as the physical location of the planar air-HDPE boundary. However, many objects of interest exhibit non-planar interfaces, such as varying degrees of curvature over the extent of the surface. Such irregular boundaries introduce refraction effects and other artifacts that distort 3D tomographic images. In this work, two reconstruction techniques are applied to THz synthetic aperture tomography: a holographic reconstruction method that accurately detects the 3D location of an object's irregular boundaries, and a split-step Fourier algorithm that corrects the artifacts introduced by the surface irregularities. The methods are demonstrated with measurements from a THz time-domain imaging system.

  9. 3-D Laser-Based Multiclass and Multiview Object Detection in Cluttered Indoor Scenes.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Hu, Huosheng; Wang, Wei

    2017-01-01

    This paper investigates the problem of multiclass and multiview 3-D object detection for service robots operating in cluttered indoor environments. A novel 3-D object detection system using laser point clouds is proposed to deal with cluttered indoor scenes with fewer and imbalanced training data. Raw 3-D point clouds are first transformed to 2-D bearing angle images to reduce the computational cost, and then jointly trained multiple object detectors are deployed to perform multiclass and multiview 3-D object detection. A reclassification technique is applied to each detected low-confidence bounding box to reduce false alarms in the detection. The RUS-SMOTEboost algorithm is used to train a group of independent binary classifiers with imbalanced training data. Dense histograms of oriented gradients and local binary pattern features are combined as the feature set for the reclassification task. Based on the Dalian University of Technology (DUT) 3-D data set, taken from various office and household environments, experimental results show the validity and good performance of the proposed method.

  10. Three-dimensional object recognition using gradient descent and the universal 3-D array grammar

    Science.gov (United States)

    Baird, Leemon C., III; Wang, Patrick S. P.

    1992-02-01

    A new algorithm is presented for applying Marill's minimum standard deviation of angles (MSDA) principle for interpreting line drawings without models. Even though no explicit models or additional heuristics are included, the algorithm tends to reach the same 3-D interpretations of 2-D line drawings that humans do. Marill's original algorithm repeatedly generated a set of interpretations and chose the one with the lowest standard deviation of angles (SDA). The algorithm presented here explicitly calculates the partial derivatives of SDA with respect to all adjustable parameters, and follows this gradient to minimize SDA. For a picture with lines meeting at m points forming n angles, the gradient descent algorithm requires O(n) time to adjust all the points, while the original algorithm required O(mn) time to do so. For the pictures described by Marill, this gradient descent algorithm running on a Macintosh II was found to be one to two orders of magnitude faster than the original algorithm running on a Symbolics, while still giving comparable results. Once the 3-D interpretation of the line drawing has been found, the 3-D object can be reduced to a description string using the Universal 3-D Array Grammar. This is a general grammar which allows any connected object represented as a 3-D array of pixels to be reduced to a description string. The algorithm based on this grammar is well suited to parallel computation, and could run efficiently on parallel hardware. This paper describes both the MSDA gradient descent algorithm and the Universal 3-D Array Grammar algorithm. Together, they transform a 2-D line drawing represented as a list of line segments into a string describing the 3-D object pictured. The strings could then be used for object recognition, learning, or storage for later manipulation.
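
    A minimal numerical sketch of the MSDA idea follows: each 2-D junction point gets one free depth, and descent lowers the standard deviation of the resulting 3-D angles. Note the gradient here is a finite-difference approximation, an illustrative stand-in for the analytic O(n) partial derivatives the paper describes, and the depth initialization breaks the z-mirror symmetry at which the gradient vanishes.

```python
import itertools
import math

def sda(z, pts2d, lines):
    """Standard deviation of the 3-D angles formed where two line segments
    share an endpoint; pts2d are fixed, z holds one free depth per point."""
    p = [(x, y, zi) for (x, y), zi in zip(pts2d, z)]
    angles = []
    for (a, b), (c, d) in itertools.combinations(lines, 2):
        shared = {a, b} & {c, d}
        if len(shared) != 1:
            continue
        s = shared.pop()
        u = [p[b if a == s else a][k] - p[s][k] for k in range(3)]
        v = [p[d if c == s else c][k] - p[s][k] for k in range(3)]
        cosang = (sum(ui * vi for ui, vi in zip(u, v))
                  / (math.hypot(*u) * math.hypot(*v)))
        angles.append(math.acos(max(-1.0, min(1.0, cosang))))
    m = sum(angles) / len(angles)
    return math.sqrt(sum((a - m) ** 2 for a in angles) / len(angles))

def msda_descent(pts2d, lines, steps=300, lr=0.1, eps=1e-4):
    """Gradient descent on SDA over the free depths, keeping the best
    depths seen (finite differences, illustrative only)."""
    z = [0.01 * i for i in range(len(pts2d))]   # break the z -> -z symmetry
    best_e, best_z = sda(z, pts2d, lines), z[:]
    for _ in range(steps):
        base = sda(z, pts2d, lines)
        if base < best_e:
            best_e, best_z = base, z[:]
        grad = []
        for i in range(len(z)):
            z[i] += eps
            grad.append((sda(z, pts2d, lines) - base) / eps)
            z[i] -= eps
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return best_z
```

For a three-line junction drawn with unequal 2-D angles, descent assigns depths whose 3-D angles are more nearly equal, mimicking the human "corner of a box" interpretation.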

  11. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    Science.gov (United States)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ˜50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  12. A convolutional learning system for object classification in 3-D Lidar data.

    Science.gov (United States)

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.

  13. 3D-Web-GIS RFID location sensing system for construction objects.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
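
    The SA-plus-gradient-descent combination can be illustrated on the underlying range-based positioning problem. The reader coordinates, cooling schedule, and step sizes below are illustrative assumptions for the sketch; the paper's actual RFID signal model is not reproduced here.

```python
import math
import random

def rss_error(p, readers, dists):
    """Sum of squared range residuals for a candidate tag position p."""
    return sum((math.dist(p, r) - d) ** 2 for r, d in zip(readers, dists))

def locate(readers, dists, iters=2000, seed=0):
    """Simulated annealing for a stable global search, then gradient
    descent to shrink the residual error (illustrative parameters)."""
    rng = random.Random(seed)
    cur = [rng.uniform(-10.0, 10.0) for _ in range(3)]
    ce = rss_error(cur, readers, dists)
    best, be = cur[:], ce
    T = 5.0
    for _ in range(iters):
        cand = [c + rng.gauss(0.0, T) for c in cur]
        e = rss_error(cand, readers, dists)
        # Accept improvements, or worse moves with Boltzmann probability.
        if e < ce or rng.random() < math.exp((ce - e) / max(T, 1e-9)):
            cur, ce = cand, e
            if e < be:
                best, be = cand[:], e
        T *= 0.995                      # geometric cooling
    # Gradient descent refinement: grad of sum((|p - r| - d)^2).
    p = best[:]
    for _ in range(500):
        g = [0.0, 0.0, 0.0]
        for r, d in zip(readers, dists):
            dist = math.dist(p, r)
            if dist == 0.0:
                continue
            coef = 2.0 * (dist - d) / dist
            for k in range(3):
                g[k] += coef * (p[k] - r[k])
        p = [pk - 0.05 * gk for pk, gk in zip(p, g)]
    return p
```

With four non-coplanar readers and noise-free ranges, the annealing stage lands near the unique minimum and the descent stage reduces the residual to a small value, which mirrors the stabilize-then-refine division of labor described in the abstract.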

  14. Robust Stereo-Vision Based 3D Object Reconstruction for the Assistive Robot FRIEND

    Directory of Open Access Journals (Sweden)

    COJBASIC, Z.

    2011-11-01

    Full Text Available A key requirement of assistive robot vision is robust 3D object reconstruction in complex environments for reliable autonomous object manipulation. In this paper, we present the idea of achieving high robustness of a complete robot vision system against external influences, such as variable illumination, by including feedback control of the object segmentation in stereo images. The approach used is to change the segmentation parameters in a closed loop so that object feature extraction is driven to a desired result. Reliable feature extraction is necessary to fully exploit a neuro-fuzzy classifier, which is the core of the proposed 2D object recognition method that precedes 3D object reconstruction. Experimental results on the rehabilitation assistive robotic system FRIEND demonstrate the effectiveness of the proposed method.

  15. Digital Curvatures Applied to 3D Object Analysis and Recognition: A Case Study

    CERN Document Server

    Chen, Li

    2009-01-01

    In this paper, we propose using curvatures in digital space for 3D object analysis and recognition. Since direct adjacency admits only six types of digital surface points in local configurations, it is easy to determine and classify the discrete curvatures for every point on the boundary of a 3D object. Unlike the boundary simplicial decomposition (triangulation), the curvature can take any real value, which sometimes makes it difficult to find the right threshold value. This paper focuses on the global properties of categorizing curvatures for small regions. We apply both digital Gaussian curvatures and digital mean curvatures to 3D shapes. This paper proposes a multi-scale method for 3D object analysis and a vector method for 3D similarity classification. We use these methods for face recognition and shape classification. We have found that the Gaussian curvatures mainly describe global features and average characteristics, such as the five regions of a human face. However, mean curvatures can be used to find ...

  16. Extracting Superquadric-based Geon Description for 3D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    XING Weiwei; LIU Weibin; YUAN Baozong

    2005-01-01

    Geon recognition is a key issue in developing 3D object recognition systems based on Recognition-by-Components (RBC) theory. In this paper, we present a novel approach for extracting superquadric-based geon descriptions of 3D volumetric primitives from real shape data, which integrates the advantages of deformable superquadric model reconstruction and SVM-based classification. First, a real-coded genetic algorithm (RCGA) is used for superquadric fitting to the 3D data, yielding quantitative parametric information; then a new, sophisticated feature set is derived from the obtained superquadric parameters; finally, SVM-based classification is proposed and implemented for geon recognition, yielding qualitative geometric information. Furthermore, knowledge-based feedback of the SVM network is introduced to improve the classification performance. Experimental results show that our approach is efficient and precise for extracting superquadric-based geon descriptions from real shape data in 3D object recognition. The results are very encouraging and of significant benefit for developing a general 3D object recognition system.
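
    The superquadric fitting step rests on the inside-outside function. Below is a minimal version of that function together with the least-squares cost a GA individual would be scored with; this is a common formulation (Barr's), assumed here rather than taken verbatim from the paper.

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function: F == 1 on the surface,
    < 1 inside, > 1 outside. a1..a3 are axis scales, e1/e2 the
    squareness exponents."""
    t = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return t + abs(z / a3) ** (2.0 / e1)

def fit_cost(points, params):
    """Least-squares fitting cost usable as a GA fitness: sum of
    (F**e1 - 1)**2 over measured surface points (a common choice)."""
    a1, a2, a3, e1, e2 = params
    return sum((superquadric_F(*p, a1, a2, a3, e1, e2) ** e1 - 1.0) ** 2
               for p in points)
```

A genetic algorithm would minimize `fit_cost` over the five parameters (plus pose, omitted here for brevity); the recovered exponents e1, e2 are exactly the kind of qualitative shape cues (box-like vs. cylinder-like vs. ellipsoid-like) that the SVM stage then classifies into geons.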

  17. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered mature. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of the 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where the parameters change spontaneously with camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesize-and-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions, with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using the GT parameters.

  18. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their potential for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism, and for their interactive visualization, 3D models are highly effective and intuitive for present-day users, who have stringent requirements and high expectations. Depending on the complexity of the objects in a specific case, various technological methods can be applied. The objects selected in this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented. Recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information, such as interactive maps, satellite imagery, sound, video and specific information about the objects, are described.

  19. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  20. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    Science.gov (United States)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications of surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied structured-light projection, specifically sinusoidal fringe patterns, together with a phase-unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. Calibration was based on a 2D flat pattern, from which the intrinsic and extrinsic parameters of the camera and the DLP were determined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas smaller than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern-recognition strategies are useful when surface-quality supervision has enough points to evaluate the defective region, since identifying defects in small objects is a demanding visual-inspection task.
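
A minimal sketch of the four-step phase-shifting relation commonly used in fringe-projection systems like this one. This is a generic illustration, not the authors' code; it assumes four fringe images shifted by π/2 each, i.e. I_k = A + B·cos(φ + k·π/2):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    # Wrapped phase from four fringe images shifted by pi/2 each.
    # With I_k = A + B*cos(phi + k*pi/2):
    #   i4 - i2 = 2B*sin(phi),  i1 - i3 = 2B*cos(phi)
    # so arctan2 recovers phi independently of background A and contrast B.
    return np.arctan2(i4 - i2, i1 - i3)
```

The wrapped phase would then be passed to a phase-unwrapping step (e.g. `np.unwrap` along each row for simple scenes) before converting phase to height.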

  1. Depth migration and de-migration for 3-D migration velocity analysis; Migration profondeur et demigration pour l'analyse de vitesse de migration 3D

    Energy Technology Data Exchange (ETDEWEB)

    Assouline, F.

    2001-07-01

    3-D seismic imaging of complex geologic structures requires pre-stack imaging techniques, the post-stack ones being unsuitable in that case. Indeed, pre-stack depth migration can accurately image complex structures, provided that a sufficiently accurate subsurface velocity model is available. The determination of this velocity model is thus a key element of seismic imaging, and to this end migration velocity analysis methods have attracted considerable interest. The SMART method is a specific migration velocity analysis method; its singularity is that it does not rely on any restrictive assumption about the complexity of the velocity model to be determined. The SMART method uses a detour through the pre-stack depth migrated domain to extract multi-offset kinematic information that is hardly accessible in the time domain. Once the pre-stack depth migrated seismic data have been interpreted, a kinematic de-migration of the interpreted events yields a consistent kinematic database (i.e. reflection travel-times). The inversion of these travel-times, by means of reflection tomography, then allows the determination of an accurate velocity model. To image geologic structures whose 3-D character is predominant, we have studied the implementation of migration velocity analysis in 3-D in the context of the SMART method and, more generally, developed techniques for overcoming the difficulties intrinsic to the 3-D aspects of seismic imaging. Indeed, although formally the SMART method can be applied directly to 3-D complex structures, a feasible implementation requires a careful choice of the imaging domain. Once this choice is made, a method is also needed that, via the associated de-migration, yields the reflection travel-times.
We first consider the offset domain, which still constitutes the most commonly used strategy today

  2. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Science.gov (United States)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
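
As a toy illustration of the stochastic idea (not the authors' algorithm), the single-ray sketch below keeps each point with some probability per rendering pass and averages the passes: translucency emerges without any depth sorting of the full point set. The function name and parameters are hypothetical:

```python
import numpy as np

def stochastic_ray_color(depths, colors, keep_prob, passes, seed=0):
    # Toy, single-ray illustration of stochastic point rendering:
    # each pass keeps every point independently with probability
    # keep_prob, the nearest surviving point wins (a plain z-test,
    # no sorting of the full set), and averaging the passes yields
    # a translucent result. keep_prob plays the role of opacity.
    rng = np.random.default_rng(seed)
    acc = np.zeros(3)
    for _ in range(passes):
        kept = rng.random(len(depths)) < keep_prob
        if kept.any():
            idx = int(np.argmin(np.where(kept, depths, np.inf)))
            acc += colors[idx]
    return acc / passes
```

With `keep_prob=1.0` this degenerates to ordinary opaque z-buffering; lowering it lets farther points contribute to the average.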

  3. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Zweitze

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts im

  4. A robust algorithm for estimation of depth map for 3D shape recovery

    Science.gov (United States)

    Malik, Aamir Saeed; Choi, Tae-Sun

    2006-02-01

    Three-dimensional shape recovery from one or multiple observations is a challenging problem in computer vision. In this paper, we present a new focus measure for the calculation of a depth map. The depth map can then be used in algorithms that recover the three-dimensional structure of an object, as required in many high-level vision applications. The presented focus measure is more robust in the presence of noise than earlier focus measures. It is based on an optical transfer function implemented with the Discrete Cosine Transform, and its results are compared with earlier focus measures, including the Sum of Modified Laplacian (SML) and Tenenbaum focus measures. In the noise-free case its results are comparable to those of the earlier measures; in the presence of noise, however, it shows a marked improvement over the others. The proposed focus measure is applied to a test image, to a sequence of 97 simulated cone images and to a sequence of 97 real cone images. Gaussian noise was added to the images, of the kind that arises from factors such as electronic circuit noise and sensor noise under poor illumination and/or high temperature.
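
A minimal sketch of a DCT-based focus measure for depth-from-focus, assuming the measure is the AC-to-DC energy ratio of a local 2D DCT. This is a plausible simplification for illustration, not the paper's exact optical-transfer-function formulation; all names are hypothetical:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are frequencies).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = 1.0 / np.sqrt(n)
    return m

def dct_focus_measure(block):
    # AC-to-DC energy ratio of the block's 2D DCT: sharp (in-focus)
    # regions concentrate more energy in the non-DC coefficients.
    n = block.shape[0]
    d = dct_matrix(n)
    c = d @ block @ d.T
    dc = c[0, 0] ** 2
    ac = np.sum(c ** 2) - dc
    return ac / (dc + 1e-12)

def depth_from_focus(stack, block=8):
    # stack: (k, h, w) focus stack. For each block, the index of the
    # frame with the highest focus measure gives a coarse depth map.
    k, h, w = stack.shape
    hb, wb = h // block, w // block
    depth = np.zeros((hb, wb), dtype=int)
    for r in range(hb):
        for s in range(wb):
            win = stack[:, r*block:(r+1)*block, s*block:(s+1)*block]
            scores = [dct_focus_measure(win[j]) for j in range(k)]
            depth[r, s] = int(np.argmax(scores))
    return depth
```

Because the DCT is orthonormal, the total energy equals the block's squared Frobenius norm, so the ratio needs only one transform per block.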

  5. 3D measurement of large-scale object using independent sensors

    Science.gov (United States)

    Yong, Liu; Yuan, Jia; Yong, Jiang; Luo, Xia

    2017-05-01

    Registering local point sets to obtain one final data set is a vital technique in the 3D measurement of large-scale objects. In this paper, a new optical 3D measurement system using fringe projection is presented, composed of four parts: a moving device, a linking camera, stereo cameras and a projector. Controlled by a computer, a sequence of local point sets is obtained based on temporal phase unwrapping and stereo vision. Two basic principles, place dependence and phase dependence, are used to register these local point sets into one final data set, and bundle adjustment is used to eliminate registration errors.

  6. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. To establish an assessment model more in line with the human visual system, the Watson model is adopted; in this study it defines the visibility threshold under no distortion in terms of contrast sensitivity, masking effect and error. On this basis, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The LR-IQA metric first calculates the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images of the stereo pair. Then, an absolute difference map is defined to describe the differential value between the left and right view images, and the DP-IQA metric measures the structural distortion between the original and distorted absolute difference maps through a luminance function, error sensitivity and a contrast function. Finally, the OSIQA metric is generated by weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score): the correlation coefficient and monotonicity both exceed 0.92 under five types of distortion, namely Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.
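
The final pooling step can be illustrated with a small sketch. The weight values below are placeholders (the abstract does not give the fitted coefficients), and the Pearson helper simply shows how a correlation against MOS values would be computed:

```python
import numpy as np

def osiqa_score(lr_iqa, dp_iqa, w_lr=0.7, w_dp=0.3):
    # Weighted multiplicative pooling of the left-right view metric
    # (LR-IQA) and the depth-perception metric (DP-IQA). In practice
    # the exponents would be fit against subjective MOS data; these
    # values are illustrative only.
    return (lr_iqa ** w_lr) * (dp_iqa ** w_dp)

def pearson(x, y):
    # Pearson linear correlation between objective scores and MOS.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

Multiplicative pooling has the convenient property that a severe degradation in either component (view fidelity or depth perception) pulls the combined score down, which additive pooling can mask.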

  7. 3D printing and IoT for personalized everyday objects in nursing and healthcare

    Science.gov (United States)

    Asano, Yoshihiro; Tanaka, Hiroya; Miyagawa, Shoko; Yoshioka, Junki

    2017-04-01

    Today, the application of 3D printing technology for medical use is becoming popular. It helps to produce complicated body-part shapes in functional materials, so that injured, weakened or missing parts can be complemented and original shape and function recovered. However, these cases mainly focus on the symptom itself, not on the everyday lives of patients. As life spans extend, many of us will live with chronic disease for a long time, and we should therefore think about our living environment more carefully: for example, we can make personalized everyday objects that support body and mind. We therefore use 3D printing to make everyday objects from a nursing/healthcare perspective. In this project, we have two main research questions. The first is how to make objects that patients really require. We invited many kinds of people, such as engineers, nurses and patients, to our research activity: nurses are the first to identify patients' real demands, and engineers support them with rapid prototyping. From this we derived effective collaboration methodologies among nurses, engineers and patients. The second question is how to trace and evaluate the usage of the created objects. It is difficult to monitor a user's activity over a long period, so we are developing an IoT sensing system that monitors activities remotely: a data logger that lasts about one month is enclosed in the 3D printed objects. After one month, we retrieve the data from the objects and learn how they have been used.

  8. Object-shape recognition and 3D reconstruction from tactile sensor images.

    Science.gov (United States)

    Khasnobish, Anwesha; Singh, Garima; Jati, Arindam; Konar, Amit; Tibarewala, D N

    2014-04-01

    This article presents a novel approach to edged and edgeless object-shape recognition and 3D reconstruction based on gradient analysis of tactile images. Humans recognize an object's shape by visualizing its surface topology while grasping it in the palm, drawing on past experience of exploring similar objects. The proposed hybrid recognition strategy works in a similar way, in two stages. In the first stage, conventional object-shape recognition is performed with a linear support vector machine classifier using regional-descriptor features extracted from the tactile image, and a 3D shape reconstruction is carried out according to whether the object is classified as edged or edgeless. In the second stage, the hybrid scheme combines the previously obtained regional-descriptor features with gradient-related information from the reconstructed object-shape image for the final recognition into four object classes: planar, one-edged, two-edged and cylindrical. The hybrid strategy achieves 97.62% classification accuracy, whereas the conventional recognition scheme reaches only 92.60%. Moreover, the proposed algorithm has proved less noise-prone and more statistically robust.


  10. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    Science.gov (United States)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment and the difficulty of robot navigation. Painting automation is nonetheless necessary, because it can provide consistent paint-film thickness, and autonomous mobile robots are strongly required for flexible painting work. The main problem of autonomous mobile robot navigation, however, is that there are many obstacles which are not represented in the CAD data. To overcome this, obstacle detection and recognition are necessary to avoid obstacles and carry out the painting work effectively. Many object recognition algorithms have been studied to date; in particular, 2D object recognition methods using intensity images have been widely investigated. In our case, however, there is no environmental illumination, so these methods cannot be used. 3D range data must be used instead, but this brings high computational cost and long recognition times due to the huge database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, to which the PCA and NN algorithms are then applied; this reduces processing time and makes the data easier to handle, addressing the disadvantages of previous 3D object recognition research. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
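
A minimal sketch of the PCA-plus-classifier idea on flattened, intensity-converted range images. To keep the example self-contained, the neural-network stage is replaced here by a 1-nearest-neighbour rule; that substitution and all names are assumptions, not the paper's implementation:

```python
import numpy as np

def pca_fit(X, n_components):
    # X: (n_samples, n_features) flattened intensity images.
    # Returns the mean and the leading principal axes via a thin SVD
    # of the centred data.
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def pca_project(X, mu, axes):
    # Project samples into the reduced PCA subspace.
    return (X - mu) @ axes.T

def nn_classify(train_proj, train_labels, query_proj):
    # 1-nearest-neighbour in the reduced space (stand-in for the
    # neural-network classification stage).
    d = np.linalg.norm(train_proj - query_proj, axis=1)
    return int(train_labels[int(np.argmin(d))])
```

The point of the PCA step is that distances are then computed in a low-dimensional subspace rather than over full-resolution range images, which is what cuts the recognition time.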

  11. Refinement of falsified depth maps for the SwissRanger time-of-flight 3D camera on autonomous robots

    CSIR Research Space (South Africa)

    Osunmakinde, IO

    2010-11-01

    Full Text Available Robot navigation depends on accurate analysis of the scene from camera data. This paper investigates a refinement of the inherently falsified depth maps generated by a 3D SwissRanger camera when emitting beams of rays through a modulated...

  12. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    Science.gov (United States)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore-CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demands on compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and the other resources. The parallelization strategy largely depends on how the calculated traveltimes are stored and fed to the migration process. The presented work extends our previous work, in which a 3D Kirchhoff depth migration application for a multicore-CPU-based parallel system had been developed; we have since improved its parallel performance by re-designing the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It offers the flexibility to migrate a large number of traces within the available node memory, with minimal requirements on storage, I/O and inter-node communication. The resulting application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance is studied through numerical experiments, and the scalability results show striking improvement over the previous version: an impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data, and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
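
The quoted speedup and efficiency figures follow the standard definitions, and a short check confirms they are self-consistent:

```python
def speedup(t_serial, t_parallel):
    # Ratio of single-node to n-node wall-clock time.
    return t_serial / t_parallel

def efficiency(speedup_x, n_nodes):
    # Fraction of ideal linear speedup actually achieved.
    return speedup_x / n_nodes

# 49.05x on 64 nodes -> 49.05 / 64 = 76.64% efficiency (prestack),
# 32.00x on 64 nodes -> 32.00 / 64 = 50.00% efficiency (poststack).
```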

  13. On 3D simulation of moving objects in a digital earth system

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    "How do rescue helicopters find an optimized path to the site of a disaster as quickly as possible?" or "How are flight procedures over mountains and plateaus simulated?" and so on. In this paper a script language for spatial moving objects is presented, obtained by abstracting the behavior of 3D spatial moving objects when implementing moving-object simulation in a 3D digital Earth scene, based on a digital China platform named "ChinaStar". The definition of this script language, its morphology and syntax, its compilation and intermediate-language generation, and the behavior and state control of spatial moving objects are discussed with emphasis. In addition, the language's applications and implementation are also discussed.

  14. Shape and deformation measurements of 3D objects using volume speckle field and phase retrieval

    DEFF Research Database (Denmark)

    Anand, A; Chhaniwal, VK; Almoro, Percival;

    2009-01-01

    Shape and deformation measurement of diffusely reflecting 3D objects is very important in many application areas, including quality control, nondestructive testing, and design. When rough objects are exposed to coherent beams, the scattered light produces speckle fields. A method to measure the shape and deformation of 3D objects from sequential intensity measurements of the volume speckle field and phase retrieval based on the angular-spectrum propagation technique is described here. The shape of a convex spherical surface was measured directly from the calculated phase map, and micrometer-sized deformation induced on a metal sheet was obtained upon subtraction of the phases corresponding to the unloaded and loaded states. Results from computer simulations confirm the experiments. (C) 2009 Optical Society of America.

  15. Full-viewpoint 3D Space Object Recognition Based on Kernel Locality Preserving Projections

    Institute of Scientific and Technical Information of China (English)

    Meng Gang; Jiang Zhiguo; Liu Zhengyi; Zhang Haopeng; Zhao Danpei

    2010-01-01

    Space object recognition plays an important role in space exploitation and surveillance, and faces two main problems: lack of data and drastic changes in viewpoint. In this article, we first build a three-dimensional (3D) satellite dataset named the BUAA Satellite Image Dataset (BUAA-SID 1.0) to supply data for 3D space object research. Then, based on this dataset, we propose to recognize full-viewpoint 3D space objects using kernel locality preserving projections (KLPP). To obtain a more accurate and separable description of the objects, we build feature vectors employing moment invariants, Fourier descriptors, region covariance and histograms of oriented gradients. We then map the features into kernel space and reduce dimensionality with KLPP to obtain the submanifold of the features. Finally, k-nearest neighbor (kNN) classification is used. Experimental results show that the proposed approach is well suited to space object recognition, particularly with respect to changes of viewpoint. Encouraging recognition rates are obtained on images in BUAA-SID 1.0, with the best result reaching 95.87%.
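
Of the listed features, moment invariants are the easiest to illustrate. The sketch below computes the first two Hu invariants from normalised central moments (a standard construction, not the paper's exact feature pipeline); by construction they are invariant to translation and scale:

```python
import numpy as np

def central_moment(img, p, q):
    # Central image moment mu_pq about the intensity centroid.
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()

def hu_first_two(img):
    # First two Hu moment invariants from normalised central moments.
    m00 = img.sum()
    def eta(p, q):
        return central_moment(img, p, q) / (m00 ** (1 + (p + q) / 2))
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Centring on the intensity centroid removes translation; dividing by powers of the zeroth moment removes scale, so the same object at a different image position yields the same feature values.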

  16. An approach to detecting deliberately introduced defects and micro-defects in 3D printed objects

    Science.gov (United States)

    Straub, Jeremy

    2017-05-01

    In prior work, Zeltmann et al. demonstrated the negative impact of defects of various sizes in 3D printed objects. Such defects may make an object unsuitable for its application or even present a hazard if the object is used in a safety-critical setting. With the uses of 3D printing proliferating and consumer access to printers increasing, the possibility that a nefarious individual or group might subvert the printing quality and safety attributes of a printer or printed object must be considered. Several approaches to subversion exist: attackers may physically impair the functionality of the printer or launch a cyber-attack. Detecting introduced defects, from either attack, is critical to maintaining public trust in 3D printed objects and the technology. This paper applies a quality-assurance technology based on visible-light sensing to this challenge and assesses its capability for detecting introduced defects of multiple sizes.

  17. Detection and Purging of Specular Reflective and Transparent Object Influences in 3D Range Measurements

    Science.gov (United States)

    Koch, R.; May, S.; Nüchter, A.

    2017-02-01

    3D laser scanners are favoured sensors for mapping in mobile service robotics in indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed, since they have a high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows and shiny metals, the laser measurements are corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, and a measurement of a reflected object. It is important to detect such situations to be able to handle the corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters the point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative-Closest-Point algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror.
This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible. It

  18. From 2D Silhouettes to 3D Object Retrieval: Contributions and Benchmarking

    Directory of Open Access Journals (Sweden)

    Napoléon Thibault

    2010-01-01

    Full Text Available 3D retrieval has recently emerged as an important boost for 2D search techniques. This is mainly due to its several complementary aspects, for instance, enriching views in 2D image datasets, overcoming occlusion, and serving many real-world applications such as photography, art, archeology and geolocalization. In this paper, we introduce a complete "2D photography to 3D object" retrieval framework. Given a (collection of) picture(s) or sketch(es) of the same scene or object, the method allows us to retrieve the underlying similar objects in a database of 3D models. The contributions of our method include (i) a generative approach for alignment able to find canonical views consistently through scenes/objects and (ii) the application of an efficient but effective matching method used for ranking. The results are reported through the Princeton Shape Benchmark and the SHREC benchmarking consortium, evaluated/compared by a third party. In the two gallery sets, our framework achieves very encouraging performance and outperforms the other runs.

  19. Retrieval of 3D-Position of a Passive Object Using Infrared LED's and Photodiodes

    DEFF Research Database (Denmark)

    Christensen, Henrik Vie

    2005-01-01

    A sensor using infrared emitter/receiver pairs to determine the position of a passive object is presented. An array with a small number of infrared emitter/receiver pairs is proposed as the sensing part to acquire information on the object position. The emitters illuminate the object and the intens...... experiments show good accordance between actual and retrieved positions when tracking a ball. The ball has been successfully replaced by a human hand, and a "3D non-touch screen" with a human hand as "pointing device" is shown to be possible.

  20. Optometric Measurements Predict Performance but not Comfort on a Virtual Object Placement Task with a Stereoscopic 3D Display

    Science.gov (United States)

    2014-09-16

    John P. McIntire, Steven T. Wright, Lawrence K. Harrington, Paul R. Havig, Scott N. J. Watamaniuk. ... tested on a simple virtual object precision placement task while viewing a stereoscopic 3D (S3D) display. Inclusion criteria included uncorrected or

  1. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    Science.gov (United States)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  2. Computation of Edge-Edge-Edge Events Based on Conicoid Theory for 3-D Object Recognition

    Institute of Scientific and Technical Information of China (English)

    WU Chenye; MA Huimin

    2009-01-01

    The availability of a good viewpoint space partition is crucial in three-dimensional (3-D) object recognition based on the aspect graph approach. Two important kinds of events are depicted by the aspect graph approach: edge-edge-edge (EEE) events and edge-vertex (EV) events. This paper presents an algorithm to compute EEE events by characteristic analysis based on conicoid theory, in contrast to current algorithms that focus too much on EV events and often overlook the importance of EEE events. The paper also provides a standard flowchart for viewpoint space partitioning based on aspect graph theory that makes it suitable for perspective models. The partitioning result demonstrates the algorithm's efficiency, with more valuable viewpoints found with the help of EEE events, which helps to achieve a high recognition rate in 3-D object recognition.

  3. Local shape feature fusion for improved matching, pose estimation and 3D object recognition

    DEFF Research Database (Denmark)

    Buch, Anders Glent; Petersen, Henrik Gordon; Krüger, Norbert

    2016-01-01

    We provide new insights into the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state-of-the-art features to systematic evaluations based on multiple datasets from different sources in a uniform manner...... feature matches with a limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, which is consistent over all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of fused....... We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results, namely that the performance of the recognition system does not correlate well with the performance...

  4. Encryption of digital hologram of 3-D object by virtual optics

    Science.gov (United States)

    Kim, Hyun; Kim, Do-Hyung; Lee, Yeon H.

    2004-10-01

    We present a simple technique to encrypt a digital hologram of a three-dimensional (3-D) object into a stationary white noise by use of virtual optics and then to decrypt it digitally. In this technique the digital hologram is encrypted by our attaching a computer-generated random phase key to it and then forcing them to Fresnel propagate to an arbitrary plane with an illuminating plane wave of a given wavelength. It is shown in experiments that the proposed system is robust to blind decryptions without knowing the correct propagation distance, wavelength, and phase key used in the encryption. Signal-to-noise ratio (SNR) and mean-square-error (MSE) of the reconstructed 3-D object are calculated for various decryption distances and wavelengths, and partial use of the correct phase key.
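The abstract quantifies decryption quality with SNR and MSE of the reconstructed object. As a minimal illustration (not the authors' code; the sample values are made up), the two metrics over flattened intensity arrays can be computed as:

```python
import math

def mse(ref, test):
    """Mean-square error between two equal-length intensity arrays."""
    return sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)

def snr_db(ref, test):
    """Signal-to-noise ratio in dB: signal power over error power."""
    signal = sum(r ** 2 for r in ref)
    noise = sum((r - t) ** 2 for r, t in zip(ref, test))
    return 10.0 * math.log10(signal / noise)

ref = [10.0, 20.0, 30.0]   # hypothetical original intensities
dec = [11.0, 19.0, 30.0]   # hypothetical decrypted reconstruction
print(round(mse(ref, dec), 3))     # → 0.667
print(round(snr_db(ref, dec), 2))  # → 28.45
```

Sweeping the decryption distance or wavelength away from the correct values and re-evaluating these metrics reproduces the kind of robustness curves the paper reports.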

  5. Cryo-EM structure of a 3D DNA-origami object

    Science.gov (United States)

    Bai, Xiao-chen; Martin, Thomas G.; Scheres, Sjors H. W.; Dietz, Hendrik

    2012-01-01

    A key goal for nanotechnology is to design synthetic objects that may ultimately achieve functionalities known today only from natural macromolecular complexes. Molecular self-assembly with DNA has shown potential for creating user-defined 3D scaffolds, but the level of attainable positional accuracy has been unclear. Here we report the cryo-EM structure and a full pseudoatomic model of a discrete DNA object that is almost twice the size of a prokaryotic ribosome. The structure provides a variety of stable, previously undescribed DNA topologies for future use in nanotechnology and experimental evidence that discrete 3D DNA scaffolds allow the positioning of user-defined structural motifs with an accuracy that is similar to that observed in natural macromolecules. Thereby, our results indicate an attractive route to fabricate nanoscale devices that achieve complex functionalities by DNA-templated design steered by structural feedback. PMID:23169645

  6. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    CERN Document Server

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subject to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shape of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBCs surfaces due to adhesion on the glass-substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine volume, superficial area, sphericity index and RBCs refractive index for each osmotic condition.

  7. Improving object detection in 2D images using a 3D world model

    Science.gov (United States)

    Viggh, Herbert E. M.; Cho, Peter L.; Armstrong-Crews, Nicholas; Nam, Myra; Shah, Danelle C.; Brown, Geoffrey E.

    2014-05-01

    A mobile robot operating in a netcentric environment can utilize offboard resources on the network to improve its local perception. One such offboard resource is a world model built and maintained by other sensor systems. In this paper we present results from research into improving the performance of Deformable Parts Model object detection algorithms by using an offboard 3D world model. Experiments were run for detecting both people and cars in 2D photographs taken in an urban environment. After generating candidate object detections, a 3D world model built from airborne Light Detection and Ranging (LIDAR) and aerial photographs was used to filter out false alarms using several types of geometric reasoning. Comparison of the baseline detection performance to the performance after false alarm filtering showed a significant decrease in false alarms for a given probability of detection.

  8. Towards a Stable Robotic Object Manipulation Through 2D-3D Features Tracking

    Directory of Open Access Journals (Sweden)

    Sorin M. Grigorescu

    2013-04-01

    Full Text Available In this paper, a new object tracking system is proposed to improve the object manipulation capabilities of service robots. The goal is to continuously track the state of the visualized environment in order to send visual information in real time to the path planning and decision modules of the robot; that is, to adapt the movement of the robotic system according to the state variations appearing in the imaged scene. The tracking approach is based on a probabilistic collaborative tracking framework developed around a 2D patch‐based tracking system and a 2D‐3D point features tracker. The real‐time visual information is composed of RGB‐D data streams acquired from state‐of‐the‐art structured light sensors. For performance evaluation, the accuracy of the developed tracker is compared to a traditional marker‐based tracking system which delivers 3D information with respect to the position of the marker.

  9. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  10. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  11. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    Science.gov (United States)

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
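The first indexing stage described above, hashing surface descriptors into buckets with LSH, can be sketched with the classic random-hyperplane variant. The dimensions, model names and descriptor vectors below are made-up illustrations, not the paper's actual vehicle data:

```python
import random

random.seed(0)

DIM, NBITS = 8, 6
# Random hyperplanes: the sign pattern of the dot products forms the hash key.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NBITS)]

def lsh_key(desc):
    """Binary hash key: one bit per hyperplane, set if the descriptor lies on its positive side."""
    return tuple(int(sum(p * d for p, d in zip(plane, desc)) >= 0)
                 for plane in planes)

# Index model descriptors into hash buckets.
table = {}
models = {"modelA": [0.9] * DIM, "modelB": [-0.9] * DIM}
for name, desc in models.items():
    table.setdefault(lsh_key(desc), []).append(name)

# A query descriptor close to modelA should land in modelA's bucket.
query = [0.8] * DIM
candidates = table.get(lsh_key(query), [])
print(candidates)   # → ['modelA']
```

The geometric constraints and MAP selection the paper then applies would prune this candidate list further; LSH only provides the fast approximate first cut.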

  12. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM values.
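At its core, the ART reconstruction mentioned above is the Kaczmarz iteration: the current image estimate is projected onto each ray equation in turn. A minimal sketch on a hypothetical two-pixel phantom (this is not the simulator's own C++ code, and the TV regularization step is omitted):

```python
def art(rows, b, n_unknowns, sweeps=50, relax=1.0):
    """Kaczmarz/ART: project the estimate onto each ray equation in turn.

    rows[i] holds the path weights of ray i through the pixels;
    b[i] is the measured projection value for that ray.
    """
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for a, bi in zip(rows, b):
            norm = sum(v * v for v in a)
            if norm == 0:
                continue
            # Correction that makes ray i's equation exactly satisfied.
            c = relax * (bi - sum(v * xi for v, xi in zip(a, x))) / norm
            x = [xi + c * v for xi, v in zip(x, a)]
    return x

# Tiny 2-pixel "phantom": two rays measuring weighted sums of attenuation.
rows = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]                     # consistent with true solution x = [2, 1]
x = art(rows, b, 2)
print([round(v, 3) for v in x])    # → [2.0, 1.0]
```

In a real DBT problem the system is huge, sparse and underdetermined, which is where the compressed-sensing (TV) prior the paper evaluates becomes important.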

  13. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    Directory of Open Access Journals (Sweden)

    Sungdae Sim

    2012-12-01

    Full Text Available Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  14. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    Science.gov (United States)

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
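The ground-height estimation step in the two records above can be illustrated with a simple histogram-mode heuristic: take the dominant height bin as the ground level and widen it by a tolerance. Bin size, tolerance and the sample heights below are illustrative assumptions, and the paper's Gibbs-Markov random field refinement is omitted:

```python
from collections import Counter

def estimate_ground_range(heights, bin_size=0.2, tol_bins=1):
    """Assume the most populated height bin is the ground; widen by tol_bins."""
    bins = Counter(int(h // bin_size) for h in heights)
    mode_bin, _ = bins.most_common(1)[0]
    lo = (mode_bin - tol_bins) * bin_size
    hi = (mode_bin + 1 + tol_bins) * bin_size
    return lo, hi

# Hypothetical point heights (m): mostly flat ground plus a tree and a wall top.
heights = [0.0, 0.05, 0.1, 0.12, 0.08, 1.5, 1.6, 2.3]
lo, hi = estimate_ground_range(heights)
ground = [h for h in heights if lo <= h < hi]
non_ground = [h for h in heights if not (lo <= h < hi)]
print(len(ground), len(non_ground))   # → 5 3
```

The non-ground points are then the candidates for the 3D object boundary estimation described in the abstract.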

  15. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  16. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    ZHANG LiQiang; TAN YuMin; KANG ZhiZhong; RUI XiaoPing; ZHAO YuanYuan; LIU Liu

    2009-01-01

    Geological body structure is the product of the geological evolution in the time dimension, which is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data using the 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that the current methods have limitations on how to express underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implemented the unification expression and storage of geological data and geometric models. In addition, it is a method for visualizing large-scaled geological datasets through building multi-resolution geological models rapidly, which can meet the demand of the operation, analysis, and interpretation of 3D geological objects. It proves that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects and it is a good way to solve the problem of integration and share of geological spatial data.

  17. Object Extraction from Architecture Scenes through 3D Local Scanned Data Analysis

    Directory of Open Access Journals (Sweden)

    NING, X.

    2012-08-01

    Full Text Available Terrestrial laser scanning becomes a standard way for acquiring 3D data of complex outdoor objects. The processing of huge numbers of points and the recognition of different objects inside become a new challenge, especially in the case where objects are included. In this paper, a new approach is proposed to classify objects through an analysis on shape information of the point cloud data. The scanned scene is constructed using k Nearest Neighboring (k-NN), and then a similarity measurement between points is defined to cluster points with similar primitive shapes. Moreover, we introduce a combined geometrical criterion to refine the over-segmented results. To achieve more detailed information, a residual based segmentation is adopted to refine the segmentation of architectural objects into more parts with different shape properties. Experimental results demonstrate that this approach can be used as a robust way to extract different objects in the scenes.

  18. Breaking the Crowther limit: Combining depth-sectioning and tilt tomography for high-resolution, wide-field 3D reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Hovden, Robert, E-mail: rmh244@cornell.edu [School of Applied and Engineering Physics and Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853 (United States); Ercius, Peter [National Center for Electron Microscopy, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Jiang, Yi [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Wang, Deli; Yu, Yingchao; Abruña, Héctor D. [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, NY 14853 (United States); Elser, Veit [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Muller, David A. [School of Applied and Engineering Physics and Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853 (United States)

    2014-05-01

    To date, high-resolution (<1 nm) imaging of extended objects in three-dimensions (3D) has not been possible. A restriction known as the Crowther criterion forces a tradeoff between object size and resolution for 3D reconstructions by tomography. Further, the sub-Angstrom resolution of aberration-corrected electron microscopes is accompanied by a greatly diminished depth of field, causing regions of larger specimens (>6 nm) to appear blurred or missing. Here we demonstrate a three-dimensional imaging method that overcomes both these limits by combining through-focal depth sectioning and traditional tilt-series tomography to reconstruct extended objects, with high resolution, in all three dimensions. The large convergence angle in aberration corrected instruments now becomes a benefit and not a hindrance to higher quality reconstructions. A through-focal reconstruction over a 390 nm 3D carbon support containing over 100 dealloyed and nanoporous PtCu catalyst particles revealed with sub-nanometer detail the extensive and connected interior pore structure that is created by the dealloying instability. - Highlights: • Develop tomography technique for high-resolution and large field of view. • We combine depth sectioning with traditional tilt tomography. • Through-focal tomography reduces tilts and improves resolution. • Through-focal tomography overcomes the fundamental Crowther limit. • Aberration correction becomes a benefit and not a hindrance for tomography.

  19. A joint multi-view plus depth image coding scheme based on 3D-warping

    DEFF Research Database (Denmark)

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone

    2011-01-01

    Free viewpoint video applications and autostereoscopic displays require the transmission of multiple views of a scene together with depth maps. Current compression and transmission solutions just handle these two data streams as separate entities. However, depth maps contain key information on th...

  20. Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras

    Directory of Open Access Journals (Sweden)

    M. Martínez-Zarzuela

    2014-02-01

    Full Text Available This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which becomes also an issue for the distributed system TCP/IP communications. Since traffic volume is too high, 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress color maps. Information from different sources is collected into a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to merge the skeletons detected locally by each Kinect conveniently, so that monitoring of people is robust to self and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.

  1. Distributed System for 3D Remote Monitoring Using KINECT Depth Cameras

    Directory of Open Access Journals (Sweden)

    M. Martinez-Zarzuela

    2014-01-01

    Full Text Available This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which becomes also an issue for the distributed system TCP/IP communications. Since traffic volume is too high, 3D data has to be compressed before it can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress color maps. Information from different sources is collected into a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the skeletons detected locally by each Kinect, so that monitoring of people is robust to self and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.
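The "self-coding of the Kinect data into RGB images" in the two records above amounts to packing each 16-bit depth sample into 8-bit colour channels before handing the frame to a standard codec. A minimal lossless sketch follows; the actual channel layout the system uses is not specified in the abstracts, so the high-byte/low-byte split here is an assumption:

```python
def depth_to_rgb(depth_mm):
    """Pack a 16-bit depth value (mm) into two 8-bit channels (assumed: R = high byte)."""
    return (depth_mm >> 8) & 0xFF, depth_mm & 0xFF, 0

def rgb_to_depth(r, g, _b):
    """Invert the packing above."""
    return (r << 8) | g

frame = [0, 512, 4095, 65535]            # raw depth samples in millimetres
encoded = [depth_to_rgb(d) for d in frame]
decoded = [rgb_to_depth(*px) for px in encoded]
print(decoded == frame)                  # → True
```

Note that a lossy multimedia codec will perturb the channel values, so real systems typically choose an encoding in which small channel errors cause only small depth errors.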

  2. 3D Gray Radiative Properties of Accretion Shocks in Young Stellar Objects

    Science.gov (United States)

    Ibgui, L.; Orlando, S.; Stehlé, C.; Chièze, J.-P.; Hubeny, I.; Lanz, T.; de Sá, L.; Matsakos, T.; González, M.; Bonito, R.

    2014-01-01

    We address the problem of the contribution of radiation to the structure and dynamics of accretion shocks on Young Stellar Objects. Solving the 3D RTE (radiative transfer equation) under our "gray LTE approach", i.e., using appropriate mean opacities computed in local thermodynamic equilibrium, we post-process the 3D MHD (magnetohydrodynamic) structure of an accretion stream impacting the stellar chromosphere. We find a radiation flux ten orders of magnitude larger than the accreting energy rate, which is due to a large overestimation of the radiative cooling. A gray LTE radiative transfer approximation is therefore not consistent with the given MHD structure of the shock. Further investigations are required to clarify the role of radiation, by relaxing both the gray and LTE approximations in RHD (radiation hydrodynamics) simulations. Post-processing the obtained structures through the resolution of the non-LTE monochromatic RTE will provide reference radiation quantities against which RHD approximate solutions will be compared.

  3. Localization of significant 3D objects in 2D images for generic vision tasks

    Science.gov (United States)

    Mokhtari, Marielle; Bergevin, Robert

    1995-10-01

    Computer vision experiments are not very often linked to practical applications but rather deal with typical laboratory experiments under controlled conditions. For instance, most object recognition experiments are based on specific models used under limiting constraints. Our work proposes a general framework for rapidly locating significant 3D objects in 2D static images of medium to high complexity, as a prerequisite step to recognition and interpretation when no a priori knowledge of the contents of the scene is assumed. In this paper, a definition of generic objects is proposed, covering the structures that are implied in the image. Under this framework, it must be possible to locate generic objects and assign a significance figure to each one from any image fed to the system. The most significant structure in a given image becomes the focus of interest of the system determining subsequent tasks (like subsequent robot moves, image acquisitions and processing). A survey of existing strategies for locating 3D objects in 2D images is first presented and our approach is defined relative to these strategies. Perceptual grouping paradigms leading to the structural organization of the components of an image are at the core of our approach.

  4. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    Science.gov (United States)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.

  5. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account of the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions. PMID:21724567

  6. Binocular and monocular depth cues in online feedback control of 3D pointing movement.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2011-06-30

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.

  7. Study of improved ray tracing parallel algorithm for CGH of 3D objects on GPU

    Science.gov (United States)

    Cong, Bin; Jiang, Xiaoyu; Yao, Jun; Zhao, Kai

    2014-11-01

    An improved parallel algorithm for computing holograms of three-dimensional objects is presented. Based on the physical characteristics and mathematical properties of the original ray tracing algorithm for computer-generated holograms (CGH), and using transform approximation and numerical analysis, we extract the parts of the ray tracing algorithm that can be parallelized and implement them on a graphics processing unit (GPU). Through a suitable parallel numerical procedure, the two-dimensional slices of the three-dimensional object are processed in parallel with CUDA. The experiments yield an effective method for handling the occlusion problem in ray tracing, as well as for generating holograms of 3D objects with the additive property. Our results indicate that the improved algorithm effectively shortens the computing time. Depending on the sizes of the spatial object points and hologram pixels, the speed is increased by a factor of 20 to 70 compared with the original ray tracing algorithm.
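The per-point wave superposition that the authors map to GPU threads can be sketched in plain NumPy, with vectorization standing in for CUDA parallelism. This is a hypothetical illustration of point-source CGH accumulation, not the paper's code; all parameter values (wavelength, pixel count, pitch, object points) are assumed.

```python
import numpy as np

wavelength = 532e-9            # assumed green laser, metres
k = 2 * np.pi / wavelength
N = 128                        # hologram is N x N pixels
pitch = 8e-6                   # assumed pixel pitch, metres

# hologram-plane pixel coordinates
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# a few toy object points (x, y, z), unit amplitude
points = np.array([[0.0, 0.0, 0.05],
                   [2e-4, -1e-4, 0.06]])

field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:                      # each point contributes one spherical wave
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
    field += np.exp(1j * k * r) / r            # superposition over all pixels at once

hologram = np.angle(field)                     # phase-only CGH
```

The inner per-pixel sums are what a GPU implementation distributes over threads; the additive property mentioned in the abstract corresponds to accumulating `field` slice by slice.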

  8. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated, accurate capture of the spatial motion of an object is needed in a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, along with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  9. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    Science.gov (United States)

    2014-10-28

    Participants viewed stereoscopic stimuli through NVIDIA GeForce 3D Vision active shutter glasses on a Samsung SyncMaster 2233RZ, a 22-inch diagonal 120 Hz LCD display with a resolution of 1680 x 1050.

  10. 3D Object Recognition of a Robotic Navigation Aid for the Visually Impaired.

    Science.gov (United States)

    Ye, Cang; Qian, Xiangfei

    2017-09-01

    This paper presents a 3D object recognition method and its implementation on a Robotic Navigation Aid (RNA) to allow real-time detection of indoor structural objects for the navigation of a blind person. The method segments a point cloud into numerous planar patches and extracts their Inter-Plane Relationships (IPRs). Based on the existing IPRs of the object models, the method defines 6 High Level Features (HLFs) and determines the HLFs for each patch. A Gaussian-Mixture-Model-based plane classifier is then devised to assign each planar patch to a particular object model. Finally, a recursive plane clustering procedure is used to cluster the classified planes into the model objects. As the proposed method uses geometric context to detect an object, it is robust to changes in the object's visual appearance. As a result, it is ideal for detecting structural objects (e.g., stairways, doorways, etc.). In addition, it has high scalability and parallelism. The method is also capable of detecting some indoor non-structural objects. Experimental results demonstrate that the proposed method has a high success rate in object recognition.
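The planar-patch segmentation the method builds on can be illustrated with a minimal RANSAC plane fit. This sketch is a generic stand-in (synthetic data, illustrative thresholds), not the authors' segmentation or GMM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: a noisy plane z = 0 plus some uniform outliers.
plane_pts = np.column_stack([rng.uniform(-1, 1, 200),
                             rng.uniform(-1, 1, 200),
                             rng.normal(0, 0.005, 200)])
outliers = rng.uniform(-1, 1, (40, 3))
cloud = np.vstack([plane_pts, outliers])

def ransac_plane(pts, iters=200, tol=0.02):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    best = (None, None, np.zeros(len(pts), bool))
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:           # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

normal, d, inliers = ransac_plane(cloud)
```

Repeating the fit on the points outside the inlier mask would peel off the remaining planar patches, which is the kind of patch set the IPR features are then computed over.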

  11. Superquadric Based Hierarchical Reconstruction for Virtualizing Free Form Objects from 3D Data

    Institute of Scientific and Technical Information of China (English)

    LIU Weibin; YUAN Baozong

    2001-01-01

    The superquadric description is used in modeling the virtual objects in AVR (from Actual Reality to Virtual Reality). However, due to their intrinsic properties, the superquadric and its deformation extensions (DSQ) are not flexible enough to describe precisely complex objects with asymmetric and free-form surfaces. To solve this problem, a hierarchical reconstruction approach in AVR for virtualizing objects with superquadric-based models from 3D data is developed. First, an initial approximation is produced by a superquadric fit to the 3D data. Then, the crude superquadric fit is refined by fitting the residue (distance map) with global and local Direct Manipulation of Free-Form Deformation (DMFFD). The key elements of the hierarchical method, including the superquadric fit to 3D data, the mathematical details and recursive-fitting algorithm for DMFFD, the computation of distance maps, and adaptive refinement and decimation of the polygon mesh under DMFFD, are presented. An implementation example of hierarchical reconstruction is given. The proposed approach is shown to be competent and efficient for virtualizing complex objects into a virtual environment.
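The superquadric fit mentioned above minimizes the deviation of the data from the standard superquadric inside-outside function, which can be written down directly; the sizes `a` and shape exponents `e1`, `e2` below are illustrative values, not fitted parameters.

```python
import numpy as np

def superquadric_F(p, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Inside-outside function: F = 1 on the surface, < 1 inside, > 1 outside."""
    x, y, z = np.abs(np.asarray(p, dtype=float)) / np.asarray(a, dtype=float)
    return (x**(2 / e2) + y**(2 / e2))**(e2 / e1) + z**(2 / e1)

# with e1 = e2 = 1 the surface is the unit sphere
on_surface = superquadric_F((1.0, 0.0, 0.0))
inside = superquadric_F((0.2, 0.2, 0.2))
```

A least-squares fit of `a`, `e1`, `e2` (plus pose) drives `superquadric_F` toward 1 over all data points, which is the "initial approximation" step before the DMFFD refinement.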

  12. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    Science.gov (United States)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need of any additional paintings. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.

  13. Recognition of 3-D objects based on Markov random field models

    Institute of Scientific and Technical Information of China (English)

    HUANG Ying; DING Xiao-qing; WANG Sheng-jin

    2006-01-01

    The recognition of 3-D objects is quite a difficult task for computer vision systems. This paper presents a new object framework, which utilizes densely sampled grids with different resolutions to represent the local information of the input image. A Markov random field model is then created to model the geometric distribution of the object key nodes. Flexible matching, which aims to find the accurate correspondence map between the key points of two images, is performed by combining the local similarities and the geometric relations using the highest-confidence-first method. Afterwards, a global similarity is calculated for object recognition. Experimental results on the Coil-100 object database, which consists of 7,200 images of 100 objects, are presented. When the number of templates per object varies from 4, 8, 18 to 36, with the remaining images composing the test sets, the object recognition rates are 95.75%, 99.30%, 100.0% and 100.0%, respectively. This recognition performance is much better than those of the other cited references, which indicates that our approach is well suited for appearance-based object recognition.

  14. Stereoscopic Depth Contrast in a 3D Müller-Lyer Configuration: Evidence for Local Normalization.

    Science.gov (United States)

    Harada, Shinya; Mitsudo, Hiroyuki

    2017-01-01

    Depth contrast is a stereoscopic visual phenomenon in which the slant of an element is affected by that of adjacent elements. Normalization has been proposed to be a possible cause of depth contrast, but it is still unclear how depth contrast involves normalization. To address this issue, we devised stereograms consisting of a vertical test line accompanied by several inducer lines, like a three-dimensional variation of the well-known Müller-Lyer configuration. The inducer lines had horizontal binocular disparities that defined a stereoscopic slant about a horizontal axis with respect to the endpoints of the test line. The observer's task was to adjust the slant of the test line about a horizontal axis until it appeared subjectively vertical. The results of two psychophysical experiments found that slant settings were affected by the slant of local inducers, but not by the overall slant of the whole stimulus. These results suggest that, at least for line patterns, the stereo system normalizes depth locally.

  15. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    Science.gov (United States)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning. Laser scanning and photogrammetry are the two main methods used. Laser scanning requires a video camera and a laser source, while photogrammetry requires a digital still camera with high resolution. In some 3D modeling tasks, the two methods are integrated to obtain satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated low-cost device. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, offer more than 10-megapixel still photo recording and full 1080p HD movie recording, so an integrated scanning system can be designed around such a camera. A square plate glued with coded marks is used to hold the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid freely on the plate. In the photogrammetry module, the coded marks on the plate establish a world coordinate system, serve as a control network to calibrate the camera, and determine the planes of the two rulers. The feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers serve as reference planes to determine the position of the laser. The laser scan yields a dense point cloud that can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusion of the feature points, rough volume and dense point cloud. The design

  16. Polarizability of 2D and 3D conducting objects using the method of moments

    CERN Document Server

    Shahpari, Morteza; Lewis, Andrew

    2014-01-01

    Fundamental antenna limits of the gain-bandwidth product are derived from polarizability calculations. This electrostatic technique has significant value in many antenna evaluations. Polarizability is not available in closed form for most antenna shapes and no commercial electromagnetic packages have this facility. Numerical computation of the polarizability for arbitrary conducting bodies was undertaken using an unstructured triangular mesh over the surface of 2D and 3D objects. Numerical results compare favourably with analytical solutions and can be implemented efficiently for large structures of arbitrary shape.

  17. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    Science.gov (United States)

    Zhao, Xianling; Liu, Jiansheng; Zhang, Huayu; Wu, Yingchun

    2015-12-01

    The visible light radiated by some high-temperature objects (below 1200 °C) lies almost entirely in the red and infrared bands. It interferes with structured light projected on a forging surface when phase measurement profilometry (PMP) is used to measure object shapes. To obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in the present work. In addition, a method for filtering the deformed pattern images is presented for correcting the unwrapped phase. Blue sinusoidal phase-shifting fringe patterns are projected on the surface by a digital light processing (DLP) projector, and the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components in software. The B color images, filtered by a low-pass filter, are used to calculate the fringe order. The 3D shape of a high-temperature object is then obtained from the unwrapped phase and the calibration parameter matrices of the DLP projector and 3-CCD camera. The experimental results show that the unwrapped phase is completely corrected by the filtering method, which removes the high-frequency noise from the first harmonic of the B color images. The measurement system completes a measurement in a few seconds with a relative error of less than 1:1000.
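The phase computation behind PMP can be sketched with the classic four-step phase-shifting formula, where the four fringe images are I_k = A + B·cos(φ + k·π/2); the synthetic fringe cross-section and all names below are illustrative, not the paper's data.

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 512)       # one fringe cross-section
phi_true = 1.5 * np.sin(x / 2) + x       # carrier plus surface-induced phase
A, B = 0.5, 0.4                          # background intensity and modulation
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# classic arctangent formula: I3 - I1 = 2B sin(phi), I0 - I2 = 2B cos(phi)
phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])   # wrapped into (-pi, pi]
phi_unwrapped = np.unwrap(phi_wrapped)               # remove the 2*pi jumps
```

In the paper's setup, the low-pass-filtered B channel plays the role of the clean fringe images `I` here, and the fringe order resolves the 2π ambiguity that `np.unwrap` handles along this 1-D profile.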

  18. 3D Skeleton model derived from Kinect Depth Sensor Camera and its application to walking style quality evaluations

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-07-01

    Full Text Available Feature extraction for gait recognition has been studied widely. Approaches to this task fall into two categories: model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters by modeling or tracking body components such as limbs, legs, arms and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes, and their advantage is a low computational cost compared to model-based approaches; however, they are usually not robust to viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors are expensive and affordable only to big animation studios. Fortunately, the Kinect camera, equipped with a depth sensor, is now available on the market at a very low price compared to any mocap device. Its accuracy is not as good as that of the expensive devices, but with some preprocessing we can remove the jitter and noise in the 3D skeleton points. Our proposed method is a form of model-based feature extraction, and we call it the 3D skeleton model. Extracting gait with a 3D skeleton model is a new approach, considering that all previous models use a 2D skeleton; its advantage is obtaining accurate 3D coordinates for each skeleton point rather than only 2D points. We use Kinect to get the depth data and Ipisoft mocap software to extract the 3D skeleton model from the Kinect video. The experimental results show 86.36% correctly classified instances using SVM.
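One kind of gait feature extracted from a 3D skeleton is the angle at a joint, computed from three 3D joint positions; this sketch is a generic illustration, and the hip/knee/ankle coordinates are toy values, not Kinect output.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

# toy hip/knee/ankle positions in metres (mid-stride leg)
hip, knee, ankle = (0.0, 1.0, 0.0), (0.05, 0.55, 0.0), (0.02, 0.1, 0.1)
knee_angle = joint_angle(hip, knee, ankle)
```

Tracking such angles over a gait cycle yields the per-frame feature vectors that a classifier like SVM can then be trained on.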

  19. Depth perception and defensive system activation in a 3-D environment

    Directory of Open Access Journals (Sweden)

    Emmanuelle eCombe

    2011-08-01

    Full Text Available To survive, animals must be able to react appropriately (in temporal and behavioral terms) when facing a threat. One of the essential parameters considered by the defensive system is the distance of the threat, the defensive distance. In this study, we investigate the visual depth cues that could serve as an alarm cue for the activation of the defensive system. For this purpose, we performed an active-escape pain task in a virtual three-dimensional environment. In two experiments, we manipulated the nature and consistency of different depth cues: vergence, linear perspective, and angular size. By measuring skin conductance responses, we characterized the situations that activated the defensive system. We show that the angular size of the predator was sufficient information to trigger responses from the defensive system, but we also demonstrate that vergence, which can delay the emotional response in inconsistent situations, is a highly reliable cue for the activation of the defensive system.
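The angular-size cue the study isolates follows directly from the geometry of the approaching object: the visual angle grows as the object nears. The object size and distances below are illustrative values only.

```python
import math

def angular_size(physical_size, distance):
    """Visual angle in radians subtended by an object of a given size."""
    return 2 * math.atan(physical_size / (2 * distance))

near = angular_size(0.5, 1.0)   # a 0.5 m object at 1 m...
far = angular_size(0.5, 4.0)    # ...and the same object four times farther away
```

The rate of growth of this angle (looming) is what makes angular size usable as an alarm signal even without binocular depth information.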

  20. Lapse-time-dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Science.gov (United States)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-10-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: first, we evaluate the contribution of surface- and body-wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time-dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Second, we compare the lapse-time behaviour in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.
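The paper's modelling claim, that the apparent velocity change is a lapse-time-dependent linear combination of body- and surface-wave sensitivity, can be written as dv_apparent(t) = α(t)·dv_body + (1 − α(t))·dv_surface. The decaying body-wave fraction and layer responses below are toy values, not the paper's kernels.

```python
import numpy as np

t = np.linspace(5, 60, 100)        # lapse time in the coda, seconds
alpha = np.exp(-t / 30)            # toy body-wave fraction, decaying with lapse time
dv_body, dv_surf = -0.010, -0.002  # assumed response of each wave type to the perturbed layer

dv_apparent = alpha * dv_body + (1 - alpha) * dv_surf
```

Measuring how `dv_apparent` evolves with lapse time is what lets the ratio α(t) be inverted for, and hence lets depth sensitivity kernels be assembled.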

  1. Lapse-time dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    Science.gov (United States)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-07-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: firstly, we evaluate the contribution of surface and body wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Secondly, we compare the lapse-time behavior in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.

  2. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    Science.gov (United States)

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene, which can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited depth of field (DOF). In this Letter, we experimentally demonstrate 3D integral-imaging endoscopy with tunable DOF using a single large-aperture focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in a synthetic-aperture integral-imaging 3D endoscope. In our experiments, the image plane in the integral-imaging pickup process can be tuned continuously from 18 to 38 mm using the large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report of synthetic-aperture integral-imaging 3D endoscopy with a large-aperture LC lens that provides high-spatial-resolution 3D imaging with an extended DOF.

  3. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
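The distance-field construction can be illustrated with a brute-force voxel sketch, NumPy broadcasting standing in for the paper's GPU parallelism and polygon culling; the "surface" here is a synthetic point sample set rather than a polyhedral model.

```python
import numpy as np

rng = np.random.default_rng(1)
surface_pts = rng.uniform(0.3, 0.7, (500, 3))    # toy surface sample points

n = 12                                           # n^3 voxel grid over the unit cube
cs = (np.arange(n) + 0.5) / n                    # voxel-centre coordinates
gx, gy, gz = np.meshgrid(cs, cs, cs, indexing="ij")
voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# distance from every voxel centre to every surface sample, then the minimum
d = np.linalg.norm(voxels[:, None, :] - surface_pts[None, :, :], axis=-1)
dist_field = d.min(axis=1).reshape(n, n, n)
```

The paper's GPU version replaces the all-pairs computation with per-voxel-cluster polygon culling so that shared memory can be exploited; the resulting field is what the modified ray casting samples for thickness/clearance.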

  4. Recognizing objects in 3D point clouds with multi-scale local features.

    Science.gov (United States)

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-12-15

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms.
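The matching phase, where scene features are matched against all model features to generate hypotheses, can be sketched with nearest-neighbour matching plus a Lowe-style ratio test; the random descriptors below are stand-ins for the paper's multi-scale local surface features.

```python
import numpy as np

rng = np.random.default_rng(2)
model_desc = rng.normal(size=(100, 32))                               # model feature descriptors
scene_desc = model_desc[:20] + rng.normal(scale=0.01, size=(20, 32))  # noisy scene observations

# all-pairs descriptor distances, scene x model
d = np.linalg.norm(scene_desc[:, None, :] - model_desc[None, :, :], axis=-1)
nearest = d.argmin(axis=1)                          # hypothesised correspondences
d_sorted = np.sort(d, axis=1)
ratio_ok = d_sorted[:, 0] < 0.8 * d_sorted[:, 1]    # keep only unambiguous matches
matches = [(i, int(j)) for i, (j, ok) in enumerate(zip(nearest, ratio_ok)) if ok]
```

Each surviving match is a model-plus-pose hypothesis candidate; the verification stage then accepts or rejects the accumulated hypotheses, which is what gives the method its robustness to clutter.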

  5. 3-D Parallel, Object-Oriented, Hybrid, PIC Code for Ion Ring Studies

    Science.gov (United States)

    Omelchenko, Y. A.

    1997-08-01

    The 3-D hybrid, Particle-in-Cell (PIC) code, FLAME has been developed to study low-frequency, large orbit plasmas in realistic cylindrical configurations. FLAME assumes plasma quasineutrality and solves the Maxwell equations with displacement current neglected. The electron component is modeled as a massless fluid and all ion components are represented by discrete macro-particles. The poloidal discretization is done by a finite-difference staggered grid method. FFT is applied in the azimuthal direction. A substantial reduction of CPU time is achieved by enabling separate time advances of background and beam particle species in the time-averaged fields. The FLAME structure follows the guidelines of object-oriented programming. Its C++ class hierarchy comprises the Utility, Geometry, Particle, Grid and Distributed base class packages. The latter encapsulates implementation of concurrent grid and particle algorithms. The particle and grid data interprocessor communications are unified and designed to be independent of both the underlying message-passing library and the actual poloidal domain decomposition technique (FFT's are local). Load balancing concerns are addressed by using adaptive domain partitions to account for nonuniform spatial distributions of particle objects. The results of 2-D and 3-D FLAME simulations in support of the FIREX program at Cornell are presented.

  6. Enhanced 3D prestack depth imaging of broadband data from the South China Sea: a case study

    Science.gov (United States)

    Zhang, Hao; Xu, Jincheng; Li, Jinbo

    2016-08-01

    We present a case study of prestack depth imaging for data from the South China Sea using an enhanced work flow with cutting edge technologies. In the survey area, the presence of complex geologies such as carbonate pinnacles and gas pockets creates challenges for processing and imaging: the complex geometry of carbonates exhibits 3D effect for wave propagation; deriving velocity inside carbonates and gas pockets is difficult and laborious; and localised strong attenuation effect from gas pockets may lead to absorption and dispersion problems. In the course of developing the enhanced work flow to tackle these issues, the following processing steps have the most significant impact on improving the imaging quality: (1) 3D ghost wavefield attenuation, in particular to remove the ghost energy associated with complex structures; (2) 3D surface-related multiple elimination (SRME) to remove multiples, in particular multiples related to complex carbonate structures; (3) full waveform inversion (FWI) and tomography-based velocity model building, to derive a geologically plausible velocity model for imaging; (4) Q-tomography to estimate the Q model which describes the intrinsic attenuation of the subsurface media; (5) de-absorption prestack depth migration (Q-PSDM) to compensate the earth absorption and dispersion effect during imaging especially for the area below gas pockets. The case study with the data from the South China Sea shows that the enhanced work flow consisting of cutting edge technologies is effective when the complex geologies are present.

  7. Fully integrated system-on-chip for pixel-based 3D depth and scene mapping

    Science.gov (United States)

    Popp, Martin; De Coi, Beat; Thalmann, Markus; Gancarz, Radoslav; Ferrat, Pascal; Dürmüller, Martin; Britt, Florian; Annese, Marco; Ledergerber, Markus; Catregn, Gion-Pol

    2012-03-01

    We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS. This integration makes it possible to achieve a Quantum Efficiency (QE) of >80% in the full wavelength band from 520nm up to 900nm as well as very high timing precision in the sub-ns range which is needed for exact detection of the phase delay. The SoC features 8x8 pixels and includes all necessary sub-components such as ToF pixel array, voltage generation and regulation, non-volatile memory for configuration, LED driver for active illumination, digital SPI interface for easy communication, column based 12bit ADC converters, PLL and digital data processing with temporary data storage. The system can be operated at up to 100 frames per second.
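The ToF principle the chip implements, distance from the phase delay of a reflected modulated pulse train, reduces to d = c·Δφ/(4π·f_mod); the modulation frequency below is an assumed example value, not taken from the epc600 datasheet.

```python
import math

C = 299_792_458.0          # speed of light, m/s
f_mod = 20e6               # assumed modulation frequency, Hz

def tof_depth(delta_phi):
    """Distance in metres for a measured phase shift in radians."""
    return C * delta_phi / (4 * math.pi * f_mod)

# the phase wraps at 2*pi, so the unambiguous range is c / (2 * f_mod)
max_range = tof_depth(2 * math.pi)
```

The sub-ns timing precision the process provides is what makes Δφ measurable accurately enough for centimetre-level depth at such modulation frequencies.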

  8. ROOT OO model to render multi-level 3-D geometrical objects via an OpenGL

    Science.gov (United States)

    Brun, Rene; Fine, Valeri; Rademakers, Fons

    2001-08-01

    This paper presents a set of low-level C++ classes to render 3D objects within ROOT-based frameworks. This allows the development of a set of viewers with different properties, from which the user can choose to render the same 3D objects.

  9. Active learning in the lecture theatre using 3D printed objects [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    David P. Smith

    2016-06-01

    Full Text Available The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  10. Laser Scanning for 3D Object Characterization: Infrastructure for Exploration and Analysis of Vegetation Signatures

    Science.gov (United States)

    Koenig, K.; Höfle, B.

    2012-04-01

    Mapping and characterization of the three-dimensional nature of vegetation is gaining importance. Deeper insight is required for, e.g., forest management, biodiversity assessment, habitat analysis, precision agriculture, renewable energy production and the analysis of interactions between biosphere and atmosphere. However, the potential of 3D vegetation characterization has not yet been fully exploited, and new technologies are needed. Laser scanning has evolved into the state-of-the-art technology for highly accurate 3D data acquisition, and by now several studies have indicated the high value of 3D vegetation description using laser data. Laser sensors provide a detailed geometric representation of the scanned objects (geometric information) as well as a full profile of the laser energy scattered back to the sensor (radiometric information). To exploit the full potential of these datasets, profound knowledge of laser scanning technology (for data acquisition), of geoinformation technology (for data analysis) and of the object of interest, e.g. vegetation (for data interpretation), has to be combined. A signature database is a collection of signatures of reference vegetation objects acquired under known conditions and sensor parameters, and can be used to improve information extraction from unclassified vegetation datasets. Different vegetation elements (leaves, branches, etc.) at different heights above ground and with different geometric composition contribute to the overall description (i.e. signature) of the scanned object. The developed tools allow analyzing tree objects according to single features (e.g. echo width and signal amplitude) and to any relation of features and derived statistical values (e.g. ratios of laser point attributes). For example, a single backscatter cross-section value does not allow tree species determination, whereas the average echo width per tree segment can give good estimates. Statistical values and/or distributions (e.g. Gaussian
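As a minimal sketch of the per-segment feature derivation mentioned above (the abstract notes that the average echo width per tree segment gives better species estimates than single values), the following groups hypothetical laser-point attributes by segment. The attribute names `segment_id` and `echo_width` are illustrative, not a real LAS/full-waveform schema:

```python
from collections import defaultdict

def mean_echo_width_per_segment(points):
    """Average the echo-width attribute of laser points per tree segment.

    `points` is a list of dicts with hypothetical keys 'segment_id' and
    'echo_width'; real full-waveform attributes vary by sensor and format.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for p in points:
        sums[p["segment_id"]] += p["echo_width"]
        counts[p["segment_id"]] += 1
    return {sid: sums[sid] / counts[sid] for sid in sums}
```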

  11. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Directory of Open Access Journals (Sweden)

    Feng Xu

    Full Text Available Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies.

  12. Depth-resolved 3D visualization of coronary microvasculature with optical microangiography

    Science.gov (United States)

    Qin, Wan; Roberts, Meredith A.; Qi, Xiaoli; Murry, Charles E.; Zheng, Ying; Wang, Ruikang K.

    2016-11-01

    In this study, we propose a novel implementation of optical coherence tomography-based angiography combined with ex vivo perfusion of fixed hearts to visualize coronary microvascular structure and function. The extracorporeal perfusion of Intralipid solution allows depth-resolved angiographic imaging, control of perfusion pressure, and high-resolution optical microangiography. The imaging technique offers new opportunities for microcirculation research in the heart, which has been challenging due to motion artifacts and the lack of independent control of pressure and flow. With the ability to precisely quantify structural and functional features, this imaging platform has broad potential for the study of the pathophysiology of microvasculature in the heart as well as other organs.

  13. A 3D interactive multi-object segmentation tool using local robust statistics driven active contours.

    Science.gov (United States)

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-08-01

    Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for the algorithm to be utilized in their daily work. Furthermore, the software is better open-sourced, so that it can be used and validated not only by the authors but by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and a conformal-area-driven multiple-active-contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open-source, graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available to end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and these features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously, with their interactions motivated by the principles of action and reaction; this not only guarantees mutual exclusiveness among the contours, but also no longer relies on the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we provide
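The "local robust statistics" learned from user seeds can be illustrated with a median/MAD pair, which resists outliers better than mean/standard deviation. This is only a simplified stand-in for the paper's actual feature set:

```python
import statistics

def robust_stats(intensities):
    """Median and median absolute deviation (MAD) of seed-region intensities.

    A minimal stand-in for 'local robust statistics': the median and MAD are
    far less sensitive to outlier intensities than the mean and stddev.
    """
    med = statistics.median(intensities)
    mad = statistics.median(abs(x - med) for x in intensities)
    return med, mad
```

For example, one extreme intensity in a seed region barely moves the estimate, whereas it would dominate a mean/variance pair.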

  14. Electromagnetic 3D subsurface imaging with source sparsity for a synthetic object

    CERN Document Server

    Pursiainen, Sampsa

    2016-01-01

    This paper concerns electromagnetic 3D subsurface imaging in connection with sparsity of signal sources. We explored an imaging approach that can be implemented in situations that allow obtaining a large amount of data over a surface or a set of orbits but at the same time require sparsity of the signal sources. Characteristic of such a tomography scenario is that it requires the inversion technique to be genuinely three-dimensional: for example, slicing is not possible due to the low number of sources. Here, we primarily focused on astrophysical subsurface exploration purposes. As an example target for our numerical experiments we used a synthetic small planetary object containing three inclusions, e.g. voids, of the size of the wavelength. A tetrahedral arrangement of source positions was used, it being the simplest symmetric point configuration in 3D. Our results suggest that somewhat reliable inversion results can be produced within the present a priori assumptions, if the data can be recorded at a spe...
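The tetrahedral source arrangement mentioned above can be generated directly: the four vertices of a regular tetrahedron inscribed in a sphere have equal pairwise angular separations (dot product -1/3 between unit vectors). A small sketch, not taken from the paper's code:

```python
import math

def tetrahedron_vertices(radius=1.0):
    """Four points on a sphere with equal pairwise angles (regular
    tetrahedron) -- the simplest symmetric point configuration in 3D."""
    s = 1.0 / math.sqrt(3.0)
    verts = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
    return [(radius * x, radius * y, radius * z) for x, y, z in verts]
```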

  15. 3D Gray Radiative Properties of Accretion Shocks in Young Stellar Objects

    Directory of Open Access Journals (Sweden)

    Ibgui L.

    2014-01-01

    Full Text Available We address the problem of the contribution of radiation to the structure and dynamics of accretion shocks on Young Stellar Objects. Solving the 3D RTE (radiative transfer equation) under our “gray LTE approach”, i.e., using appropriate mean opacities computed in local thermodynamic equilibrium, we post-process the 3D MHD (magnetohydrodynamic) structure of an accretion stream impacting the stellar chromosphere. We find a radiation flux ten orders of magnitude larger than the accretion energy rate, which is due to a large overestimation of the radiative cooling. A gray LTE radiative transfer approximation is therefore not consistent with the given MHD structure of the shock. Further investigations are required to clarify the role of radiation, by relaxing both the gray and LTE approximations in RHD (radiation hydrodynamics) simulations. Post-processing the obtained structures through the resolution of the non-LTE monochromatic RTE will provide reference radiation quantities against which approximate RHD solutions can be compared.

  16. 3D OBJECT COORDINATES EXTRACTION BY RADARGRAMMETRY AND MULTI STEP IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    A. Eftekhari

    2013-09-01

    Full Text Available Nowadays, with high-resolution SAR imaging systems such as Radarsat-2, TerraSAR-X and COSMO-SkyMed, three-dimensional terrain data extraction from SAR images is growing. InSAR and radargrammetry are the two most common approaches for recovering 3D object coordinates from SAR images. Research has shown that extraction of terrain elevation data using the satellite repeat-pass InSAR technique is problematic due to atmospheric factors and the lack of coherence between the images in areas with dense vegetation cover, so the radargrammetry technique can be effective. Height derivation by radargrammetry generally consists of two stages: image matching and space intersection. In this paper we propose a multi-stage algorithm founded on the combination of feature-based and area-based image matching. The RPCs calculated for each image are then used to extract 3D coordinates at the matched points. Finally, the calculated coordinates are compared with coordinates extracted from a 1 m DEM. The results show a root mean square error of 3.09 m for 360 points. We use a pair of spotlight TerraSAR-X images over JAM (Iran) in this article.
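The final validation step, comparing intersected 3D coordinates against DEM-derived reference coordinates, reduces to a root-mean-square error over matched points. A generic sketch, not the authors' implementation:

```python
import math

def rmse_3d(points, reference):
    """Root mean square 3D error between matched points and reference
    coordinates, as used to validate radargrammetric intersection results
    against a DEM. Lists must be matched one-to-one."""
    if len(points) != len(reference):
        raise ValueError("point lists must be matched one-to-one")
    sq = [sum((p - r) ** 2 for p, r in zip(pt, ref))
          for pt, ref in zip(points, reference)]
    return math.sqrt(sum(sq) / len(sq))
```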

  17. Simulating hydroplaning of submarine landslides by quasi 3D depth averaged finite element method

    Science.gov (United States)

    De Blasio, Fabio; Battista Crosta, Giovanni

    2014-05-01

    G.B. Crosta, H.J. Chen and F.V. De Blasio (Dept. of Earth and Environmental Sciences, Università degli Studi di Milano Bicocca, Milano, Italy; Klohn Crippen Berger, Calgary, Canada). Subaqueous debris flows and submarine landslides, both in the open ocean and in fresh waters, exhibit extremely high mobility, quantified by a ratio of vertical to horizontal displacement of the order of 0.01 or much less. It is possible to simulate subaqueous debris flows with small-scale experiments along a flume or a pool using a cohesive mixture of clay and sand. The results have shown a strong enhancement of runout and velocity compared to the case in which the same debris flow travels without water, and have indicated hydroplaning as a possible explanation (Mohrig et al. 1998). Hydroplaning starts when the snout of the debris flow travels sufficiently fast. This generates lift forces on the front of the debris flow exceeding the self-weight of the sediment, which thus begins to travel detached from the bed, literally hovering instead of flowing. The resistance to flow plummets, because drag stress against water is much smaller than the shear strength of the material. The consequence is a dramatic increase of the debris flow speed and runout. Does the process also occur for subaqueous landslides and debris flows in the ocean, some twelve orders of magnitude larger than the experimental ones? Obviously, no experiment will ever be capable of replicating this size; one needs to rely on numerical simulations. Results extending a depth-integrated numerical model for debris flows (Imran et al., 2001) indicate that hydroplaning is possible (De Blasio et al., 2004), but more should be done, especially with alternative numerical methodologies. In this work, finite element methods are used to simulate hydroplaning using the code MADflow (Chen, 2014), adopting a depth-averaged solution. We ran some simulations on the small scale of the laboratory experiments, and secondly
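The hydroplaning condition described above (dynamic lift on the snout exceeding the submerged weight of the sediment) can be turned into a rough onset-speed estimate. The lift coefficient below is an illustrative assumption for the sketch, not a value taken from the cited studies:

```python
import math

def hydroplaning_onset_speed(thickness, rho_debris, rho_water=1000.0,
                             lift_coeff=0.01, g=9.81):
    """Rough front speed (m/s) above which dynamic lift, ~0.5*Cl*rho_w*U^2
    per unit bed area, exceeds the submerged weight (rho_d - rho_w)*g*h of a
    debris-flow snout of thickness h. Cl here is a hypothetical value."""
    submerged_weight = (rho_debris - rho_water) * g * thickness
    return math.sqrt(2.0 * submerged_weight / (lift_coeff * rho_water))
```

Under these assumptions a 1 m thick snout of 2000 kg/m³ debris would begin to hydroplane at a few tens of metres per second; the point of the sketch is only the force balance, not the numbers.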

  18. An overview of 3D topology for LADM-based objects

    NARCIS (Netherlands)

    Zulkifli, N.A.; Rahman, A.A.; Van Oosterom, P.J.M.

    2015-01-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological models

  19. Model-based optical metrology and visualization of 3-D complex objects

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-li; LI A-meng; ZHAO Xiao-bo; GAO Peng-dong; TIAN Jin-dong; PENG Xiang

    2007-01-01

    This letter addresses several key issues in the process of model-based optical metrology, including three-dimensional (3D) sensing, calibration, registration and fusion of range images, geometric representation, and visualization of the reconstructed 3D model, taking into account the shape measurement of 3D complex structures, and some experimental results are presented.

  20. Research progress of depth detection in vision measurement: a novel project of bifocal imaging system for 3D measurement

    Science.gov (United States)

    Li, Anhu; Ding, Ye; Wang, Wei; Zhu, Yongjian; Li, Zhizhong

    2013-09-01

    The paper reviews recent research progress in vision measurement. The general methods of depth detection used in monocular stereo vision are compared with each other. As a result, a novel bifocal imaging measurement system based on the zoom method is proposed to address the problem of online 3D measurement. The system consists of a primary lens and a secondary lens with different, matched focal lengths, meeting large-range and high-resolution imaging requirements without time delay or imaging errors, which is of significance for industrial applications.
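Monocular depth detection with a known object size follows from the pinhole model that also underlies the bifocal idea (the long-focal-length lens resolves small image extents more precisely at long range). A hedged sketch, not the paper's system:

```python
def depth_from_size(focal_px, real_size_m, image_size_px):
    """Pinhole-model depth estimate: an object of known physical size S
    appears with image extent s = f*S/Z, so Z = f*S/s. Doubling the focal
    length doubles s for the same Z, halving the relative measurement error."""
    return focal_px * real_size_m / image_size_px
```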

  1. WAVES GENERATED BY A 3D MOVING BODY IN A TWO-LAYER FLUID OF FINITE DEPTH

    Institute of Scientific and Technical Information of China (English)

    ZHU Wei; YOU Yun-xiang; MIAO Guo-ping; ZHAO Feng; ZHANG Jun

    2005-01-01

    This paper is concerned with the waves generated by a 3-D body advancing beneath the free surface with constant speed in a two-layer fluid of finite depth. By applying Green's theorem, a layered integral equation system based on the Rankine source for the perturbed velocity potential generated by the moving body was derived within potential flow theory. A four-node isoparametric element method was used to solve the layered integral equation system. The surface and interface waves generated by a moving ball were calculated numerically, and the results were compared with the analytical results for a moving source with constant velocity.
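The Rankine source at the core of the boundary integral formulation has a simple closed-form potential; a sketch of a single source in unbounded fluid (the layered free-surface and interface conditions, and the panel superposition, are of course the hard part the paper addresses):

```python
import math

def rankine_source_potential(field_pt, source_pt, strength=1.0):
    """Velocity potential of a Rankine (simple) source in unbounded fluid,
    phi = -m / (4*pi*r). Boundary element methods represent the moving body
    by superposing such sources over surface panels."""
    r = math.dist(field_pt, source_pt)
    return -strength / (4.0 * math.pi * r)
```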

  2. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3d imaging and analysis

    Science.gov (United States)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 µm/31°, 7.1 µm/31°, and 9.1 µm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 µm/31°, 12.0 µm/30°, and 19.2 µm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.
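The nearest-neighbor distances reported above can be computed directly from the localized nuclei. A brute-force sketch; a real pipeline over millions of cells would use a spatial index such as a k-d tree:

```python
import math

def nearest_neighbor_distances(cells):
    """For each 3D cell position, distance to its nearest neighbor -- the
    kind of per-cell statistic used to profile chondrocyte organization as a
    function of depth. O(n^2) brute force for clarity."""
    out = []
    for i, a in enumerate(cells):
        out.append(min(math.dist(a, b) for j, b in enumerate(cells) if j != i))
    return out
```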

  4. Creative Generation of 3D Objects with Deep Learning and Innovation Engines

    DEFF Research Database (Denmark)

    Lehman, Joel Anthony; Risi, Sebastian; Clune, Jeff

    2016-01-01

    Advances in supervised learning with deep neural networks have enabled robust classification in many real-world domains. An interesting question is whether such advances can also be leveraged effectively for computational creativity. One insight is that because evolutionary algorithms are free from strict requirements of mathematical smoothness, they can exploit powerful deep learning representations through arbitrary computational pipelines. In this way, deep networks trained on typical supervised tasks can be used as an ingredient in an evolutionary algorithm driven towards creativity. To highlight such potential, this paper creates novel 3D objects by leveraging feedback from a deep network trained only to recognize 2D images. This idea is tested by extending previous work with Innovation Engines, i.e. a principled combination of deep learning and evolutionary algorithms for computational creativity.

  6. 3D COLOR OBJECTS RECOGNITION SYSTEM USING AN ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Omar BENCHAREF

    2011-06-01

    Full Text Available Hu and Zernike moments have traditionally been used for grey-level image representation. In this study we employ them directly for color image description, which enables us to keep the maximum amount of information given by the image colors. For the classification process we opted for a neural network classifier, which can implicitly detect complex nonlinear relationships between dependent and independent variables, can detect all possible interactions between predictor variables, and offers multiple training algorithms. In this document, we present a comparative study of different 3D color object recognition systems. We used a variety of multi-layer neural network topologies (simple, nested and parallel networks), eventually arriving at a suggestion of multi-oriented neural networks.
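The moment-based description can be illustrated with the first Hu invariant, computed from normalized central moments. Applying it per color channel is one simple way to extend such moments to color images, in the spirit of (though not necessarily identical to) the study's descriptor:

```python
def hu_first_invariant(image):
    """First Hu moment I1 = eta20 + eta02 of a 2D grayscale array (list of
    rows of intensities); invariant to translation and scale of the pattern.
    For a color image this could be applied per R, G, B channel."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
    # Normalize by m00^2 so the value is scale-invariant for order-2 moments.
    return (mu20 + mu02) / m00 ** 2
```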

  7. Development of a Relap based Nuclear Plant Analyser with 3-D graphics using OpenGL and Object Relap

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    A 3-D graphic Nuclear Plant Analyzer (NPA) program was developed using GLScene and TRelap. GLScene is an OpenGL-based 3D graphics library for the Delphi object-oriented programming language, and it implements the OpenGL functions in forms suitable for programming with Delphi. TRelap is an object wrapper developed by the author to easily implement the Relap5 thermal-hydraulic code in an object-oriented programming environment. The 3-D graphic NPA was developed to demonstrate the superiority of the object-oriented programming approach in developing complex programs.

  8. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    Science.gov (United States)

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences. The approach described here is robust and simple, and can readily be used to reconstruct a watertight 3D surface of any object from its image sequence. Digital images of different objects are used to build sparse, and then dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified that affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on generating a 3D surface from point clouds of different densities are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) value, with greater SN values resulting in better-quality surfaces. The quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly on the clustering radius and angle threshold values. The results of this study give the reader valuable insight into the effects of the different control parameters on the reconstructed surface quality.

  9. Extraction and classification of 3D objects from volumetric CT data

    Science.gov (United States)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for an Explosive Detection System (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vectors are classified by an SVM previously trained on a set of ground-truth threat and benign objects. The trained SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developed sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates Probability of Detection (PD) to Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
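The ROC characterization of PD versus PFA can be sketched by sweeping a decision threshold over classifier scores; a minimal version with hypothetical scores and binary threat labels (1 = threat):

```python
def roc_points(scores, labels):
    """(PFA, PD) pairs swept over decision thresholds: each distinct score in
    turn is the cutoff at or above which an object is declared a threat."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 0)
        pts.append((fp / neg, tp / pos))
    return pts
```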

  10. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    Directory of Open Access Journals (Sweden)

    Zhong Liu

    2017-02-01

    Full Text Available RGB-D sensors have been widely used in various areas of computer vision and graphics, and a good descriptor effectively improves recognition performance. This article further analyzes the recognition performance of shape features extracted from the multi-modality source data of RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extract five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. Recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy.
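A typical global 2D contour feature of the kind combined in such descriptors is circularity (4πA/P², equal to 1 for a circle). Whether it is among the five 2D features used in this article is not stated, so treat it as illustrative:

```python
import math

def circularity(polygon):
    """4*pi*area / perimeter^2 for a closed contour given as (x, y) vertices;
    1.0 for a circle, smaller for elongated or ragged shapes. Area via the
    shoelace formula, perimeter by summing edge lengths."""
    n = len(polygon)
    area = 0.0
    perim = 0.0
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perim ** 2
```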

  11. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    Science.gov (United States)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.
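Once the flood water line has been mapped into the geo-referenced point cloud, the building inundation depth is a simple difference against the local ground elevation; a minimal sketch of that last step:

```python
def inundation_depth(flood_elevation, ground_elevation):
    """Building inundation depth = water-line elevation (derived from the
    flood image mapped into the point cloud) minus local ground elevation,
    clamped at zero for points above the water line. Both in metres, same
    vertical datum."""
    return max(0.0, flood_elevation - ground_elevation)
```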

  12. 3D printing cybersecurity: detecting and preventing attacks that seek to weaken a printed object by changing fill level

    Science.gov (United States)

    Straub, Jeremy

    2017-06-01

    Prior work by Zeltmann et al. has demonstrated the impact of small defects and other irregularities on the structural integrity of 3D printed objects, and posited that such defects could be introduced intentionally. The current work looks at the impact of changing the fill level on object structural integrity, and considers whether the existence of an appropriate fill level can be determined through visible-light-imagery-based assessment of a 3D printed object. A technique for assessing the quality and sufficiency of 3D printed fill material is presented; it is assessed experimentally, and the results are presented and analyzed.

  13. Dynamic Self-Occlusion Avoidance Approach Based on the Depth Image Sequence of Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Shihui Zhang

    2016-01-01

    Full Text Available How to avoid the self-occlusion of a moving object is a challenging problem. An approach for dynamically avoiding self-occlusion is proposed based on the depth image sequence of a moving visual object. First, two adjacent depth images of the moving object are acquired, and each pixel’s 3D coordinates in the two depth images are calculated by anti-projection transformation. On this basis, the best-view model is constructed according to the self-occlusion information in the second depth image. Second, the Gaussian curvature feature matrix corresponding to each depth image is calculated from the pixels’ 3D coordinates. Third, based on the fact that Gaussian curvature is an intrinsic invariant of a surface, object motion estimation is implemented by matching the two Gaussian curvature feature matrices and using the coordinate changes of the matched 3D points. Finally, combining the best-view model and the motion estimation result, optimization theory is adopted to plan the camera behavior and accomplish the dynamic self-occlusion avoidance process. Experimental results demonstrate that the proposed approach is feasible and effective.
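The anti-projection step, recovering each pixel's 3D coordinates from the depth image, is the inverse pinhole projection; the intrinsic parameters (fx, fy, cx, cy) below are assumed known from camera calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Anti-projection of a depth-image pixel (u, v) with depth value `depth`
    (metres along the optical axis) to camera-frame 3D coordinates under a
    pinhole model with intrinsics fx, fy (focal lengths in pixels) and
    cx, cy (principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```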

  14. A modern approach to storing of 3D geometry of objects in machine engineering industry

    Science.gov (United States)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, and visual images of parts and products in engineering, among other fields. 3D computer graphics allows one to create 3D scenes along with simulated lighting conditions and configurable viewpoints.

  15. Implementation of multiple 3D scans for error calculation on object digital reconstruction

    Directory of Open Access Journals (Sweden)

    Sidiropoulos Andreas

    2017-01-01

    Full Text Available Laser scanning is a widespread methodology for visualizing the natural environment and the man-made structures within it. Laser scanners digitize reality by making highly accurate measurements, from which they create a set of points in 3D space, called a point cloud, that depicts an entire area or object, or parts of them. Triangulation laser scanners rely on triangle geometry and are mainly used to visualize handheld objects at very close range. In many cases, users of such devices take for granted the accuracy specifications provided by the laser scanner manufacturers and the respective software, and for many applications this is enough. In this paper we use point clouds of two cubes, geometrically similar to each other but differing in material, collected by a triangulation laser scanner under a repetition method. First, the data of each repetition are compared to each other to examine the consistency of the scanner under multiple measurements of the same scene. Then the objects' geometry is reconstructed and the results are compared to measurements taken with a digital caliper. The errors of the calculated dimensions were estimated using the error propagation law.
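    The final step can be illustrated with a minimal sketch (the coordinate uncertainties below are made-up values, not the paper's data): if an edge length is computed as the distance between two fitted corner points, the error propagation law gives its standard deviation from the per-coordinate standard deviations:

```python
import numpy as np

def length_with_error(p1, p2, sigma1, sigma2):
    """Distance between two 3D points and its propagated standard deviation.

    For L = |p2 - p1|, the partial derivatives dL/dp are the components of
    the unit direction vector u, so the variance propagates as
    var(L) = sum(u_i^2 * (s1_i^2 + s2_i^2)).
    """
    d = np.asarray(p2, float) - np.asarray(p1, float)
    L = np.linalg.norm(d)
    u = d / L                      # unit direction = partial derivatives
    var = np.sum(u ** 2 * (np.asarray(sigma1) ** 2 + np.asarray(sigma2) ** 2))
    return L, np.sqrt(var)

# Hypothetical cube edge: ~50 mm with 0.05 mm noise per corner coordinate.
L, sigma_L = length_with_error([0, 0, 0], [50, 0, 0],
                               [0.05, 0.05, 0.05], [0.05, 0.05, 0.05])
print(L, sigma_L)  # 50.0 and sqrt(2)*0.05 ≈ 0.0707
```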

  16. Laser Fabrication of Affective 3D Objects with 1/f Fluctuation

    Science.gov (United States)

    Maekawa, Katsuhiro; Nishii, Tomohiro; Hayashi, Terutake; Akabane, Hideo; Agu, Masahiro

    The present paper describes the application of Kansei Engineering to the physical design of engineering products as well as its realization by laser sintering. We have investigated the affective information that might be included in three-dimensional objects such as a ceramic bowl for the tea ceremony. First, an X-ray CT apparatus is utilized to retrieve surface data from the teabowl, and then a frequency analysis is carried out after noise has been filtered. The surface fluctuation is characterized by a power spectrum that is in inverse proportion to the wave number f in circumference. Second, we consider how to realize the surface with a 1/f fluctuation on a computer screen using a 3D CAD model. The fluctuation is applied to a reference shape assuming that the outer surface has a spiral flow line on which unevenness is superimposed. Finally, the selective laser sintering method has been applied to the fabrication of 1/f fluctuation objects. Nylon powder is sintered layer by layer using a CO2 laser to form an artificial teabowl with complicated surface contours.
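    The 1/f surface construction can be sketched numerically (the harmonic range and amplitude scale are illustrative assumptions, not the paper's measured teabowl data): random-phase harmonics whose power falls off as 1/f in circumferential wave number are superimposed on a circular reference cross-section:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)

# Power spectrum P(f) ∝ 1/f  →  amplitude a(f) ∝ 1/sqrt(f).
waves = np.arange(1, 65)                 # circumferential wave numbers f
amps = 0.5 / np.sqrt(waves)              # 1/f power law in amplitude form
phases = rng.uniform(0.0, 2.0 * np.pi, waves.size)

fluct = sum(a * np.cos(f * theta + p) for f, a, p in zip(waves, amps, phases))

r_ref = 60.0                             # reference bowl radius (arbitrary units)
r = r_ref + fluct                        # cross-section with 1/f unevenness

# Verify the constructed spectrum: P(f) should halve when f doubles.
spec = np.abs(np.fft.rfft(fluct)) ** 2
print(spec[2] / spec[4])  # ≈ 2 for a 1/f power spectrum
```

    In the paper this fluctuation is applied along a spiral flow line on the reference shape before laser sintering; the sketch only shows the spectral construction itself.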

  17. Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor

    Directory of Open Access Journals (Sweden)

    Yijun Ji

    2017-01-01

    Full Text Available 3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and certain other materials whose surfaces cannot reflect back an accurate depth because of the absorption and refraction of light. In this paper, we fuse the depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surface of transparent objects. Our system is divided into two parts. First, we use the zero and erroneous depth values caused by transparent materials, observed from multiple views, to search for the 3D region that contains the transparent object. Then, based on shape-from-silhouette technology, we recover the 3D model by computing the visual hull within these noisy regions. Joint GrabCut segmentation is run on multiple color images to extract the silhouettes, with the initial constraint for GrabCut determined automatically. Experiments validate that our approach can improve the 3D models of transparent objects in real-world scenes. Our system is time-saving, robust, and requires no interactive operation throughout the process.
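    The visual-hull step can be sketched with a toy voxel-carving routine (orthographic cameras along the coordinate axes are a simplifying assumption here; the paper uses calibrated Kinect views): a voxel survives only if it projects inside every silhouette:

```python
import numpy as np

def visual_hull(silhouettes, n=64):
    """Carve an n^3 voxel grid in [-1,1]^3 using orthographic silhouettes.

    silhouettes: dict mapping a projection axis (0, 1 or 2) to a boolean
    n x n mask; a voxel is kept if its projection lies inside every mask.
    """
    u = np.linspace(-1, 1, n)
    X, Y, Z = np.meshgrid(u, u, u, indexing="ij")
    keep = np.ones((n, n, n), dtype=bool)
    idx = lambda v: np.clip(np.rint((v + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
    planes = {0: (Y, Z), 1: (X, Z), 2: (X, Y)}  # image-plane axes per view
    for axis, mask in silhouettes.items():
        a, b = planes[axis]
        keep &= mask[idx(a), idx(b)]
    return keep

# Toy example: circular silhouettes from three axis-aligned views carve a
# Steinmetz-like solid that contains the unit sphere.
n = 64
u = np.linspace(-1, 1, n)
U, V = np.meshgrid(u, u, indexing="ij")
disc = U ** 2 + V ** 2 <= 1.0
hull = visual_hull({0: disc, 1: disc, 2: disc}, n)
print(hull.sum(), hull.size)
```

    The hull is always a superset of the true object, which is why the paper restricts carving to the noisy regions flagged by the depth channel.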

  18. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  19. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3-D refractive index maps

    CERN Document Server

    Kim, Kyoohyun

    2016-01-01

    Optical trapping can be used to manipulate the three-dimensional (3-D) motion of spherical particles based on simple predictions of the optical forces and the resulting motion of the samples. However, controlling the 3-D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and the extensive computation required. Here, we achieve real-time optical control of arbitrarily shaped particles by combining wavefront shaping of the trapping beam with measurements of the 3-D refractive index (RI) distribution of the samples. Engineering the 3-D light field distribution of the trapping beam based on the measured 3-D RI map generates a light mould, which can be used to manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without a priori information about the sample geometry. The proposed method can ...

  20. Software for Building Models of 3D Objects via the Internet

    Science.gov (United States)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously with computer-aided design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer and is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server; the translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF Builder can be used in two ways: (1) if the client computer is Unix-based, it can assemble a model locally, transferring the computational load from the server to the client; or (2) the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  1. 3D Micro-PIXE at atmospheric pressure: A new tool for the investigation of art and archaeological objects

    Energy Technology Data Exchange (ETDEWEB)

    Kanngiesser, Birgit [Institute for Optic and Atomic Physics, Technical University of Berlin, Hardenbergstr. 36, 10623 Berlin (Germany)], E-mail: bk@atom.physik.tu-berlin.de; Karydas, Andreas-Germanos [Institute of Nuclear Physics, NCSR Demokritos, Athens (Greece); Schuetz, Roman [Institute for Optic and Atomic Physics, Technical University of Berlin, Hardenbergstr. 36, 10623 Berlin (Germany); Sokaras, Dimosthenis [Institute of Nuclear Physics, NCSR Demokritos, Athens (Greece); Reiche, Ina; Roehrs, Stefan; Pichon, Laurent; Salomon, Joseph [Centre de Recherche et de Restauration des Musées de France (C2RMF), CNRS UMR 171 and GDR ChimArt 2114 CNRS/French Ministry of Culture, Paris (France)

    2007-11-15

    The paper describes a novel experiment characterized by the development of a confocal geometry in an external Micro-PIXE set-up. Placing X-ray optics in front of the X-ray detector, properly aligned with respect to the proton micro-beam focus, made 3D Micro-PIXE analysis possible. As a first application, depth intensity profiles of the major elements composing the patina layer of a quaternary bronze alloy were measured. A simulation approach applied to the 3D Micro-PIXE data deduced elemental concentration profiles in rather good agreement with corresponding results obtained by electron probe micro-analysis of a cross-sectioned patina sample. With its non-destructive and depth-resolving properties, as well as its feasibility at atmospheric pressure, 3D Micro-PIXE seems especially suited to investigations in the field of cultural heritage.

  2. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    Science.gov (United States)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clump at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of the various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that serve as physical diagnostics (reddening, gas density, temperature, excitation mechanisms, abundances), and improved prospects for recovering unobserved dimensions of phase space. These advantages allow more confident modeling for more profound inquiry into the underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through the various phases of the ISM. This tedium has limited the number of thoroughly analyzed objects to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty cycle, and tie-ins with molecular flows. If the shock speed, and hence the ionization fraction, is indeed small, then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  4. Object-centered reference frames in depth as revealed by induced motion.

    Science.gov (United States)

    Léveillé, Jasmin; Myers, Emma; Yazdanbakhsh, Arash

    2014-03-11

    An object-centered reference frame is a spatial representation in which objects or their parts are coded relative to others. The existence of object-centered representations is supported by the phenomenon of induced motion, in which the motion of an inducer frame in a particular direction induces motion in the opposite direction in a target dot. We report an experiment using an induced-motion display in which a degree of slant is imparted to the inducer frame by either perspective or binocular-disparity depth cues. Critically, the inducer frame oscillates perpendicularly to the line of sight rather than moving in depth. Participants matched the perceived induced motion of the target dot in depth using a 3D rotatable rod. Although the frame did not move in depth, subjects perceived the dot as moving in depth, either along the slanted frame or against it, when depth was given by perspective or disparity, respectively. The presence of induced motion is thus not only due to competition among populations of planar motion filters but also incorporates 3D scene constraints. We discuss this finding in the context of the uncertainty associated with various depth cues and the locality of representation of reference frames.

  5. 3D Visualization System for Tracking and Identification of Objects Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Photon-X has developed a proprietary EO spatial phase technology that can passively collect 3-D images in real-time using a single camera-based system. This...

  6. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    Science.gov (United States)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation and trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation and trees near or under power transmission poles using satellite stereo images acquired by the Pleiades satellites. The 3D depth of vegetation near the power transmission lines was measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km² area. We compared the results on Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, thereby comparing both the imaging sensors and the depth estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
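    The disparity search underlying both compared methods can be sketched with a minimal SSD block matcher (window size, search range, and the synthetic image pair below are illustrative assumptions; the paper uses dynamic programming and graph cuts on real Pleiades pairs). Depth then follows from disparity d via Z = f·B/d for focal length f and baseline B:

```python
import numpy as np

def block_match_row(left, right, row, x, win=5, max_disp=16):
    """SSD block matching: disparity of pixel (row, x) in the left image."""
    h = win // 2
    patch = left[row - h:row + h + 1, x - h:x + h + 1].astype(float)
    best, best_d = np.inf, 0
    for d in range(min(max_disp, x - h) + 1):
        cand = right[row - h:row + h + 1, x - d - h:x - d + h + 1].astype(float)
        ssd = np.sum((patch - cand) ** 2)
        if ssd < best:
            best, best_d = ssd, d
    return best_d

# Synthetic pair: the right image is the left shifted by 4 px (disparity = 4).
rng = np.random.default_rng(1)
left = rng.integers(0, 255, (40, 80))
right = np.roll(left, -4, axis=1)
d = block_match_row(left, right, row=20, x=40)
print(d)  # 4

# Depth from disparity, Z = f * B / d (generic illustrative values).
f_px, baseline_m = 1000.0, 0.3
print(f_px * baseline_m / d)  # 75.0
```

    Dynamic programming and graph cuts replace this independent per-pixel minimum with a smoothness-regularized optimum over a scanline or the whole image, which is what drives the accuracy difference reported above.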

  7. Linearized perturbation analysis of along-strike nonuniformity of slip in 3D fault models with depth-variable properties

    Science.gov (United States)

    Liu, Y.; Rice, J. R.

    2004-12-01

    In our three-dimensional modeling [EOS, 2003; JGR submitted, 2004] of long-term loading and earthquake sequences on a shallow subduction fault with depth-variable rate and state friction properties, we found that the response was perturbed into a strongly nonuniform slip mode along strike by introducing small along-strike perturbations in friction properties. Similar results were found in some cases of 3D strike-slip modeling by Rice and Ben-Zion [PNAS, 1996]. To explore this further, we report results of linearized perturbation analyses for two versions, "ageing" (or "slowness") and "slip", of the friction laws. The 3D solution vector S(x,z,t), where x and z are the respective along-strike and downdip coordinates in the fault plane, consists of shear stress τ(x,z,t), slip δ(x,z,t), and state variable θ(x,z,t). It can be written as the sum of a 2D solution vector S0(z,t), subject to initial conditions S0(z,0), and an infinitesimal variation Re[S1(z,t) exp(2iπx/λ)], where λ is a perturbation wavelength. In our case the friction properties and external driving are such that S0(z,t) describes a sequence of earthquakes separated by long interseismic loading intervals during which slow creep slippage occurs, as in the Tse and Rice [JGR, 1986] type of 2D modeling. Linearizing the governing equations in S1(z,t) (giving a nonautonomous system, because the coefficients depend on S0(z,t)), we can calculate the evolution of S1 for a given unperturbed history S0(z,t) and initial conditions S1(z,0). For both pure thrust and pure strike-slip fault geometries, we found that there is a critical ratio λcrit/h* which seems to determine the stability of the along-strike response; h* is the minimum neutrally stable downdip slip patch size, according to rate and state stability theory for perturbation of steady slip. When λcrit/h* is greater than the critical value, ∂δ1(z,t)/∂t and θ1(z,t) grow to significantly large values; when less than the critical

  8. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is generally agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a complete solution for a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparison with existing methods: Natural Interaction Technology for End-user (NITE), 3D Hand Tracker, and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator was developed and integrated with the hand-tracking algorithm, together with a DTW (dynamic time warping) gesture recognition algorithm, for a complete immersive gesture control system. Quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to vision-based gesture control systems, which typically provide no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  9. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  10. Estimating object depth using a vertical gradient metal detector

    Science.gov (United States)

    Marble, Jay; McMichael, Ian; Reidy, Denis

    2008-04-01

    Object depth is a simple characteristic that can indicate an object's type. Popular instruments like radar, metal detectors, and magnetometers are often used to detect the presence of a subsurface object. The next question is often, "How deep is it?" Determining the answer, however, is not as straightforward as might be expected. This paper explores the determination of depth using metal detectors. More specifically, it looks at a popular metal detector (the Geonics EM61) and makes use of its vertically separated coils to generate a depth estimate. Estimated depths are shown for UXO and small surface clutter at depths from flush-buried down to 48". Ultimately, a statistical depth resolution is determined. An alternative approach is then considered that casts the depth determination problem as one of classification, with only two classes considered important: "deep" and "shallow". Results are shown that illustrate the utility of the classifier approach. The traditional estimator can provide a depth estimate of the object, but the classifier approach can distinguish between small shallow, large deep, and large shallow object classes.
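    A minimal sketch of the two-coil idea (the power-law falloff exponent and coil separation below are assumptions for illustration, not EM61 calibration values): if a compact target's response falls off as d^(-n), the ratio of the upper- and lower-coil amplitudes depends only on depth, which can then be solved for in closed form:

```python
def depth_from_coil_ratio(a_lower, a_upper, separation, n=6.0):
    """Depth below the lower coil from two vertically separated readings.

    Assumes a compact target with amplitude A(d) proportional to d^(-n), so
    a_upper / a_lower = (d / (d + separation))^n, which inverts to
    d = separation / ((a_lower / a_upper)^(1/n) - 1).
    """
    ratio = a_lower / a_upper
    return separation / (ratio ** (1.0 / n) - 1.0)

# Round trip with a synthetic target 0.5 m below the lower coil and a
# hypothetical 0.4 m coil separation.
d_true, s, n = 0.5, 0.4, 6.0
a_low = d_true ** -n
a_up = (d_true + s) ** -n
print(depth_from_coil_ratio(a_low, a_up, s, n))  # ≈ 0.5
```

    The closed form also makes the statistical depth resolution discussed above intuitive: for deep targets the ratio approaches 1, so small amplitude noise produces large depth errors, which motivates the coarse "deep"/"shallow" classifier.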

  11. Influence of the measurement object's reflective properties on the accuracy of array projection-based 3D sensors

    Science.gov (United States)

    Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    In order to increase the measurement speed of pattern-projection-based three-dimensional (3-D) sensors, in 2014 we introduced the so-called array projector, which allows pattern projection at several thousand frames per second. Because the patterns are switched by switching the light sources of multiple slide projectors on and off, each pattern originates from a different projection center. This may lead to a 3-D point deviation when measuring glossy objects. In this contribution, we theoretically and experimentally investigate the dependence of this deviation on the measurement object's reflective properties, and we propose a procedure for compensating for it.

  12. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on the direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera-based image acquisition platform was built to feed the display engine, capturing full 360-degree continuous imagery of a sample placed at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system's control software. Finally, several samples were imaged to demonstrate the capability of the system.

  14. Synthesis of computer-generated spherical hologram of real object with 360° field of view using a depth camera.

    Science.gov (United States)

    Li, Gang; Phan, Anh-Hoang; Kim, Nam; Park, Jae-Hyeung

    2013-05-20

    A method for synthesizing a 360° computer-generated spherical hologram of real objects is proposed. The whole three-dimensional (3-D) information of a real object is extracted using a depth camera to capture multiple sides of the object. The point cloud sets obtained from the corresponding sides of the object surface are brought into a common coordinate system by a point cloud registration process. The modeled 3-D point cloud is then processed by a hidden point removal method to identify the visible point set for each spherical hologram point. The hologram on the spherical surface is finally synthesized by accumulating spherical waves from the visible object points. By reconstructing a partial region of the calculated spherical hologram, the corresponding view of the 3-D real object is obtained. The principle is verified via optical capturing with a depth camera and numerical reconstructions.
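    The accumulation step can be sketched as a point-source sum (wavelength, radii, and sample counts below are illustrative; the paper additionally applies hidden-point removal per hologram sample): each hologram point on the sphere sums complex spherical waves exp(ikr)/r from the object points visible to it:

```python
import numpy as np

def spherical_hologram(points, amps, holo_pts, wavelength):
    """Accumulate spherical waves from object points onto hologram samples.

    points:   (N, 3) object point cloud
    amps:     (N,) complex amplitudes of the object points
    holo_pts: (M, 3) sample positions on the hologram sphere
    Returns the (M,) complex field  H_m = sum_j a_j * exp(i k r_mj) / r_mj.
    """
    k = 2.0 * np.pi / wavelength
    r = np.linalg.norm(holo_pts[:, None, :] - points[None, :, :], axis=2)
    return np.sum(amps[None, :] * np.exp(1j * k * r) / r, axis=1)

# One object point at the sphere centre: every hologram sample then sees the
# same radius R, so the field magnitude is 1/R everywhere.
wavelength, R, M = 633e-9, 0.1, 256
phi = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
ring = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(M)], axis=1)
H = spherical_hologram(np.zeros((1, 3)), np.array([1.0 + 0j]), ring, wavelength)
print(np.allclose(np.abs(H), 1.0 / R))  # True
```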

  15. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    Directory of Open Access Journals (Sweden)

    Javier Contreras

    2015-11-01

    Full Text Available A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. The simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, and the sample/scan/simulation time. In addition, the results show for the first time the possibility of scanning an object in 3D using an a-Si:H thin-film 128-element PSD array sensor and a hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speed and high resolution when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile quality and realism can be achieved by increasing the number of elements in the PSD array as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
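    The underlying triangulation geometry can be sketched as follows (the baseline, focal length, and laser-parallel-to-optical-axis layout are illustrative assumptions, not the paper's actual setup): with the laser offset by a baseline b and parallel to the camera's optical axis, the spot images at u = f·b/Z, so depth is recovered as Z = f·b/u:

```python
def depth_from_spot(u_px, focal_px, baseline):
    """Sheet-of-light triangulation with a laser parallel to the optical axis.

    A laser offset by `baseline` hits the object at depth Z; the spot images
    at position u = focal * baseline / Z, so Z = focal * baseline / u.
    """
    return focal_px * baseline / u_px

# Round trip: a surface at 250 mm with f = 800 px and a 50 mm baseline
# images the laser spot at u = 800 * 50 / 250 = 160 px.
f_px, b_mm = 800.0, 50.0
u = f_px * b_mm / 250.0
print(u, depth_from_spot(u, f_px, b_mm))  # 160.0 250.0
```

    This inverse relation between spot position and depth is why the accuracy of each PSD element's position response directly bounds the 3D profile quality, as noted above.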

  16. Influence of limited random-phase of objects on the image quality of 3D holographic display

    Science.gov (United States)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time averaging method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, along with their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images show that objects with a limited phase range suppress the speckle noise in the reconstructed images effectively. Owing to its effectiveness and simplicity, the method is expected to achieve high-quality reconstructed images in 2D and 3D display in the future.
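    The time-averaging idea can be sketched numerically (the speckle model below, a sum of random phasors, is a textbook simplification rather than the paper's full diffraction simulation): averaging the intensities of several reconstructions with independent random phases lowers the speckle contrast σ/μ:

```python
import numpy as np

def speckle_frame(rng, shape=(64, 64), scatterers=32, phase_range=2 * np.pi):
    """Intensity of a sum of random phasors with phases in [0, phase_range)."""
    phases = rng.uniform(0.0, phase_range, (scatterers,) + shape)
    field = np.exp(1j * phases).sum(axis=0)
    return np.abs(field) ** 2

def contrast(img):
    """Speckle contrast: standard deviation over mean of the intensity."""
    return img.std() / img.mean()

rng = np.random.default_rng(42)
single = speckle_frame(rng)
averaged = np.mean([speckle_frame(rng) for _ in range(16)], axis=0)
print(contrast(single), contrast(averaged))  # averaging reduces the contrast
```

    Restricting `phase_range` below 2π is the "limited" part of the method studied above: it changes the statistics of each frame and hence how much averaging is needed.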

  17. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    Directory of Open Access Journals (Sweden)

    Maurizio Muzzupappa

    2013-08-01

    Full Text Available In some application fields, such as underwater archaeology or marine biology, there is a need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisition in underwater environments. In this work we compared, in water, two whole-field 3D imaging techniques based on active and passive approaches, respectively. The comparison was performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration was adopted for both techniques, using the same setup, working distance, calibration, and objects. At the moment the proposed setup is not suitable for real-world applications, but it allowed us to conduct a preliminary analysis of the performance of the two techniques and to understand their capability to acquire 3D points in the presence of turbidity. Performance was evaluated in terms of accuracy and density of the acquired 3D points. Our results can serve as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  18. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    Science.gov (United States)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  19. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    Science.gov (United States)

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed, and must be compensated for in high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), with new experimental investigations on the effect of length measurement errors.
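    The eccentricity effect for a flat circular target can be reproduced algebraically: under a plane-to-image homography H, a circle maps to the conic Q = H⁻ᵀ C H⁻¹, and the centre of that conic generally differs from the image of the circle centre. A minimal numerical sketch follows (the camera parameters are hypothetical, not taken from the paper):

    ```python
    import numpy as np

    def circle_image_conic(H, r):
        """Conic matrix of the image of the circle x^2 + y^2 = r^2 under homography H."""
        C = np.diag([1.0, 1.0, -r * r])   # circle as a conic in its own plane
        Hinv = np.linalg.inv(H)
        return Hinv.T @ C @ Hinv          # points map x' = Hx, so conics map H^-T C H^-1

    def conic_centre(Q):
        """Centre of the ellipse described by symmetric conic matrix Q."""
        return np.linalg.solve(Q[:2, :2], -Q[:2, 2])

    # Hypothetical camera: 1000 px focal length, target plane tilted 45 deg, 1 m away.
    f = 1000.0
    theta = np.deg2rad(45.0)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(theta), -np.sin(theta)],
                  [0.0, np.sin(theta),  np.cos(theta)]])
    t = np.array([0.0, 0.0, 1.0])
    K = np.diag([f, f, 1.0])
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))  # plane-to-image homography

    r = 0.01                                        # 10 mm circular target
    ellipse_c = conic_centre(circle_image_conic(H, r))
    p = H @ np.array([0.0, 0.0, 1.0])               # image of the circle centre itself
    true_c = p[:2] / p[2]
    ecc = np.linalg.norm(ellipse_c - true_c)        # eccentricity in pixels
    ```

    For an untilted plane the two centres coincide; the tilt is what produces the sub-pixel offset that the paper argues must be compensated in high-accuracy work.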

  20. A Parallel Product-Convolution approach for representing the depth varying Point Spread Functions in 3D widefield microscopy based on principal component analysis.

    Science.gov (United States)

    Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A

    2010-03-29

    We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth-varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth-variant response as a sum of a few depth-invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in much better accuracy than the strata-based approximation scheme currently used in the literature. In addition to yielding better accuracy, the proposed method automatically eliminates the noise in the measured PSFs.
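    The core approximation can be sketched with NumPy: PCA of the PSF stack yields depth-invariant basis functions plus 1D depth coefficients, and the whole stack is recovered from a few components. The Gaussian PSF family below is a synthetic stand-in for measured PSFs, not the authors' data:

    ```python
    import numpy as np

    # Synthetic stand-in for measured PSFs: Gaussians whose width grows with depth.
    n_depths, size = 32, 25
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    sigmas = np.linspace(1.0, 4.0, n_depths)
    psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in sigmas])
    psfs /= psfs.sum(axis=(1, 2), keepdims=True)    # normalise each PSF

    # PCA of the PSF stack: each depth-dependent PSF becomes a weighted sum of
    # a few depth-invariant basis functions (rows of Vt).
    flat = psfs.reshape(n_depths, -1)
    mean = flat.mean(axis=0)
    U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)

    def approx_psfs(k):
        """Rank-k PCA approximation of the whole PSF stack."""
        coeffs = U[:, :k] * S[:k]     # the 1D depth functions c_k(z)
        return (mean + coeffs @ Vt[:k]).reshape(psfs.shape)

    def rel_err(k):
        return np.linalg.norm(approx_psfs(k) - psfs) / np.linalg.norm(psfs)
    ```

    With this structure, depth-variant blurring reduces to a handful of ordinary convolutions (one per basis function) scaled by the depth coefficients, instead of a separate convolution per depth stratum.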

  1. Modeling 3D Unknown object by Range Finder and Video Camera and Updating of a 3D Database by a Single Camera View

    National Research Council Canada - National Science Library

    Nzie, C; Triboulet, J; Mallem, Malik; Chavand, F

    2005-01-01

    The device consists of a camera which gives the HO an indirect view of a scene (real world), and proprioceptive and exteroceptive sensors allowing the recreation of the 3D geometric database of an environment...

  2. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    Science.gov (United States)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D Printing (3DP) fabrication of structures having internal microarchitecture and characterization of their mechanical properties. Depending on the material, geometry and fill factor, the manufactured objects' mechanical performance can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The produced objects' structural quality is characterized using a scanning electron microscope; their mechanical properties, such as flexural modulus, elastic modulus and stiffness, are evaluated experimentally using a universal TIRAtest2300 machine. Within the limitations of the study carried out, we show that the mechanical properties of 3D printed objects can be tuned at least 3 times by only changing the woodpile geometry arrangement, yet keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.

  3. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

    In robotic systems, visual interpretations of the environment compose an essential element in a variety of applications, especially those involving manipulation of objects. Interpreting the environment is often done in terms of recognition of objects using machine learning approaches. For user...... robust color description, color calibration is performed. The framework was used in three recognition tasks: object instance recognition, object category recognition, and object spatial relationship recognition. For the object instance recognition task, we present a system that utilizes color and scale...... to initiate higher-level semantic interpretations of complex scenes. In the object category recognition task, we present a system that is capable of assigning multiple and nested categories for novel objects using a method developed for this purpose. Integrating this method with other multi-label learning...

  4. 3DMADMAC|AUTOMATED: synergistic hardware and software solution for automated 3D digitization of cultural heritage objects

    Directory of Open Access Journals (Sweden)

    Robert Sitnik

    2011-12-01

    Full Text Available In this article a fully automated 3D shape measurement system and data processing algorithms are presented. The main purpose of this system is to automatically (without any user intervention) and rapidly (at least ten times faster than manual measurement) digitize the whole surface of an object, with some limitations on its properties: the maximum measurement volume is described as a cylinder of 2.8 m height and 0.6 m radius, and the maximum object weight is 2 tons. The measurement head is automatically calibrated by the system for the chosen working volume (from 120 mm x 80 mm x 60 mm up to 1.2 m x 0.8 m x 0.6 m). Positioning of the measurement head in relation to the measured object is realized by a computer-controlled manipulator. The system is equipped with two independent collision detection modules to prevent damaging the measured object with the moving sensor head. The measurement process is divided into three steps. The first step locates any part of the object's surface in the assumed measurement volume. The second step calculates the "next best view" position of the measurement head on the basis of the existing 3D scans. Finally, small holes in the measured 3D surface are detected and measured. All 3D data processing (filtering, ICP-based fitting and final view integration) is performed automatically. The final 3D model is created on the basis of user-specified parameters such as accuracy of surface representation and/or density of surface sampling. In the last section of the paper, exemplary measurement results of two objects, a biscuit (from the collection of the Museum Palace at Wilanów) and a Roman votive altar (Lower Moesia, II-III AD), are presented.

  5. Real time moving object detection using motor signal and depth map for robot car

    Science.gov (United States)

    Wu, Hao; Siu, Wan-Chi

    2013-12-01

    Moving object detection from a moving camera is a fundamental task in many applications. For moving robot car vision, the background motion has a 3D structure, and in this situation conventional moving object detection algorithms cannot handle the 3D background modeling effectively and efficiently. In this paper, a novel scheme is proposed that utilizes the motor control signal and the depth map obtained from a stereo camera to model the perspective transform matrix between different frames under a moving camera. In our approach, the coordinate relationship between frames during camera motion is modeled by a perspective transform matrix, which is obtained from the current motor control signals and the pixel depth values. Hence, a static background pixel and its apparent motion due to camera motion can be related by a perspective matrix. To enhance the robustness of classification, we allow a tolerance range during perspective transform matrix prediction and use multiple reference frames to classify the pixels of the current frame. The proposed scheme has been found to detect moving objects for our moving robot car efficiently. Different from conventional approaches, our method can model the moving background as a 3D structure, without online model training. More importantly, the computational complexity and memory requirement are low, making it possible to implement this scheme in real time, which is valuable for a robot vision system.
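    The geometric core of such a scheme, predicting where a static pixel should reappear given its depth and the ego-motion derived from the motor signals, can be sketched as follows. The intrinsics, the motion convention (R, t taken as the transform between camera frames), and the tolerance value are illustrative assumptions, not the paper's calibration:

    ```python
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],   # hypothetical pinhole intrinsics
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])

    def predict_pixel(u, v, depth, R, t):
        """Predict where a static pixel (u, v) with known depth reappears
        after the camera moves; (R, t) maps the old camera frame to the new one."""
        X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project to 3D
        Xc = R @ X + t                                          # apply ego-motion
        p = K @ Xc
        return p[:2] / p[2]                                     # re-project

    def is_moving(u, v, u_obs, v_obs, depth, R, t, tol=2.0):
        """Classify a pixel as foreground if its observed position deviates from
        the static-background prediction by more than a tolerance (in pixels)."""
        pred = predict_pixel(u, v, depth, R, t)
        return np.hypot(pred[0] - u_obs, pred[1] - v_obs) > tol
    ```

    The tolerance plays the role of the paper's tolerance range; repeating the test against several reference frames then gives the multi-reference classification.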

  6. Holographic microscopy reconstruction in both object and image half spaces with undistorted 3D grid

    CERN Document Server

    Verrier, Nicolas; Tessier, Gilles; Gross, Michel

    2015-01-01

    We propose a holographic microscopy reconstruction method, which propagates the hologram, in the object half space, in the vicinity of the object. The calibration yields reconstructions with an undistorted reconstruction grid, i.e. with orthogonal x, y and z axes and constant pixel pitch. The method is validated with a USAF target imaged by a ×60 microscope objective, whose holograms are recorded and reconstructed for different USAF locations along the longitudinal axis: −75 to +75 µm. Since the reconstruction numerical phase mask, the reference phase curvature and the MO form an afocal device, the reconstruction can be interpreted as occurring equivalently in the object or in the image half space.

  7. A HIGHLY COLLIMATED WATER MASER BIPOLAR OUTFLOW IN THE CEPHEUS A HW3d MASSIVE YOUNG STELLAR OBJECT

    Energy Technology Data Exchange (ETDEWEB)

    Chibueze, James O.; Imai, Hiroshi; Tafoya, Daniel; Omodaka, Toshihiro; Chong, Sze-Ning [Department of Physics and Astronomy, Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065 (Japan); Kameya, Osamu; Hirota, Tomoya [Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Torrelles, Jose M., E-mail: james@milkyway.sci.kagoshima-u.ac.jp [Instituto de Ciencias del Espacio (CSIC)-UB/IEEC, Facultat de Fisica, Universitat de Barcelona, Marti i Franques 1, E-08028 Barcelona (Spain)

    2012-04-01

    We present the results of multi-epoch very long baseline interferometry (VLBI) water (H₂O) maser observations carried out with the VLBI Exploration of Radio Astrometry toward the Cepheus A HW3d object. We measured for the first time relative proper motions of the H₂O maser features, whose spatio-kinematics traces a compact bipolar outflow. This outflow looks highly collimated and expanding through ~280 AU (400 mas) at a mean velocity of ~21 km s⁻¹ (~6 mas yr⁻¹) without taking into account the turbulent central maser cluster. The opening angle of the outflow is estimated to be ~30°. The dynamical timescale of the outflow is estimated to be ~100 years. Our results provide strong support that HW3d harbors an internal massive young star, and the observed outflow could be tracing a very early phase of star formation. We also have analyzed Very Large Array archive data of 1.3 cm continuum emission obtained in 1995 and 2006 toward Cepheus A. The comparative result of the HW3d continuum emission suggests the possibility of the existence of distinct young stellar objects in HW3d and/or strong variability in one of their radio continuum emission components.

  8. Planning Setpoints for Contact Force Transitions in Regrasp Tasks of 3D Objects

    NARCIS (Netherlands)

    Grosch, Patrick; Suarez, Raul; Carloni, Raffaella; Melchiorri, Claudio

    2008-01-01

    This paper presents a simple and fast solution to the problem of finding the time variation of n contact forces that keep an object in equilibrium while one of the n contact forces is removed from or added to the grasp. The object is under a constant perturbation force, such as its own weight.

  9. Artificial Vision in 3D Perspective. For Object Detection On Planes, Using Points Clouds.

    Directory of Open Access Journals (Sweden)

    Catalina Alejandra Vázquez Rodriguez

    2014-02-01

    Full Text Available In this paper we present an artificial vision algorithm for the robot Golem-II+ that analyzes the robot's environment to detect planes and objects in the scene from point clouds captured with a Kinect device, estimating the possible objects and their number, distance and other characteristics. The clusters are then grouped to identify whether they are located on the same surface, in order to calculate the distance and the slope of the planes relative to the robot. Finally, each object is analyzed separately to determine whether it is possible to grasp it and, for empty surfaces, whether objects may be left on them, provided they are within a feasible distance. False positives such as the walls and the floor are ignored, since it is not possible to place objects on the walls, and the floor is out of range of the robot's arms.

  10. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    Science.gov (United States)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  11. Retrieval of 3D-position of a Passive Object Using Infrared LEDs and Photodiodes

    DEFF Research Database (Denmark)

    Christensen, Henrik Vie

    A sensor using infrared emitter/receiver pairs to determine the position of a passive object is presented. An array with a small number of infrared emitter/receiver pairs is proposed as the sensing part to acquire information on the object position. The emitters illuminate the object and the intens...... experiments show good accordance between actual and retrieved positions when tracking a ball. The ball has been successfully replaced by a human hand, and a "3D non-touch screen" with a human hand as "pointing device" is shown to be possible.

  12. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    Science.gov (United States)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
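    Step (1) of the pipeline, generating 2D images refocused at different depths, is commonly done by shift-and-add over the sub-aperture images: each sub-aperture view is sheared in proportion to its (u, v) offset and the results are averaged. A minimal sketch on a synthetic light field (not the Raytrix pipeline; the point-source data and disparity are made up for illustration):

    ```python
    import numpy as np

    def refocus(lf, alpha):
        """Shift-and-add refocusing: shear each sub-aperture image in proportion
        to its (u, v) offset and average. lf has shape (U, V, H, W)."""
        U, V, H, W = lf.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Synthetic light field: one point source whose position shifts by one pixel
    # per sub-aperture step, i.e. a disparity of 1 (a proxy for its depth).
    U = V = 5
    H = W = 31
    lf = np.zeros((U, V, H, W))
    for u in range(U):
        for v in range(V):
            lf[u, v, H // 2 - (u - U // 2), W // 2 - (v - V // 2)] = 1.0

    sharp = refocus(lf, alpha=1.0)    # focal parameter matches the disparity
    blurred = refocus(lf, alpha=0.0)  # wrong depth: energy stays spread out
    ```

    Sweeping alpha over a range of values and picking the sharpest response per pixel is one simple way steps (1) and (3) connect: the alpha that maximizes focus encodes range to target.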

  13. Correlation and 3D-tracking of objects by pointing sensors

    Energy Technology Data Exchange (ETDEWEB)

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
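    The triangulation step described above, taking the midpoint of the shortest segment between two lines of sight and using the segment's length (miss distance) as a gate for deciding whether two sensors see the same object, can be sketched as follows. The sensor positions and target are made up for illustration:

    ```python
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Midpoint of the shortest segment between lines of sight p1 + s*d1 and
        p2 + t*d2; also returns the miss distance (length of that segment)."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        r = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2   # a = c = 1 after normalisation
        e, f = d1 @ r, d2 @ r
        denom = a * c - b * b
        if abs(denom) < 1e-12:                # parallel lines of sight
            return None, np.inf
        s = (b * f - c * e) / denom
        t = (a * f - b * e) / denom
        q1, q2 = p1 + s * d1, p2 + t * d2     # closest points on each line
        return (q1 + q2) / 2, np.linalg.norm(q1 - q2)

    # Two hypothetical sensors observing the same target at (10, 5, 3).
    target = np.array([10.0, 5.0, 3.0])
    s1, s2 = np.array([0.0, 0.0, 0.0]), np.array([20.0, 0.0, 0.0])
    point, miss = triangulate(s1, target - s1, s2, target - s2)
    ```

    In practice the pointing vectors carry observation error, so the miss distance is compared against the stated error bounds; a small miss supports correlating the two tracks, a large one argues for keeping them separate.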

  14. Correlation and 3D-tracking of objects by pointing sensors

    Science.gov (United States)

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.

  15. A Method of Calculating the 3D Coordinates on a Micro Object in a Virtual Micro-Operation System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A simple method for calculating the 3D coordinates of points on a micro object in a multi-camera system is proposed. It simplifies the algorithms used in a traditional computer vision system by eliminating the calculation of the CCD (charge-coupled device) camera parameters and the relative position between cameras, and by using solid geometry in the calculation procedures instead of computations with complex matrices. The algorithm was used in research on generating a virtual magnified 3D image of a micro object to be operated on in a micro-operation system, and satisfactory results were obtained. The application in a virtual tele-operation system for a dexterous mechanical gripper is under test.

  16. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    Science.gov (United States)

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work.

  17. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    Energy Technology Data Exchange (ETDEWEB)

    Frary, R.; Louie, J. [UNR; Pullammanappallil, S. [Optim; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  18. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    OpenAIRE

    Chellappa, R; H. Moon

    2008-01-01

    We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the d...

  19. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    Science.gov (United States)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surfaces (patches) in the traditional way of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves whereby it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which involves evaluation of the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which only requires three arithmetic operations.
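    The two-dimensional discriminant test can be sketched directly: for a conic Ax² + Bxy + Cy² + Dx + Ey + F = 0, the sign of B² − 4AC (the three arithmetic operations mentioned in the abstract) determines the curve type. Degenerate conics are ignored in this sketch:

    ```python
    def classify_conic(A, B, C, eps=1e-12):
        """Classify a 2D conic A x^2 + B x y + C y^2 + D x + E y + F = 0
        by the sign of its discriminant B^2 - 4AC."""
        disc = B * B - 4 * A * C
        if disc < -eps:
            # circle is the special ellipse with A == C and no cross term
            return "circle" if abs(A - C) < eps and abs(B) < eps else "ellipse"
        if disc > eps:
            return "hyperbola"
        return "parabola"
    ```

    Collecting the classifications over several cutting-plane angles yields the per-object signature the paper describes: e.g. every planar section of a sphere is a circle, while a cone produces all four curve types depending on the cutting angle.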

  20. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    Science.gov (United States)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for an object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, exemplified here by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  1. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    Science.gov (United States)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths or the terahertz (THz) band have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances allowing standoff detection of suspicious objects and humans.

  2. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    Science.gov (United States)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  3. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NARCIS (Netherlands)

    Anisimov, A.; Groves, R.M.

    2015-01-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Although real-life objects especially in aerospace, transport or cultural heritage are not flat (e.g. aircraft leading edges or sculptures), their i

  4. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor

    Directory of Open Access Journals (Sweden)

    J. Fuentes‐Pacheco

    2009-12-01

    Full Text Available This paper presents a binocular eye-to-hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client-server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
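The linear trajectory prediction described here can be sketched as an ordinary least-squares constant-velocity fit per coordinate. This is a simplified stand-in for the record's linear predictors; the function name and the one-step-ahead horizon are assumptions:

```python
def predict_next(positions, dt=1.0):
    """Predict the next 3D position by fitting a constant-velocity model
    (ordinary least squares over the recent track) to each coordinate.

    `positions` is a list of (x, y, z) samples taken every `dt` seconds.
    """
    n = len(positions)
    ts = [i * dt for i in range(n)]
    t_mean = sum(ts) / n
    prediction = []
    for k in range(3):
        xs = [p[k] for p in positions]
        x_mean = sum(xs) / n
        denom = sum((t - t_mean) ** 2 for t in ts)
        slope = sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, xs)) / denom
        intercept = x_mean - slope * t_mean
        prediction.append(intercept + slope * (n * dt))  # extrapolate one step
    return tuple(prediction)
```

Because the fit uses the whole window rather than the last two samples, short occlusions merely freeze the input track instead of corrupting the velocity estimate.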

  5. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    2017-01-01

    . The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed...... by the presented edge descriptor. The second work presents an approach for robust detection of multiple objects by combining feature descriptors that capture both surface and edge information. This work presents quantitative results, where the performance of the developed feature descriptor combination is compared......-of-the-art descriptor and to this date, constant improvements of it are presented. The evaluation of PPFs is performed on seven publicly available datasets and it presents not only the performance comparison towards other popularly used methods, but also investigations of the space of possible point pair relations...

  6. THREE-IMAGE MATCHING FOR 3-D LINEAR OBJECT TRACKING

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper will discuss strategies for trinocular image rectification and matching for linear object tracking. It is well known that a pair of stereo images generates two epipolar images. Three overlapped images can yield six epipolar images in situations where any two are required to be rectified for the purpose of image matching. In this case, the search for feature correspondences is computationally intensive and matching complexity increases. A special epipolar image rectification for three stereo images, which simplifies the image matching process, is therefore proposed. This method generates only three rectified images, with the result that the search for matching features becomes more straightforward. With the three rectified images, a particular line-segment-based correspondence strategy is suggested. The primary characteristics of the feature correspondence strategy include application of specific epipolar geometric constraints and reference to three-ray triangulation residuals in object space.

  7. Spatio-Temporal Video Object Segmentation via Scale-Adaptive 3D Structure Tensor

    Directory of Open Access Journals (Sweden)

    Hai-Yun Wang

    2004-06-01

    Full Text Available To address the multiple motions and deformable objects' motions encountered in existing region-based approaches, an automatic video object (VO) segmentation methodology is proposed in this paper by exploiting the duality of image segmentation and motion estimation, such that spatial and temporal information can assist each other to jointly yield much improved segmentation results. The key novelties of our method are (1) scale-adaptive tensor computation, (2) spatially constrained motion mask generation without invoking dense motion-field computation, (3) rigidity analysis, (4) motion mask generation and selection, and (5) motion-constrained spatial region merging. Experimental results demonstrate that these novelties jointly contribute to much more accurate VO segmentation in both the spatial and temporal domains.

  8. Towards a Vision Algorithm Compiler for Recognition of Partially Occluded 3-D Objects

    Science.gov (United States)

    1992-11-20

    Figure 5: Example distributions of a given feature value (area) over a model face.

  9. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    Science.gov (United States)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    Light interactions with matter are of remarkable complexity. Adequate modeling of global illumination has been a vastly studied topic since the beginning of computer graphics, and it remains an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light in interaction with matter within an environment. This physical process possesses a high computational complexity when implemented in a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incident on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation solution. This work presents a review of the state of the art of global illumination algorithms and focuses on the efficiency of the solution in a computational implementation on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise, when considering several lighting model reflections and multiple light sources.
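The local reflection terms that such lighting models combine can be illustrated with a toy Lambert-plus-Phong evaluation over multiple light sources. This is a deliberately simple stand-in for the full rendering equation discussed above; the coefficients and function names are illustrative:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    length = math.sqrt(_dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, view, lights, kd=0.7, ks=0.3, shininess=32):
    """Local illumination: Lambert diffuse plus Phong specular terms,
    summed over multiple light directions (all vectors point away from
    the surface)."""
    n = _normalize(normal)
    v = _normalize(view)
    intensity = 0.0
    for light in lights:
        l = _normalize(light)
        ndotl = _dot(n, l)
        if ndotl <= 0.0:
            continue  # light is behind the surface
        # mirror reflection of the light direction about the normal
        r = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(n, l))
        intensity += kd * ndotl + ks * max(_dot(r, v), 0.0) ** shininess
    return intensity
```

Global illumination methods go beyond this by making `lights` implicit: each surface point receives energy recursively from every other surface, which is what drives the computational cost noted in the abstract.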

  10. Integrated view and path planning for a fully autonomous mobile-manipulator system for 3D object modeling

    OpenAIRE

    Torabi, Liila

    2011-01-01

    We have designed and implemented a fully autonomous system for building a 3D model of an object in situ. Our system assumes no knowledge of the object other than that it is within a bounding box whose location and size are known a priori; furthermore, the environment is unknown. The system consists of a mobile manipulator: a PowerBot mobile base with a six-degrees-of-freedom (DOF) PowerCube arm mounted on it. The arm and the PowerBot are equipped with line-scan range sensors, which provide ra...

  11. Depth map calculation for a variable number of moving objects using Markov sequential object processes

    NARCIS (Netherlands)

    Lieshout, M.N.M. van

    2008-01-01

    We advocate the use of Markov sequential object processes for tracking a variable number of moving objects through video frames with a view towards depth calculation. A regression model based on a sequential object process quantifies goodness of fit; regularization terms are incorporated to control

  12. 3D profile measurements of objects by using zero order Generalized Morse Wavelet

    Science.gov (United States)

    Kocahan, Özlem; Durmuş, Çağla; Elmas, Merve Naz; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat

    2017-02-01

    Generalized Morse wavelets are proposed to evaluate the phase information from projected fringe pattern with the spatial carrier frequency in the x direction. The height profile of the object is determined through the phase change distribution by using the phase of the continuous wavelet transform. The phase distribution is extracted from the optical fringe pattern choosing zero order Generalized Morse Wavelet (GMW) as a mother wavelet. In this study, standard fringe projection technique is used for obtaining images. Experimental results for the GMW phase method are compared with the results of Morlet and Paul wavelet transform.

  13. Multi-objective optimization of a 3D vaneless diffuser based on fuzzy theory

    Institute of Scientific and Technical Information of China (English)

    Chuang GAO; Chuangang GU; Tong WANG; Xinwei SHU

    2008-01-01

    An optimization model based on fuzzy theory was set up, and the corresponding interactive modified simplex (IMS) method was developed to solve it. Both static pressure recovery and total pressure loss were considered in the model. A computational fluid dynamics (CFD) method was applied to solve the Reynolds-Averaged Navier-Stokes (RANS) equations and to find the flow field distribution used to evaluate the objective function. After receiving the new shroud curve, grid movement and redrawing technology were adopted to avoid grid-line crossing and negative cells. The shroud curve was fitted with a B-spline. The optimized results concur with those reported in the references.

  14. Prototyping a sensor enabled 3D citymodel on geospatial managed objects

    DEFF Research Database (Denmark)

    Kjems, Erik; Kolář, Jan

    2013-01-01

    On several occasions we have been advocating for a new and advanced formulation of real world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, where the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper will be focusing on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focusing on prototyping...... one constraint software design complex.

  15. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Directory of Open Access Journals (Sweden)

    R. Chellappa

    2008-03-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The “shape filter” has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For a complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.
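The predict-weight-resample cycle underlying this approach can be sketched generically. The record's shape-encoded filter response is abstracted here to an arbitrary non-negative scoring function; the function names are illustrative:

```python
import random

def particle_filter_step(particles, weights, motion, likelihood):
    """One predict-weight-resample cycle of a particle filter.

    `motion` propagates a state through the (possibly stochastic) motion
    model; `likelihood` scores its fit to the current image, playing the
    role of the shape-encoded filter response.
    """
    # predict: diffuse each particle through the motion model
    particles = [motion(p) for p in particles]
    # update: the (unnormalized) weights realize the conditional density
    weights = [w * likelihood(p) for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample: draw a new particle set proportionally to the weights
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return resampled, [1.0 / len(particles)] * len(particles)
```

The Kalman-based adjustment of the diffusion amount mentioned in the abstract would live inside `motion`, shrinking or growing the random perturbation as tracking confidence changes.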

  17. 3D shape measurement of objects with high dynamic range of surface reflectivity.

    Science.gov (United States)

    Liu, Gui-hua; Liu, Xian-Yong; Feng, Quan-Yuan

    2011-08-10

    This paper presents a method that allows a conventional dual-camera structured light system to directly acquire the three-dimensional shape of the whole surface of an object with a high dynamic range of surface reflectivity. To reduce the degradation in area-based correlation caused by specular highlights and diffused darkness, we first disregard these highly specular and dark pixels. Then, to recover the unmatched area data, the binocular vision system is also used as two camera-projector monocular systems operating from different viewing angles at the same time, filling in the missing data of the binocular reconstruction. This method involves producing measurable images by integrating techniques such as multiple exposures and high dynamic range imaging to ensure the capture of a high-quality phase at each point. An image-segmentation technique is also introduced to determine which monocular system is suitable to reconstruct a given lost point accurately. Our experiments demonstrate that these techniques extend the measurable areas on surfaces with a high dynamic range of reflectivity, such as specular objects or scenes with high contrast, to the whole projector-illuminated field.
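The multiple-exposure idea can be sketched as per-pixel exposure selection: for every pixel, keep the longest exposure that is neither saturated nor underexposed, and normalize by exposure time. This is a minimal sketch, not the record's pipeline; the validity thresholds are illustrative:

```python
def merge_exposures(images, exposures, low=10, high=245):
    """Per-pixel fusion of multiple exposures of the same scene.

    `images` is a list of 2D intensity arrays (nested lists, 0-255) and
    `exposures` the matching exposure times. Returns a relative radiance
    map: the longest well-exposed value at each pixel, divided by its
    exposure time.
    """
    h, w = len(images[0]), len(images[0][0])
    radiance = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # scan from the longest to the shortest exposure
            for img, t in sorted(zip(images, exposures), key=lambda it: -it[1]):
                value = img[y][x]
                if low <= value <= high:
                    radiance[y][x] = value / t
                    break
            else:
                # every exposure clipped: fall back to the shortest one
                img, t = min(zip(images, exposures), key=lambda it: it[1])
                radiance[y][x] = img[y][x] / t
    return radiance
```

Applied to the fringe images, this keeps specular pixels from saturating (short exposure wins there) while dark regions still get the long-exposure signal-to-noise ratio.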

  18. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging.

    Science.gov (United States)

    Wang, Yexin; Negahdaripour, Shahriar; Aykin, Murat D

    2016-08-20

    Establishing the projection model of imaging systems is critical in 3D reconstruction of object shapes from multiple 2D views. When deployed underwater, these are enclosed in waterproof housings with transparent glass ports that generate nonlinear refractions of optical rays at interfaces, leading to invalidation of the commonly assumed single-viewpoint (SVP) model. In this paper, we propose a non-SVP ray tracing model for the calibration of a projector-camera system, employed for 3D reconstruction based on the structured light paradigm. The projector utilizes dot patterns, having established that the contrast loss is less severe than for traditional stripe patterns in highly turbid waters. Experimental results are presented to assess the achieved calibrating accuracy.
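The refraction that breaks the SVP assumption follows Snell's law at each interface of the flat port. A minimal sketch of the angle change along one ray, with typical refractive indices as assumed values:

```python
import math

def refract_flat_port(incidence_deg, n_air=1.0, n_glass=1.5, n_water=1.33):
    """Angle (degrees from the port normal) of a camera ray after the
    air-glass and glass-water interfaces of a flat housing port, via
    Snell's law. The angle-dependent bending is exactly what invalidates
    the single-viewpoint (SVP) model."""
    theta_air = math.radians(incidence_deg)
    theta_glass = math.asin(n_air * math.sin(theta_air) / n_glass)
    theta_water = math.asin(n_glass * math.sin(theta_glass) / n_water)
    return math.degrees(theta_water)
```

Note that for a flat port the glass index cancels (n_air·sin θ_air = n_water·sin θ_water), yet rays at different incidence angles no longer meet in a single center of projection, which is why a ray-tracing calibration model is needed.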

  19. 2D and 3D object measurement for control and quality assurance in the industry

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    The subject of this dissertation is object measurement in the industry by use of computer vision. In the first part of the dissertation, the project is defined in an industrial frame. The reader is introduced to Odense Steel Shipyard and its current level of automation. The presentation gives...... an impression of the potential of vision technology in shipbuilding. The next chapter describes different important properties of industrial vision cameras. The presentation is based on practical experience obtained during the Ph.D. project. The geometry that defines the link between the observed world...... of OSS Mock-Up''. This report describes a preliminary attempt to apply a method of Euclidean reconstruction from a sequence of images on a ship block. The other three chapters describe vision installations that have been made at Odense Steel Shipyard. The first installation uses vision for check...

  20. 2D and 3D object measurement for control and quality assurance in the industry

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    and the projected image is the subject of the two next chapters. The first chapter gives a short introduction to projective algebra, which is extremely useful for modelling the image projection and the relation between more images of the same object viewed from different positions. It provides a basis...... for understanding many of the results later in the dissertation. In the second chapter a variety of different camera models are described. The relation between different models is explained and a guide is given to the interpretation of the model parameters. The following chapter deals with the problem of camera...... of the geometry is only relevant if features can be detected accurately in the images. This is the subject of the next chapter, where reference mark detection and straight edge detection are treated in two separate sections. The detection of reference marks is based on a parametric model, and it is shown...

  1. ELTs Adaptive Optics for Multi-Objects 3D Spectroscopy Key Parameters and Design Rules

    CERN Document Server

    Neichel, B; Fusco, T; Gendron, E; Puech, M; Rousset, G; Hammer, F

    2006-01-01

    In the last few years, new Adaptive Optics [AO] techniques have emerged to answer new astronomical challenges: Ground-Layer AO [GLAO] and Multi-Conjugate AO [MCAO] to access a wider Field of View [FoV], Multi-Object AO [MOAO] for the simultaneous observation of several faint galaxies, eXtreme AO [XAO] for the detection of faint companions. In this paper, we focus our study to one of these applications : high red-shift galaxy observations using MOAO techniques in the framework of Extremely Large Telescopes [ELTs]. We present the high-level specifications of a dedicated instrument. We choose to describe the scientific requirements with the following criteria : 40% of Ensquared Energy [EE] in H band (1.65um) and in an aperture size from 25 to 150 mas. Considering these specifications we investigate different AO solutions thanks to Fourier based simulations. Sky Coverage [SC] is computed for Natural and Laser Guide Stars [NGS, LGS] systems. We show that specifications are met for NGS-based systems at the cost of ...

  2. Perception of 3D spatial relations for 3D displays

    Science.gov (United States)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
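The discriminability measure d' used here is standard signal detection theory: d' = z(H) − z(F), the difference of the probit-transformed hit and false-alarm rates. A dependency-free sketch (the bisection-based probit is an implementation convenience, not from the record):

```python
import math

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability d' = z(H) - z(F), with the
    inverse standard-normal CDF computed by bisection over erf."""
    def probit(p, lo=-10.0, hi=10.0):
        for _ in range(200):
            mid = (lo + hi) / 2.0
            cdf = 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0)))
            if cdf < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    return probit(hit_rate) - probit(false_alarm_rate)
```

For example, a hit rate of about 0.84 against a false-alarm rate of 0.5 yields d' ≈ 1, and chance performance (equal rates) yields d' = 0.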

  3. SSV3D: Simulator of Vectorial Shadows by Solar Radiation on 3D Computerized Objects

    Directory of Open Access Journals (Sweden)

    S. Gómez

    2005-01-01

    Full Text Available SSV3D, a simulator of vectorial shadows cast by solar radiation on three-dimensional objects, is presented as a graphic computing tool developed on the three-dimensional platform of AUTOCAD 2004. The software simulates direct solar radiation vectorially, calculating and tracing the shadow outlines on the illuminated planes of the evaluated 3D model. During development, the analytical results of this tool were verified by comparison with those obtained from the formulas of a spreadsheet, and the graphical results by comparison with the shadows cast in simulations with a heliodon (French technology) and by the AUTOCAD renderer. The SSV3D simulator responded satisfactorily to the needs of studying solar protection systems identified in previously developed research.

  4. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above.

    Science.gov (United States)

    Cornelis, Els V K; van Doorn, Andrea J; Wagemans, Johan

    2016-05-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame.

  5. Determining the orientation of an object in 3D space using a direction cosine matrix and a non-stationary Kalman filter

    Directory of Open Access Journals (Sweden)

    Bieda Robert

    2016-06-01

    Full Text Available This paper describes a method that determines the parameters of an object's orientation in 3D space. The calculation of the rotation angles is based on the fusion of signals obtained from an inertial measurement unit (IMU). The IMU provides information from linear acceleration sensors (accelerometers), Earth's magnetic field sensors (magnetometers) and angular velocity sensors (gyroscopes). Information about the object orientation is presented in the form of a direction cosine matrix whose elements are observed in the state vector of a non-stationary Kalman filter. The vector components allow the rotation angles (roll, pitch and yaw) associated with the object to be determined. The resulting waveforms, for different rotation angles, are free of the negative attributes associated with the construction and operation of the IMU measuring system. The described solution enables simple, fast and effective implementation of the proposed method in IMU measuring systems.
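The final step, recovering roll, pitch and yaw from the estimated direction cosine matrix, can be sketched directly. A minimal sketch assuming the common aerospace Z-Y-X rotation sequence (the record does not state its convention); a builder is included only for round-trip checking:

```python
import math

def euler_to_dcm(roll, pitch, yaw):
    """Direction cosine matrix for the Z-Y-X (yaw-pitch-roll) sequence."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def dcm_to_euler(r):
    """Roll, pitch, yaw (radians) from a 3x3 direction cosine matrix,
    valid away from the pitch = +/-90 degree singularity."""
    pitch = -math.asin(r[2][0])
    roll = math.atan2(r[2][1], r[2][2])
    yaw = math.atan2(r[1][0], r[0][0])
    return roll, pitch, yaw
```

In the paper's setting the nine DCM elements sit in the Kalman filter's state vector, and `dcm_to_euler` is the read-out applied to the filtered estimate.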

  6. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Suh, T [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Jenkins, C [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); Department of Mechanical Engineering, Stanford University, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth-sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO phantom arranged in a TBI treatment booth, detecting and storing the 3D surface in point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error of the thickness measurements at each section of the phantom, compared to CT, was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation in the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed an overall dose difference within 3%. Conclusion: Motion cameras and depth-sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.
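The compensator thickness map follows from requiring each ray to be attenuated down to the prescribed dose. A minimal sketch assuming simple exponential attenuation D = D0·exp(−μt), which is a simplification of the record's compensator design; the attenuation coefficient is a caller-supplied assumption:

```python
import math

def compensator_thickness(open_field_dose, prescribed_dose, mu_per_cm):
    """Compensator thickness (cm) needed to attenuate the beam from
    `open_field_dose` down to `prescribed_dose`, assuming exponential
    attenuation with linear coefficient `mu_per_cm`."""
    if open_field_dose <= prescribed_dose:
        return 0.0  # no attenuation needed at this point
    return math.log(open_field_dose / prescribed_dose) / mu_per_cm
```

Evaluating this per pixel of the 2D dose map yields the thickness map that the 3D printer (via the silicone mold and tungsten powder mix) realizes.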

  7. Expanding the degree of freedom of observation on depth-direction by the triple-separated slanted parallax barrier in autostereoscopic 3D display

    Science.gov (United States)

    Lee, Kwang-Hoon; Choe, Yeong-Seon; Lee, Dong-Kil; Kim, Yang-Gyu; Park, Youngsik; Park, Min-Chul

    2013-05-01

    An autostereoscopic multi-view 3D display system has a narrower degree of freedom in observation directions, both horizontal and perpendicular to the display plane, than the glasses-on type. In this paper, we propose an innovative method that expands the width of the formed viewing zone in the depth direction while keeping the number of views in the horizontal direction, by using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-off 3D display. The validity of the proposal is verified by optical simulation based on an environment similar to an actual case. In terms of benefits, the maximum number of views displayed in the horizontal direction is 2n, and the width of the viewing zone in the depth direction is increased by up to 3.36 times compared to the existing one-layered parallax barrier system.

  8. An Iterative Algorithm for Kinoform Computation of 3D Object

    Institute of Scientific and Technical Information of China (English)

    裴闯; 蒋晓瑜; 王加; 张鹏炜

    2013-01-01

    A novel method for computing the kinoform of a 3D object, based on the traditional iterative Fourier transform algorithm, is described. The method divides the three-dimensional object into many object planes by a tomographic technique and treats every object plane as a target image; iterative computation is then carried out between one input plane (the kinoform) and several output planes (the reconstructed images). A distance-dependent phase factor is added to the iterative process to represent the depth of each object plane, capturing the three-dimensional character of the object. Experimental results show that the algorithm has good convergence and reconstruction performance. Finally, the influence of the number and spacing of the object planes on the holographic reconstruction quality is analyzed, and a time-division multiplexing method with a liquid-crystal spatial light modulator is used to reconstruct the several object planes of the 3D object.
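The iterative Fourier transform loop described here is essentially the Gerchberg-Saxton scheme, alternating constraints between the kinoform plane and an image plane. A 1D single-plane sketch (a naive O(n²) DFT keeps it dependency-free; the depth phase factor for multiple planes is omitted):

```python
import cmath
import math

def _dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (for sketch purposes only)."""
    n = len(x)
    sign = 1.0 if inverse else -1.0
    out = []
    for k in range(n):
        s = sum(x[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n))
        out.append(s / n if inverse else s)
    return out

def gerchberg_saxton(target_amp, iterations=20):
    """Alternate between the kinoform plane (unit amplitude, free phase)
    and the image plane (target amplitude, free phase); returns the
    phase-only kinoform field."""
    n = len(target_amp)
    field = [cmath.exp(2j * math.pi * k / n) for k in range(n)]  # phase guess
    for _ in range(iterations):
        image = _dft(field)
        # image-plane constraint: impose the target amplitude, keep the phase
        image = [a * cmath.exp(1j * cmath.phase(v))
                 for a, v in zip(target_amp, image)]
        field = _dft(image, inverse=True)
        # kinoform constraint: phase-only element, so force unit amplitude
        field = [cmath.exp(1j * cmath.phase(v)) for v in field]
    return field
```

The record's multi-plane variant repeats the image-plane constraint once per tomographic layer, each propagated with its own distance-dependent phase factor, before folding the result back into the single kinoform.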

  9. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    Science.gov (United States)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  10. Visual retrieval of known objects using supplementary depth data

    Science.gov (United States)

    Śluzek, Andrzej

    2016-06-01

    A simple modification of typical content-based visual information retrieval (CBVIR) techniques (e.g. MSER keypoints represented by SIFT descriptors quantized into sufficiently large vocabularies) is discussed and preliminarily evaluated. By using the approximate depths (as supplementary data) of the detected keypoints, we can significantly improve the credibility of keypoint matching, so that known objects (i.e. objects for which exemplary images are available in the database) can be detected at low computational cost. Thus, the method can be particularly useful in real-time applications of machine vision systems (e.g. in intelligent robotic devices). The paper presents a theoretical model of the method and provides exemplary results for selected scenarios.
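    The depth-consistency idea above can be sketched as a match filter: for a rigid known object, the query/model depth ratios of correct keypoint matches should agree, so outlier matches can be rejected cheaply. This is an illustrative filter, not the paper's exact algorithm; the tolerance value is an assumption.

```python
import numpy as np

def filter_matches_by_depth(depth_query, depth_model, tol=0.2):
    """Keep keypoint matches whose query/model depth ratio agrees with the
    median ratio (a rigid object scales all keypoint depths consistently)."""
    depth_query = np.asarray(depth_query, dtype=float)
    depth_model = np.asarray(depth_model, dtype=float)
    ratios = depth_query / depth_model
    median = np.median(ratios)
    return np.abs(ratios - median) <= tol * median

# toy example: five matches, the last one depth-inconsistent
dq = [1.0, 1.1, 0.9, 1.0, 3.0]
dm = [1.0, 1.1, 0.9, 1.0, 1.0]
mask = filter_matches_by_depth(dq, dm)
```

Such a filter runs in linear time over the candidate matches, which is what makes the supplementary depth data attractive for real-time retrieval.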

  11. Method for 3D Object Reconstruction Using Several Portion of 2D Images from the Different Aspects Acquired with Image Scopes Included in the Fiber Retractor

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2012-12-01

    Full Text Available A method for 3D object reconstruction using several portions of 2D images from different aspects, acquired with image scopes included in the fiber retractor, is proposed. Experimental results show a great possibility for reconstructing a 3D object of acceptable quality on the computer from several 2D images viewed from different aspects.

  12. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR - combining techniques towards an objective multiparametric interpretation

    Science.gov (United States)

    Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.

    2015-09-01

    Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was firstly compared to results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored allowing unbiased input for future (re-)investigations.
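    The iso cluster analysis of the photomosaic is, at its core, unsupervised clustering of per-pixel band vectors. A minimal k-means sketch of that idea (illustrative only, not the GIS tool actually used; the deterministic initialization and iteration count are assumptions):

```python
import numpy as np

def kmeans_classify(pixels, k=2, iters=20):
    """Tiny ISO-cluster-style classifier: k-means on per-pixel band
    vectors; returns one integer cluster label per pixel."""
    pixels = np.asarray(pixels, dtype=float)
    # deterministic init: pick centroids spread along the brightness range
    order = np.argsort(pixels.sum(axis=1))
    centroids = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]]
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then update centroids
        d = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels

# two synthetic "stratigraphic units": dark vs bright 3-band pixels
dark = np.full((10, 3), 0.1)
bright = np.full((10, 3), 0.9)
labels = kmeans_classify(np.vstack([dark, bright]), k=2)
```

Adjacent stratigraphic units separate when their multispectral signatures differ, which is exactly the distinction the workflow exploits.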

  13. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR – combining techniques towards an objective multiparametric interpretation

    Directory of Open Access Journals (Sweden)

    S. Schneiderwind

    2015-09-01

    Full Text Available Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was firstly compared to results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored allowing unbiased input for future (re-)investigations.

  14. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    Science.gov (United States)

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces, and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
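    Reduced to its core, the supervision loop described above compares successive depth maps of the grasped surface and raises an event when enough points have moved. A minimal sketch of that event logic (the thresholds and map sizes are assumptions, not the paper's values):

```python
import numpy as np

def detect_deformation(depth_before, depth_after, thresh=0.005, min_pixels=20):
    """Raise an event when at least `min_pixels` surface points moved more
    than `thresh` metres between two consecutive depth maps."""
    diff = np.abs(np.asarray(depth_after) - np.asarray(depth_before))
    return bool((diff > thresh).sum() >= min_pixels)

flat = np.full((50, 50), 0.40)      # object surface 40 cm from the sensor
dented = flat.copy()
dented[20:30, 20:30] -= 0.02        # a 2 cm dent over a 10x10 pixel patch
```

In a real pipeline such a boolean would trigger the event message to the robot controller so the grasp force can be relaxed.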

  15. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    Science.gov (United States)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by the limited adaptability of state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high-throughput (>10^-7 m^2 s^-1), large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.
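    The reported JKR-type scaling law, b^1.5 ∝ D, implies that the feature size grows as the two-thirds power of the fiber mask diameter. A quick numerical check of what that means for design (the proportionality constant C is a placeholder, not a value from the paper):

```python
# Scaling law from the abstract: b**1.5 is proportional to D,
# i.e. b is proportional to D**(2/3).
def feature_size(D_nm, C=1.0):
    """Feature size b for fiber diameter D under b**1.5 = C * D.
    C is a hypothetical material/process-dependent constant."""
    return (C * D_nm) ** (2.0 / 3.0)

# an 8x larger fiber diameter yields only a 4x larger feature
r = feature_size(800.0) / feature_size(100.0)
```

The sub-linear exponent is what lets micron-diameter fibers still mask sub-micron features.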

  16. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    Science.gov (United States)

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.
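    The abstract positions the method as the natural extension of Tomasi and Kanade's factorization for 3D-to-2D orthographic data. The classical rank-3 factorization it builds on can be sketched on synthetic data (affine ambiguity left unresolved, no metric upgrade step):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 12                                    # number of 3D points
X = rng.standard_normal((3, P))           # ground-truth structure
F = 5                                     # number of frames
W = np.zeros((2 * F, P))                  # stacked 2D orthographic views
for f in range(F):
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random rotation
    # orthographic projection: first two rows of R, plus a 2D translation
    W[2 * f:2 * f + 2] = R[:2] @ X + rng.standard_normal((2, 1))

# Tomasi-Kanade: remove per-frame centroids, then rank-3 factorization
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M = U[:, :3] * s[:3]                      # motion, up to an affine ambiguity
S = Vt[:3]                                # structure, up to the same ambiguity
err = np.linalg.norm(W0 - M @ S)          # noise-free data: exactly rank 3
```

The paper's 3D-to-1D range-data setting replaces each 2xP block with a single projection row, but the centering-then-factorization structure is the same.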

  17. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Science.gov (United States)

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces, and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  18. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Directory of Open Access Journals (Sweden)

    Carlos M. Mateo

    2016-05-01

    Full Text Available Sensing techniques are important for solving problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces, and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor

  19. A Mathematical and Numerically Integrable Modeling of 3D Object Grasping under Rolling Contacts between Smooth Surfaces

    Directory of Open Access Journals (Sweden)

    Suguru Arimoto

    2011-01-01

    Full Text Available A computable model of grasping and manipulation of a 3D rigid object with arbitrary smooth surfaces by multiple robot fingers with smooth fingertip surfaces is derived under rolling contact constraints between surfaces. Geometrical conditions of pure rolling contacts are described through the moving-frame coordinates at each rolling contact point under the postulates: (1) the two surfaces share a single common contact point, without any mutual penetration, and a common tangent plane at the contact point; and (2) the path lengths traced by the contact point on the robot fingertip surface and on the object surface are equal. It is shown that a set of Euler-Lagrange equations of motion of the fingers-object system can be derived by introducing Lagrange multipliers corresponding to the geometric conditions of contact. A set of first-order differential equations governing the rotational motions of each fingertip and the object, and updating the arc-length parameters, must accompany the Euler-Lagrange equations. Furthermore, nonholonomic constraints arising from twisting between the two normal axes to each tangent plane are rewritten into a set of Frenet-Serret equations with a geometrically given normal curvature and a motion-induced geodesic curvature.

  20. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    Science.gov (United States)

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  1. Acquiring multi-viewpoint image of 3D object for integral imaging using synthetic aperture phase-shifting digital holography

    Science.gov (United States)

    Jeong, Min-Ok; Kim, Nam; Park, Jae-Hyeung; Jeon, Seok-Hee; Gil, Sang-Keun

    2009-02-01

    We propose a method for generating elemental images for integral imaging, an auto-stereoscopic three-dimensional display technique, using phase-shifting digital holography. Phase-shifting digital holography is a way of recording the digital hologram by changing the phase of the reference beam and extracting the complex field of the object beam. Since all 3D information is captured by phase-shifting digital holography, the elemental images for any specification of the lens array can be generated from a single phase-shifting digital hologram. We expanded the viewing angle of the generated elemental images by using a synthetic aperture phase-shifting digital hologram. The principle of the proposed method is verified experimentally.
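    The complex-field extraction step can be illustrated numerically with standard four-step phase shifting: four interferograms are recorded with reference phases 0, π/2, π, 3π/2, and the object field follows in closed form. This is a generic sketch with synthetic fields and a plane reference wave, not the paper's optical setup:

```python
import numpy as np

# Synthesize four interferograms I_k = |O + R*exp(i*k*pi/2)|^2 for a known
# object field O and a plane reference wave of amplitude R, then recover O.
rng = np.random.default_rng(1)
O = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
R = 2.0

I = [np.abs(O + R * np.exp(1j * k * np.pi / 2)) ** 2 for k in range(4)]

# Four-step phase-shifting formula: O * conj(R) = [(I0-I2) + i*(I1-I3)] / 4
O_rec = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / (4 * np.conj(R))
err = np.abs(O_rec - O).max()
```

With the full complex field in hand, elemental images for any lens-array specification can be computed by numerical propagation, which is the flexibility the abstract emphasizes.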

  2. Depth-of-Focus Correction in Single-Molecule Data Allows Analysis of 3D Diffusion of the Glucocorticoid Receptor in the Nucleus.

    Directory of Open Access Journals (Sweden)

    Rolf Harkes

    Full Text Available Single-molecule imaging of proteins in a 2D environment like membranes has been frequently used to extract diffusive properties of multiple fractions of receptors. In a 3D environment the apparent fractions however change with observation time due to the movements of molecules out of the depth-of-field of the microscope. Here we developed a mathematical framework that allowed us to correct for the change in fraction size due to the limited detection volume in 3D single-molecule imaging. We applied our findings on the mobility of activated glucocorticoid receptors in the cell nucleus, and found a freely diffusing fraction of 0.49±0.02. Our analysis further showed that interchange between this mobile fraction and an immobile fraction does not occur on time scales shorter than 150 ms.
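    The effect being corrected for, molecules diffusing out of the depth-of-field so that the apparent in-focus fraction shrinks with observation time, is easy to reproduce in a toy simulation (all parameter values below are illustrative, not those of the glucocorticoid-receptor data):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1.0           # um^2/s, assumed diffusion coefficient
L = 0.7           # um, assumed depth of field (slab |z| <= L/2)
dt = 0.01         # s, frame interval
n = 20000         # molecules, all starting inside the slab

z = rng.uniform(-L / 2, L / 2, n)
inside = []
for step in range(30):
    # 1D Brownian step along the optical axis: sigma = sqrt(2*D*dt)
    z = z + rng.normal(0.0, np.sqrt(2 * D * dt), n)
    inside.append(float(np.mean(np.abs(z) <= L / 2)))
```

The monotone decay of `inside` is exactly why, without a correction, a freely diffusing population masquerades as a time-dependent "mobile fraction".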

  3. Bringing Cosmic Objects Down to Earth: An Overview of 3D Modelling and Printing in Astronomy and Astronomy Communication

    Science.gov (United States)

    Arcand, K.; Megan, W.; DePasquale, J.; Jubett, A.; Edmonds, P.; DiVona, K.

    2017-09-01

    Three-dimensional (3D) modelling is more than just good fun; it offers a new vehicle to represent and understand scientific data and gives experts and non-experts alike the ability to manipulate models and gain new perspectives on data. This article explores the use of 3D modelling and printing in astronomy and astronomy communication and looks at some of the practical challenges, and solutions, to using 3D modelling, visualisation and printing in this way.

  4. Identifying Objective EEG Based Markers of Linear Vection in Depth

    Science.gov (United States)

    Palmisano, Stephen; Barry, Robert J.; De Blasio, Frances M.; Fogarty, Jack S.

    2016-01-01

    This proof-of-concept study investigated whether a time-frequency EEG approach could be used to examine vection (i.e., illusions of self-motion). In the main experiment, we compared the event-related spectral perturbation (ERSP) data of 10 observers during and directly after repeated exposures to two different types of optic flow display (each was 35° wide by 29° high and provided 20 s of motion stimulation). Displays consisted of either a vection display (which simulated constant velocity forward self-motion in depth) or a control display (a spatially scrambled version of the vection display). ERSP data were decomposed using time-frequency Principal Components Analysis (t–f PCA). We found an increase in 10 Hz alpha activity, peaking some 14 s after display motion commenced, which was positively associated with stronger vection ratings. This followed decreases in beta activity, and was also followed by a decrease in delta activity; these decreases in EEG amplitudes were negatively related to the intensity of the vection experience. After display motion ceased, a series of increases in the alpha band also correlated with vection intensity, and appear to reflect vection- and/or motion-aftereffects, as well as later cognitive preparation for reporting the strength of the vection experience. Overall, these findings provide support for the notion that EEG can be used to provide objective markers of changes in both vection status (i.e., “vection/no vection”) and vection strength. PMID:27559328
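    The time-frequency PCA pipeline, a spectrogram first, then PCA over its time course, can be sketched as below. The synthetic "alpha burst" signal and the window sizes are assumptions for illustration, not the study's recordings or its exact t-f PCA variant:

```python
import numpy as np

def stft_power(x, win=128, hop=64):
    """Power spectrogram via a Hann-windowed short-time FFT (time x freq)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

fs = 256
t = np.arange(0, 4, 1 / fs)
# toy "EEG": a 10 Hz alpha component whose amplitude grows over the epoch
x = np.sin(2 * np.pi * 10 * t) * np.linspace(0.2, 1.5, t.size)
x += 0.1 * np.random.default_rng(3).standard_normal(t.size)

S = stft_power(x)
S0 = S - S.mean(axis=0, keepdims=True)   # mean-center over time
U, s, Vt = np.linalg.svd(S0, full_matrices=False)   # PCA via SVD
alpha_bin = 10 * 128 // fs               # FFT bin of the 10 Hz component
dominant = int(np.abs(Vt[0]).argmax())   # frequency driving component 1
```

The first principal component's loading peaks at the alpha bin because the growing 10 Hz power is the dominant time-varying structure, mirroring how vection-related alpha changes surface in the decomposition.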

  5. Physical security and cyber security issues and human error prevention for 3D printed objects: detecting the use of an incorrect printing material

    Science.gov (United States)

    Straub, Jeremy

    2017-06-01

    A wide variety of characteristics of 3D printed objects have been linked to impaired structural integrity and use-efficacy. The printing material can also have a significant impact on the quality, utility and safety characteristics of a 3D printed object. Material issues can be created by vendor issues, physical security issues and human error. This paper presents and evaluates a system that can be used to detect incorrect material use in a 3D printer, using visible light imaging. Specifically, it assesses the ability to ascertain the difference between materials of different color and different types of material with similar coloration.
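    A visible-light material check of the kind described reduces to comparing observed colour statistics of the printed region against known references. A toy sketch of that decision rule (the reference colours and distance threshold are invented for illustration, not the paper's calibration):

```python
import numpy as np

# hypothetical reference colours (mean RGB) for known filaments
REFERENCE = {"red_pla": (200, 40, 35), "blue_abs": (30, 60, 190)}

def classify_material(region_rgb, max_dist=60.0):
    """Match the mean colour of an imaged print region against known
    filaments; flag a mismatch when nothing is close enough."""
    mean = np.asarray(region_rgb, float).reshape(-1, 3).mean(axis=0)
    dists = {k: np.linalg.norm(mean - v) for k, v in REFERENCE.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= max_dist else "unknown"

patch = np.tile([195, 45, 40], (100, 1))   # region that looks like red PLA
```

An "unknown" result would halt the print, covering both the wrong-spool human error and the substituted-material attack scenarios the paper considers.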

  6. Object-oriented philosophy in designing adaptive finite-element package for 3D elliptic differential equations

    Science.gov (United States)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving more and more attention in scientific and engineering fields, its efficient implementation remains an open problem because of its complex procedures. In this paper, we propose a clear C++ framework to show the powerful properties of object-oriented programming (OOP) in designing such a complex adaptive procedure. Using the modular facilities of an OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation and refinement, the a-posteriori error estimator, the adaptive strategy, and the final post-processing. After proper designs are performed locally on these separate modules, a connected framework for the adaptive procedure is finally formed. Based on the general elliptic differential equation, little effort needs to be added to the adaptive framework to run practical simulations. To show the preferable properties of the OOP adaptive design, two numerical examples are tested. The first is the 3D direct-current resistivity problem, in which the power of the framework is shown, as only small additions are required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP approach. Finally, we believe that, based on this modular adaptive implementation using OOP methodology, more advanced adaptive analysis systems will become available in the future.
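    The modular decomposition the abstract describes, mesh, error estimator, adaptive strategy, can be sketched in a few lines (Python rather than the paper's C++, with a 1D toy estimator; all class and function names are illustrative):

```python
class Mesh:
    """Toy 1D mesh: a list of (a, b) cells that can be bisected."""
    def __init__(self, cells):
        self.cells = list(cells)

    def refine(self, marked):
        new = []
        for i, (a, b) in enumerate(self.cells):
            if i in marked:                 # bisect marked cells
                m = 0.5 * (a + b)
                new += [(a, m), (m, b)]
            else:
                new.append((a, b))
        self.cells = new

def error_estimator(cells, f):
    # a-posteriori surrogate: variation of f over each cell
    return [abs(f(b) - f(a)) for a, b in cells]

def adapt(mesh, f, tol=0.1, max_iter=10):
    """Adaptive strategy: solve/estimate -> mark -> refine, until done."""
    for _ in range(max_iter):
        eta = error_estimator(mesh.cells, f)
        marked = {i for i, e in enumerate(eta) if e > tol}
        if not marked:
            break
        mesh.refine(marked)
    return mesh

m = adapt(Mesh([(0.0, 1.0)]), f=lambda x: x * x, tol=0.1)
```

Swapping in a different estimator or marking strategy touches only one module, which is the extensibility argument made for the IP example.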

  7. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  8. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
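    Flow parsing amounts to fitting a global (self-motion) flow field to the background and subtracting its prediction at the probe location. A linear least-squares sketch with a translation-plus-expansion flow model (the model, point sets, and values are illustrative, not the authors' stimuli):

```python
import numpy as np

def parse_object_flow(points, flows, probe_xy, probe_flow):
    """Fit a self-motion flow model flow(p) = t + s*p to background
    vectors, then subtract its prediction at the probe position to
    estimate scene-relative object motion."""
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    n = len(points)
    A = np.zeros((2 * n, 3))          # unknowns: (t_x, t_y, s)
    b = flows.reshape(-1)
    A[0::2, 0] = 1
    A[1::2, 1] = 1
    A[0::2, 2] = points[:, 0]
    A[1::2, 2] = points[:, 1]
    t_x, t_y, s = np.linalg.lstsq(A, b, rcond=None)[0]
    predicted = np.array([t_x, t_y]) + s * np.asarray(probe_xy, float)
    return np.asarray(probe_flow, float) - predicted

# background consistent with forward self-motion (pure expansion, s = 0.1)
pts = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1], [2, 2]])
bg = 0.1 * pts
obj = parse_object_flow(pts, bg, probe_xy=[1, 1], probe_flow=[0.1, 0.4])
```

The residual is the scene-relative component the observer should perceive; depth cues matter because the expansion rate predicted at the probe depends on where in depth the probe sits.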

  9. Reconstruction and analysis of shapes from 3D scans

    NARCIS (Netherlands)

    ter Haar, F.B.

    2009-01-01

    In this thesis we use 3D laser range scans for the acquisition, reconstruction, and analysis of 3D shapes. 3D laser range scanning has proven to be a fast and effective way to capture the surface of an object in a computer. Thousands of depth measurements represent a part of the surface geometry as

  10. 3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; YANG Ru-qing

    2009-01-01

    The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. The SUSAN Edge based Scale Invariant Feature (SESIF) algorithm based 3D object visual tracking is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF and Lie group based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.

  11. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    Science.gov (United States)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
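    While the paper's contribution is the unstructured Delaunay/Voronoi dual-mesh generalization, the leapfrog idea itself is easiest to see on the classical structured 1D Yee grid, where E and H live on staggered points and are updated in alternating half-steps (normalized units; this is the textbook scheme, not the paper's):

```python
import numpy as np

# Classical 1D Yee leapfrog (vacuum, c = 1, dx = 1, Courant number 0.5):
nx, steps = 200, 300
dt = 0.5
E = np.zeros(nx)          # E on integer grid points
H = np.zeros(nx - 1)      # H staggered half a cell between them
E[nx // 2] = 1.0          # initial impulse in the middle

for _ in range(steps):
    H += dt * (E[1:] - E[:-1])          # update H from the curl of E
    E[1:-1] += dt * (H[1:] - H[:-1])    # update E from the curl of H
```

The staggering is what makes the scheme divergence-free and explicit; the dual-mesh version keeps exactly this E-on-primal, H-on-dual pairing on unstructured cells.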

  12. Algorithm and System of Scanning Color 3D Objects

    Institute of Scientific and Technical Information of China (English)

    许智钦; 孙长库; 郑义忠

    2002-01-01

    This paper presents a complete system for scanning the geometry and texture of a large 3D object; automatic registration is then performed to obtain a whole realistic 3D model. The system is composed of one line-strip laser and one color CCD camera. The scanned object is pictured twice by the color CCD camera: first, the texture of the scanned object is captured by the camera; then the 3D information of the scanned object is obtained from the laser plane equations. This paper presents a practical way to implement this three-dimensional measuring method and the automatic registration of a large 3D object, and good results are obtained after experimental verification.
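    The "3D information from laser plane equations" step is a ray-plane intersection: back-project the laser-lit pixel to a camera ray and intersect it with the calibrated laser plane. A minimal sketch (camera at the origin, plane given as n·x = d; the numbers are illustrative, not this system's calibration):

```python
import numpy as np

def triangulate(pixel_ray, plane):
    """Intersect the camera ray of a laser-lit pixel with the calibrated
    laser plane n.x = d; the ray passes through the camera origin."""
    n, d = np.asarray(plane[0], float), float(plane[1])
    ray = np.asarray(pixel_ray, float)
    t = d / n.dot(ray)      # ray parameter at the plane
    return t * ray          # 3D point in camera coordinates

# laser plane x = 0.5, i.e. n = (1, 0, 0), d = 0.5; a pixel ray direction
p = triangulate([1.0, 0.2, 1.0], ([1.0, 0.0, 0.0], 0.5))
```

Sweeping the laser stripe across the object repeats this intersection per lit pixel, building the depth map that is later fused with the colour image.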

  13. Unmanned Aerial Vehicle-Based Photogrammetry Using Automatic Capture and Point of Interest for Object Reconstruction of Large-Scale 3D Architecture

    Directory of Open Access Journals (Sweden)

    Andria K. Wahyudi

    2016-10-01

    Full Text Available Large-scale architectural objects are a complicated target for 3D reconstruction. A UAV is a common choice for taking RAW pictures from the air, but manual control of an Unmanned Aerial Vehicle (UAV) makes it difficult to perform picture taking and flight control simultaneously. This paper discusses the use of a UAV for 3D reconstruction using photogrammetry techniques. The study uses a Point Of Interest (POI) to mark the object point to be reconstructed and to shoot automatically; with an existing SDK, the UAV can be monitored from an Android smartphone. The investigation confirmed that the POI and auto-capture techniques can generate models with high precision and good texture quality while requiring only a short flight time, and the study shows optimal results in 3D reconstruction.

  14. Objective assessment and design improvement of a staring, sparse transducer array by the spatial crosstalk matrix for 3D photoacoustic tomography.

    Directory of Open Access Journals (Sweden)

    Philip Wong

    Full Text Available Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three-dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization.

  15. Objective assessment and design improvement of a staring, sparse transducer array by the spatial crosstalk matrix for 3D photoacoustic tomography.

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Raess, Avery; Carson, Jeffrey J L

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization.
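    The crosstalk-based figures-of-merit compare the normalized crosstalk matrix against the identity matrix: a perfectly decoupled system has zero error. A minimal sketch of three of the four metrics (RMSE, MAE, PSNR; the matrix values below are illustrative, not data from the paper):

```python
import math

def crosstalk_figures_of_merit(c):
    """RMSE, MAE and PSNR between a normalized crosstalk matrix `c`
    (list of lists, entries in [0, 1]) and the identity matrix."""
    n = len(c)
    se = ae = 0.0
    for i in range(n):
        for j in range(n):
            ident = 1.0 if i == j else 0.0
            diff = c[i][j] - ident
            se += diff * diff
            ae += abs(diff)
    mse = se / (n * n)
    rmse = math.sqrt(mse)
    mae = ae / (n * n)
    # peak value of the normalized crosstalk matrix is 1.0
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(1.0 / mse)
    return rmse, mae, psnr

# A mildly coupled 3x3 system: small off-diagonal leakage.
c = [[1.0, 0.1, 0.0],
     [0.1, 1.0, 0.1],
     [0.0, 0.1, 1.0]]
rmse, mae, psnr = crosstalk_figures_of_merit(c)
```

    As the off-diagonal leakage grows, RMSE and MAE rise and PSNR falls, which is how the design parameters (transducer arrangement, radius, angular coverage) can be ranked objectively.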

  16. Objective Assessment and Design Improvement of a Staring, Sparse Transducer Array by the Spatial Crosstalk Matrix for 3D Photoacoustic Tomography

    Science.gov (United States)

    Kosik, Ivan; Raess, Avery

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177

  17. Object classification from RGB-D images using depth context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Context cue is important in object classification. By embedding the depth context cue of image attributes into kernel descriptors, we propose a new set of depth image descriptors called depth context kernel descriptors (DCKD) for RGB-D based object classification. The motivation of DCKD is to use the depth consistency of image attributes defined within a neighboring region to improve the robustness of descriptor matching in the kernel space. Moreover, a novel joint spatial-depth pooling (JSDP) scheme, which further partitions image sub-regions using the depth cue and pools features in both 2D image...

  18. Monocular Depth Perception and Robotic Grasping of Novel Objects

    Science.gov (United States)

    2009-06-01

    in which local features were insufficient and more contextual information had to be used. Examples include image denoising [92], stereo vision [155]... partially visible in the image (e.g., Fig. 3.2, row 2: tree on the left). For a point lying on such an object, most of the point's neighbors lie outside... proved the equivalence of force-closure analysis with the study of the equilibria of an ordinary differential equation. All of these methods focused

  19. Comparison of publicly available Moho depth and crustal thickness grids with newly derived grids by 3D gravity inversion for the High Arctic region.

    Science.gov (United States)

    Lebedeva-Ivanova, Nina; Gaina, Carmen; Minakov, Alexander; Kashubin, Sergey

    2016-04-01

    We derived Moho depth and crustal thickness for the High Arctic region by a 3D forward and inverse gravity modelling method in the spectral domain (Minakov et al. 2012), using a lithosphere thermal gravity anomaly correction (Alvey et al., 2008), a vertical density variation for the sedimentary layer, and lateral crustal density variation. Recently updated grids of bathymetry (Jakobsson et al., 2012), gravity anomaly (Gaina et al., 2011) and dynamic topography (Spasojevic & Gurnis, 2012) were used as input data for the algorithm. The TeMAr sedimentary thickness grid (Petrov et al., 2013) was modified according to the most recently published seismic data, re-gridded, and utilized as input data. Other input parameters for the algorithm were calibrated using crustal-scale seismic profiles. The results are numerically compared with publicly available grids of Moho depth and crustal thickness for the High Arctic region (the CRUST 1 and GEMMA global grids; the deep Arctic Ocean grids by Glebovsky et al., 2013) and with crustal-scale seismic profiles. The global grids provide a coarser resolution of 0.5-1.0 geographic degrees and are not focused on the High Arctic region. Our grids better capture all main features of the region and show smaller error relative to the seismic crustal profiles than the CRUST 1 and GEMMA grids. Results of 3D gravity modelling by Glebovsky et al. (2013) with a separated-geostructures approach also show a good fit with seismic profiles; however, those grids cover the deep part of the Arctic Ocean only. Alvey A, Gaina C, Kusznir NJ, Torsvik TH (2008). Integrated crustal thickness mapping and plate reconstructions for the high Arctic. Earth Planet Sci Lett 274:310-321. Gaina C, Werner SC, Saltus R, Maus S (2011). Circum-Arctic mapping project: new magnetic and gravity anomaly maps of the Arctic. Geol Soc Lond Mem 35, 39-48. Glebovsky V.Yu., Astafurova E.G., Chernykh A.A., Korneva M.A., Kaminsky V.D., Poselov V.A. (2013). Thickness of the Earth's crust in the

  20. Volumetric 3D display using a DLP projection engine

    Science.gov (United States)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system, allowing it to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  1. Satellite and Surface Data Synergy for Developing a 3D Cloud Structure and Properties Characterization Over the ARM SGP. Stage 1: Cloud Amounts, Optical Depths, and Cloud Heights Reconciliation

    Science.gov (United States)

    Genkova, I.; Long, C. N.; Heck, P. W.; Minnis, P.

    2003-01-01

    One of the primary Atmospheric Radiation Measurement (ARM) Program objectives is to obtain measurements applicable to the development of models for better understanding of radiative processes in the atmosphere. We address this goal by building a three-dimensional (3D) characterization of the cloud structure and properties over the ARM Southern Great Plains (SGP). We take the approach of juxtaposing the cloud properties as retrieved from independent satellite and ground-based retrievals, and looking at the statistics of the cloud field properties. Once these retrievals are well understood, they will be used to populate the 3D characterization database. As a first step we determine the relationship between surface fractional sky cover and satellite viewing-angle-dependent cloud fraction (CF). We elaborate on the agreement by intercomparing optical depth (OD) datasets from satellite and ground using available retrieval algorithms, in relation to the CF, cloud height, multi-layer cloud presence, and solar zenith angle (SZA). For the SGP Central Facility, where output from the active remote sensing cloud layer (ARSCL) value-added product (VAP) is available, we study the uncertainty of satellite-estimated cloud heights and evaluate the impact of this uncertainty for radiative studies.

  2. Developing a 3-D Digital Heritage Ecosystem: from object to representation and the role of a virtual museum in the 21st century

    Directory of Open Access Journals (Sweden)

    Fred Limp

    2011-07-01

    Full Text Available This article addresses the application of high-precision 3-D recording methods to heritage materials (portable objects), the technical processes involved, the various digital products and the role of 3-D recording in larger questions of scholarship and public interpretation. It argues that the acquisition and creation of digital representations of heritage must be part of a comprehensive research infrastructure (a digital ecosystem) that focuses on all of the elements involved, including (a) recording methods and metadata, (b) digital object discovery and access, (c) citation of digital objects, (d) analysis and study, (e) digital object reuse and repurposing, and (f) the critical role of a national/international digital archive. The article illustrates these elements and their relationships using two case studies that involve similar approaches to the high-precision 3-D digital recording of portable archaeological objects, from a number of late pre-Columbian villages and towns in the mid-central US (c. 1400 CE) and from the Egyptian site of Amarna, the Egyptian Pharaoh Akhenaten's capital (c. 1300 BCE).

  3. The effect of monocular depth cues on the detection of moving objects by moving observers

    National Research Council Canada - National Science Library

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-01-01

    ... and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects...

  4. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation

    Science.gov (United States)

    Cao, Yuanzhouhan; Shen, Chunhua; Shen, Heng Tao

    2017-02-01

    Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.
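    The multi-task training scheme pairs a semantic-label classification loss with a depth-value regression loss. A minimal sketch of that combined objective for a single pixel (the toy numbers, loss choices, and weighting factor are illustrative assumptions, not the authors' exact formulation):

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class under predicted probs."""
    return -math.log(probs[label])

def multi_task_loss(class_probs, true_label, pred_depth, true_depth, lam=0.5):
    """Joint loss: semantic cross-entropy + lam * squared depth error."""
    ce = cross_entropy(class_probs, true_label)
    depth_se = (pred_depth - true_depth) ** 2
    return ce + lam * depth_se

# One pixel: 3 semantic classes, predicted depth 2.3 m vs ground truth 2.0 m.
loss = multi_task_loss([0.7, 0.2, 0.1], 0, 2.3, 2.0)
```

    Training the shared network to minimize this sum is what couples the two tasks, so that depth supervision regularizes the semantic features and vice versa.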

  5. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2015-07-01

    Full Text Available Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimation of moving objects. The results show that moving objects can be correctly detected, and accurately tracked over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
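    Each tracked object in such a bank is typically a small state-space filter. The sketch below is a minimal 1D constant-velocity Kalman filter (the process/measurement noise values and motion model are illustrative assumptions, not the paper's tuning):

```python
import random

def kalman_cv_track(zs, dt=0.1, q=0.01, r=0.25):
    """Track 1D position measurements `zs` with a constant-velocity
    Kalman filter. State: [position, velocity]; returns the position
    estimates after each update."""
    x = [zs[0], 0.0]                      # initial state
    p = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    est = []
    for z in zs:
        # predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        # update with a position measurement z (H = [1, 0])
        s = p[0][0] + r                   # innovation covariance
        k = [p[0][0] / s, p[1][0] / s]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        p = [[(1 - k[0]) * p[0][0], (1 - k[0]) * p[0][1]],
             [p[1][0] - k[1] * p[0][0], p[1][1] - k[1] * p[0][1]]]
        est.append(x[0])
    return est

# A target moving at 2 m/s observed with noisy position measurements.
random.seed(1)
truth = [2.0 * 0.1 * t for t in range(100)]
zs = [pos + random.gauss(0.0, 0.5) for pos in truth]
est = kalman_cv_track(zs)
```

    A tracker bank runs one such filter per detected object; the non-holonomic constraint mentioned in the abstract would enter as an additional restriction on the admissible state evolution.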

  6. The Scheme and the Preliminary Test of Object-Oriented Simultaneous 3D Geometric and Physical Change Detection Using GIS-guided Knowledge

    Directory of Open Access Journals (Sweden)

    Chang LI

    2013-07-01

    Full Text Available Current methods of remotely sensed image change detection almost always assume that the DEM of the surface objects does not change. However, for geological disaster areas (such as landslides, mudslides and avalanches), this assumption does not hold, and the traditional approach is being challenged. Thus, the theory of change detection urgently needs to be extended from two dimensions (2D) to three dimensions (3D). This paper presents an innovative scheme for change detection: object-oriented simultaneous 3D geometric and physical change detection (OOS3DGPCD) using GIS-guided knowledge. This aim is reached by realizing the following specific objectives: (a) to develop a set of automatic multi-feature matching and registration methods; (b) to propose an approach for simultaneously detecting 3D geometric and physical attribute changes based on an object-oriented strategy; (c) to develop a quality control method for OOS3DGPCD; (d) to implement the newly proposed OOS3DGPCD method by designing algorithms and developing a prototype system. For aerial remotely sensed images of YingXiu, Wenchuan, preliminary experimental results of 3D change detection are shown to verify the approach.

  7. 3D and Education

    Science.gov (United States)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products, and learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  8. Designing and using prior data in Ankylography: Recovering a 3D object from a single diffraction intensity pattern

    CERN Document Server

    Osherovich, Eliyahu; Eldar, Yonina C; Segev, Mordechai

    2012-01-01

    We present a novel method for Ankylography: three-dimensional structure reconstruction from a single-shot diffraction intensity pattern. Our approach allows reconstruction of objects containing many more details than previously demonstrated, in a faster and more accurate fashion.

  9. Objective 3D surface evaluation of intracranial electrophysiologic correlates of cerebral glucose metabolic abnormalities in children with focal epilepsy.

    Science.gov (United States)

    Jeong, Jeong-Won; Asano, Eishi; Kumar Pilli, Vinod; Nakai, Yasuo; Chugani, Harry T; Juhász, Csaba

    2017-03-21

    To determine the spatial relationship between 2-deoxy-2-[(18)F]fluoro-D-glucose (FDG) metabolic and intracranial electrophysiological abnormalities in children undergoing two-stage epilepsy surgery, statistical parametric mapping (SPM) was used to correlate hypo- and hypermetabolic cortical regions with ictal and interictal electrocorticography (ECoG) changes mapped onto the brain surface. Preoperative FDG-PET scans of 37 children with intractable epilepsy (31 with non-localizing MRI) were compared with age-matched pseudo-normal pediatric control PET data. Hypo-/hypermetabolic maps were transformed to the 3D-MRI brain surface to compare the locations of metabolic changes with electrode coordinates of the ECoG-defined seizure onset zone (SOZ) and interictal spiking. While hypometabolic clusters showed a good agreement with the SOZ on the lobar level (sensitivity/specificity = 0.74/0.64), detailed surface-distance analysis demonstrated that large portions of the ECoG-defined SOZ and interictal spiking area were located at least 3 cm beyond hypometabolic regions with the same statistical threshold (sensitivity/specificity = 0.18-0.25/0.94-0.90 for overlap 3-cm distance); for a lower threshold, sensitivity for SOZ at 3 cm increased to 0.39 with a modest compromise of specificity. Performance of FDG-PET SPM was slightly better in children with smaller as compared with widespread SOZ. The results demonstrate that SPM utilizing age-matched pseudocontrols can reliably detect the lobe of seizure onset. However, the spatial mismatch between metabolic and EEG epileptiform abnormalities indicates that a more complete SOZ detection could be achieved by extending intracranial electrode coverage at least 3 cm beyond the metabolic abnormality. Considering that the extent of feasible electrode coverage is limited, localization information from other modalities is particularly important to optimize grid coverage in cases of large hypometabolic cortex. Hum Brain Mapp, 2017.
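    Sensitivity/specificity pairs like those quoted above reduce to counts of electrodes inside and outside the detected region. A minimal sketch of that computation (the electrode counts below are made up for illustration and merely reproduce the 0.74/0.64 lobar-level figures):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 26 of 35 seizure-onset electrodes fall inside the hypometabolic
# region (TP) and 9 outside (FN); 58 of 90 non-onset electrodes fall
# outside (TN) and 32 inside (FP). (Counts are illustrative.)
sens, spec = sensitivity_specificity(tp=26, fp=32, tn=58, fn=9)
# -> sensitivity ~ 0.74, specificity ~ 0.64
```

    Varying the statistical threshold of the SPM map trades one quantity against the other, which is the trade-off described in the abstract.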

  10. Coherent digital demodulation of single-camera N-projections for 3D-object shape measurement: co-phased profilometry.

    Science.gov (United States)

    Servin, M; Garnica, G; Estrada, J C; Quiroga, A

    2013-10-21

    Fringe projection profilometry is a well-known technique to digitize 3-dimensional (3D) objects, and it is widely used in robotic vision and industrial inspection. Probably the single most important problem in single-camera, single-projection profilometry is the shadows and specular reflections generated by the 3D object under analysis. Here a single camera along with N fringe projections is digitally coherently demodulated in a single step, solving the shadows and specular reflections problem. Co-phased profilometry coherently phase-demodulates a whole set of N fringe-pattern perspectives in a single demodulation and unwrapping process. The mathematical theory behind digitally co-phasing N fringe patterns is similar to that of co-phasing a segmented N-mirror telescope.
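    Demodulating N phase-shifted fringe samples at a pixel is commonly done with the synchronous (N-step least-squares) estimator. The sketch below applies that standard formula to one pixel with synthetic intensities (this illustrates generic N-step demodulation, not necessarily the authors' exact co-phasing algorithm):

```python
import math

def n_step_phase(intensities):
    """Recover the fringe phase at one pixel from N phase-shifted
    intensity samples I_n = a + b*cos(phi + 2*pi*n/N), with N >= 3."""
    n_steps = len(intensities)
    s = sum(i * math.sin(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    return math.atan2(-s, c)   # wrapped phase in (-pi, pi]

# Synthesize 4 phase-shifted samples for a pixel with true phase 1.2 rad.
true_phi = 1.2
samples = [0.5 + 0.4 * math.cos(true_phi + 2 * math.pi * n / 4)
           for n in range(4)]
phi = n_step_phase(samples)
```

    Applying this per pixel yields the wrapped phase map; spatial unwrapping then converts it to a continuous height map of the object.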

  11. An in-depth spectroscopic examination of molecular bands from 3D hydrodynamical model atmospheres. I. Formation of the G-band in metal-poor dwarf stars

    Science.gov (United States)

    Gallagher, A. J.; Caffau, E.; Bonifacio, P.; Ludwig, H.-G.; Steffen, M.; Spite, M.

    2016-09-01

    Context. Recent developments in the three-dimensional (3D) spectral synthesis code Linfor3D have meant that for the first time, large spectral wavelength regions, such as molecular bands, can be synthesised with it in a short amount of time. Aims: A detailed spectral analysis of the synthetic G-band for several dwarf turn-off-type 3D atmospheres (5850 ≲ Teff [K] ≲ 6550, 4.0 ≤ log g ≤ 4.5, −3.0 ≤ [Fe/H] ≤ −1.0) was conducted, under the assumption of local thermodynamic equilibrium. We also examine carbon and oxygen molecule formation at various metallicity regimes and discuss the impact it has on the G-band. Methods: Using a qualitative approach, we describe the different behaviours between the 3D atmospheres and the traditional one-dimensional (1D) atmospheres and how the different physics involved inevitably leads to abundance corrections, which differ over varying metallicities. Spectra computed in 1D were fit to every 3D spectrum to determine the 3D abundance correction. Results: Early analysis revealed that the CH molecules that make up the G-band exhibited an oxygen abundance dependency; a higher oxygen abundance leads to weaker CH features. Nitrogen abundances showed zero impact on CH formation. The 3D corrections are also stronger at lower metallicity. Analysis of the 3D corrections to the G-band allows us to assign estimations of the 3D abundance correction to most dwarf stars presented in the literature. Conclusions: The 3D corrections suggest that A(C) in carbon-enhanced metal-poor (CEMP) stars with high A(C) would remain unchanged, but would decrease in CEMP stars with lower A(C). It was found that the C/O ratio is an important parameter for the G-band in 3D. Additional testing confirmed that the C/O ratio is an equally important parameter for OH transitions under 3D. This presents a clear interrelation between the carbon and oxygen abundances in 3D atmospheres through their molecular species, which is not seen in 1D.

  12. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    Science.gov (United States)

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. © 2012 American Association of Anatomists.

  13. Modeling near-field radiative heat transfer from sharp objects using a general 3d numerical scattering technique

    CERN Document Server

    McCauley, Alexander P; Krüger, Matthias; Johnson, Steven G

    2011-01-01

    We examine the non-equilibrium radiative heat transfer between a plate and finite cylinders and cones, making the first accurate theoretical predictions for the total heat transfer and the spatial heat flux profile for three-dimensional compact objects including corners or tips. We find qualitatively different scaling laws for conical shapes at small separations, and in contrast to a flat or slightly curved object, a sharp cone exhibits a local minimum in the spatially resolved heat flux directly below the tip. The method we develop, in which a scattering-theory formulation of thermal transfer is combined with a boundary-element method for computing scattering matrices, can be applied to three-dimensional objects of arbitrary shape.

  14. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    Science.gov (United States)

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  16. Spatial Carrier Bi-frequency Fourier Transform Profilometry for the 3-D Shape Measurement of Object with Discontinuous Height Steps

    Institute of Scientific and Technical Information of China (English)

    ZHONG Jingang; DI Hongwei; ZHANG Yonglin

    2000-01-01

    The combination of a shearing interferometer, Fourier-transform profilometry, and phase unwrapping by a lookup-table method has resulted in a new and more powerful method of measuring surface profiles. The technique permits the three-dimensional shape measurement of objects that have discontinuous height steps. Experimental results have demonstrated the validity of the principle.

  17. Instantaneous 3D EEG Signal Analysis Based on Empirical Mode Decomposition and the Hilbert–Huang Transform Applied to Depth of Anaesthesia

    Directory of Open Access Journals (Sweden)

    Mu-Tzu Shih

    2015-02-01

    Depth of anaesthesia (DoA) is an important measure for assessing the degree to which the central nervous system of a patient is depressed by a general anaesthetic agent, depending on the potency and concentration with which anaesthesia is administered during surgery. We can monitor the DoA by observing the patient’s electroencephalography (EEG) signals during the surgical procedure. Typically, high-frequency EEG signals indicate the patient is conscious, while low-frequency signals mean the patient is in a general anaesthetic state. If the anaesthetist is able to observe the instantaneous frequency changes of the patient’s EEG signals during surgery, this can help to better regulate and monitor DoA, reducing surgical and post-operative risks. This paper describes an approach towards the development of a 3D real-time visualization application which can show the instantaneous frequency and instantaneous amplitude of EEG simultaneously by using empirical mode decomposition (EMD) and the Hilbert–Huang transform (HHT). HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMFs). The Hilbert spectral analysis method is then used to obtain instantaneous frequency data. The HHT provides a new method of analyzing non-stationary and nonlinear time series data. We investigate this approach by analyzing EEG data collected from patients undergoing surgical procedures. The results show that the EEG differences between three distinct surgical stages computed by using sample entropy (SampEn) are consistent with the expected differences between these stages based on the bispectral index (BIS), which has been shown to be a quantifiable measure of the effect of anaesthetics on the central nervous system. Also, the proposed filtering approach is more effective than the standard filtering method in filtering out signal noise, resulting in more consistent results than those provided by the BIS. The proposed approach is therefore
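The Hilbert step of the HHT described above can be sketched as follows: once EMD has produced a mono-component IMF, its analytic signal yields an instantaneous phase whose derivative is the instantaneous frequency. A synthetic 10 Hz tone stands in for a real EEG IMF here, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                             # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
imf = np.sin(2 * np.pi * 10.0 * t)     # stand-in for one EEG IMF from EMD

analytic = hilbert(imf)                # analytic signal x + i*H[x]
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # instantaneous Hz

mid = inst_freq[100:-100]              # ignore edge effects of the transform
```

Away from the window edges the estimate sits on the tone's 10 Hz; for real EEG, each IMF gets this treatment and the (time, frequency, amplitude) triples feed the 3D visualization.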

  18. An in-depth spectroscopic examination of molecular bands from 3D hydrodynamical model atmospheres I. Formation of the G-band in metal-poor dwarf stars

    CERN Document Server

    Gallagher, A J; Bonifacio, P; Ludwig, H -G; Steffen, M; Spite, M

    2016-01-01

    Recent developments in the three-dimensional (3D) spectral synthesis code Linfor3D have meant that, for the first time, large spectral wavelength regions, such as molecular bands, can be synthesised with it in a short amount of time. A detailed spectral analysis of the synthetic G-band for several dwarf turn-off-type 3D atmospheres (5850 <= T_eff [K] <= 6550, 4.0 <= log g <= 4.5, -3.0 <= [Fe/H] <= -1.0) was conducted, under the assumption of local thermodynamic equilibrium. We also examine carbon and oxygen molecule formation at various metallicity regimes and discuss the impact it has on the G-band. Using a qualitative approach, we describe the different behaviours between the 3D atmospheres and the traditional one-dimensional (1D) atmospheres and how the different physics involved inevitably leads to abundance corrections, which differ over varying metallicities. Spectra computed in 1D were fit to every 3D spectrum to determine the 3D abundance correction. Early analysis revealed that the ...

  19. 3D multi-object segmentation of cardiac MSCT imaging by using a multi-agent approach.

    Science.gov (United States)

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernández, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed.

  20. An object-oriented 3D nodal finite element solver for neutron transport calculations in the Descartes project

    Energy Technology Data Exchange (ETDEWEB)

    Akherraz, B.; Lautard, J.J. [CEA Saclay, Dept. Modelisation de Systemes et Structures, Serv. d' Etudes des Reacteurs et de Modelisation Avancee (DMSS/SERMA), 91 - Gif sur Yvette (France); Erhard, P. [Electricite de France (EDF), Dir. de Recherche et Developpement, Dept. Sinetics, 92 - Clamart (France)

    2003-07-01

    In this paper we present two applications of the Nodal finite elements developed by Hennart and del Valle, first to three-dimensional Cartesian meshes and then to two-dimensional hexagonal meshes. This work has been achieved within the framework of the DESCARTES project, which is a co-development effort by the 'Commissariat a l'Energie Atomique' (CEA) and 'Electricite de France' (EDF) for the development of a toolbox for reactor core calculations based on object-oriented programming. The general structure of this project is based on the object-oriented method. By using a mapping technique proposed in Schneider's thesis and by del Valle and Mund, we show how this structure allows an easy implementation of the hexagonal case from the Cartesian case. The main attraction of this methodology is the possibility of a pin-by-pin representation by division of each lozenge into smaller ones. Furthermore, we will explore the use of non-structured quadrangles to treat the circular geometry within a hexagon. Nevertheless, in the hexagonal case, the acceleration of the internal iterations by DSA (Diffusion Synthetic Acceleration) or TSA remains to be implemented. (authors)

  1. 3D reconstruction of feature points on an object surface from a single image

    Institute of Scientific and Technical Information of China (English)

    霍炬; 仲小清; 杨明

    2011-01-01

    For the design and development of a vision sensor for ground testing, a method for 3D reconstruction of feature points on a large-scale object surface from a single image is proposed. By establishing a mathematical model of the 3D coordinates of a feature point on the object surface and using the spatial ray through the feature point, the 3D coordinates of the feature point can be determined from a single image. According to their characteristics, object surfaces are classified into three types: high-order surface, block plane, and block surface, and the corresponding localization methods are introduced. The accuracy of the three 3D reconstruction methods is compared in simulation experiments. With a measurement precision of 1/10 000 over a range of 8 000 mm × 8 000 mm, the proposed method is shown to be suitable for 3D reconstruction of feature points on large-scale object surfaces.

  2. CV/CAD Based 3D Object Geometric Modeling

    Institute of Scientific and Technical Information of China (English)

    邓世伟; 袁保宗

    2001-01-01

    In areas such as virtual reality, it is often necessary to build a virtual scene in the computer from an actual scene in the real world. In this paper, a technical approach for geometric modeling of 3D objects is proposed, which combines computer vision with CAD geometric modeling. Range images of 3D objects are obtained using encoded light stripe patterns, and then segmented by a range-image segmentation method based on the basic operations of mathematical morphology. The meaningful regions obtained by range-image segmentation correspond to the surface patches of the 3D object. The 3D surface patches are then reconstructed by an algebraic surface fitting method; the surface parameters are estimated by solving a generalized eigenvector problem. The geometric model of the 3D object is constructed from the reconstructed surface patches using the CAD geometric modeling tool GEOMOD. Preliminary experimental results on two mechanical parts are presented, which prove the proposed approach is feasible.

  3. Delaunay-Object-Dynamics: cell mechanics with a 3D kinetic and dynamic weighted Delaunay-triangulation.

    Science.gov (United States)

    Meyer-Hermann, Michael

    2008-01-01

    Mathematical methods in Biology are of increasing relevance for understanding the control and the dynamics of biological systems with medical relevance. In particular, agent-based methods become more and more important because of rapidly increasing computational power, which makes even large systems accessible. An overview of different mathematical methods used in Theoretical Biology is provided, and a novel agent-based method for cell mechanics based on Delaunay triangulations and Voronoi tessellations is explained in more detail: the Delaunay-Object-Dynamics method. It is claimed that the model combines physically realistic cell mechanics with a reasonable computational load. The power of the approach is illustrated with two examples, avascular tumor growth and genesis of lymphoid tissue in a cell-flow equilibrium.
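The triangulation backbone of the Delaunay-Object-Dynamics idea can be illustrated with a toy sketch: cell centers become vertices of a 3D Delaunay triangulation, and its edges define which cells may exchange contact forces. The weighting, kinetics, and force laws of the actual method are omitted, and all numbers are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
cells = rng.uniform(0.0, 1.0, size=(30, 3))   # 30 cell centers in 3D

tri = Delaunay(cells)                 # tetrahedralization of the centers

# Cells sharing a tetrahedron edge are treated as potential contacts.
neighbors = {i: set() for i in range(len(cells))}
for simplex in tri.simplices:
    for a in simplex:
        for b in simplex:
            if a != b:
                neighbors[int(a)].add(int(b))

mean_degree = sum(len(v) for v in neighbors.values()) / len(neighbors)
```

In the full method this neighbor structure is kinetic: as cells move, the triangulation is updated incrementally rather than rebuilt, which keeps the computational load reasonable.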

  4. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
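The angle cue described above lends itself to a simple sketch: for a purely translating observer, static points flow radially away from the focus of expansion (FOE), so a flow vector whose direction deviates from its radial line flags an independently moving object. The synthetic flow field, the 10° threshold, and all values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
foe = np.array([0.0, 0.0])            # focus of expansion (assumed at origin)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
pts[0] = np.array([0.5, 0.5])         # pin down the soon-to-move point

# Static scene: flow is radial from the FOE; speed varies with depth.
radial = pts - foe
flow = radial * rng.uniform(0.5, 2.0, size=(50, 1))
flow[0] = np.array([-0.3, 0.1])       # the independently moving object

def angle_between(u, v):
    cosang = np.sum(u * v, axis=-1) / (
        np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

deviation = angle_between(flow, radial)   # deviation from the radial line
moving = np.where(deviation > 10.0)[0]    # 10 deg threshold (assumed)
```

Note that the speed cue alone would not work here: the static points already vary in speed because of their differing depths, which is exactly the ambiguity the abstract describes.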

  5. 3D Spectroscopy of Local Luminous Compact Blue Galaxies: Kinematic Maps of a Sample of 22 Objects

    CERN Document Server

    Pérez-Gallego, J; Castillo-Morales, A; Gallego, J; Castander, F J; Garland, C A; Gruel, N; Pisano, D J; Zamorano, J

    2011-01-01

    We use three-dimensional optical spectroscopy observations of a sample of 22 local Luminous Compact Blue Galaxies (LCBGs) to create kinematic maps. By means of these, we classify the kinematics of these galaxies into three different classes: rotating disk (RD), perturbed rotation (PR), and complex kinematics (CK). We find 48% are RDs, 28% are PRs, and 24% are CKs. RDs show rotational velocities that range between $\sim50$ and $\sim200 km s^{-1}$, and dynamical masses that range between $\sim1\times10^{9}$ and $\sim3\times10^{10} M_{\odot}$. We also address the following two fundamental questions through the study of the kinematic maps: \emph{(i) What processes are triggering the current starburst in LCBGs?} We search our maps of the galaxy velocity fields for signatures of recent interactions and close companions that may be responsible for the enhanced star formation in our sample. We find 5% of objects show evidence of a recent major merger, 10% of a minor merger, and 45% of a companion. This argues in favor...

  6. How 3-D Movies Work

    Institute of Scientific and Technical Information of China (English)

    吕铁雄

    2011-01-01

    Difficulty: ★★★★☆ Word count: 450 Suggested reading time: 8 minutes. Most people see out of two eyes. This is a basic fact of humanity, but it’s what makes possible the illusion of depth that 3-D movies create. Human eyes are spaced about two inches apart, meaning that each eye gives the brain a slightly different perspective on the same object. The brain then uses this variance to quickly determine an object’s distance.

  7. Joint spatial-depth feature pooling for RGB-D object classification

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    …it for the improvement of robustness and discriminability of the feature representation by merging depth cues into feature pooling. The spatial pyramid model (SPM) has become the standard protocol to split the 2D image plane into sub-regions for feature pooling in RGB-D object classification. We argue that SPM may not be the optimal pooling scheme for RGB-D images, as it only pools features spatially and completely discards the depth topological information. Instead, we propose a novel joint spatial-depth pooling scheme (JSDP) which further partitions SPM using the depth cue and pools features simultaneously in the 2D image plane…
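A simplified reading of joint spatial-depth pooling can be sketched as follows: each local feature carries an image position, a depth value, and a descriptor, and every spatial cell of a 2×2 pyramid level is further split into near/far depth bins before max-pooling. The grid size, bin count, and data below are our own assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 200, 8
xy = rng.uniform(0.0, 1.0, size=(n, 2))      # normalized image positions
depth = rng.uniform(0.0, 1.0, size=n)        # normalized depth per feature
desc = rng.uniform(0.0, 1.0, size=(n, D))    # local feature descriptors

def jsdp_pool(xy, depth, desc, grid=2, depth_bins=2):
    """Max-pool descriptors per (spatial cell, depth bin), then concatenate."""
    cells = []
    for gx in range(grid):
        for gy in range(grid):
            for b in range(depth_bins):
                in_cell = ((xy[:, 0] * grid).astype(int) == gx) \
                    & ((xy[:, 1] * grid).astype(int) == gy) \
                    & ((depth * depth_bins).astype(int) == b)
                if in_cell.any():
                    cells.append(desc[in_cell].max(axis=0))
                else:
                    cells.append(np.zeros(desc.shape[1]))
    return np.concatenate(cells)

pooled = jsdp_pool(xy, depth, desc)          # 2*2 cells x 2 bins x 8 dims
```

Dropping the depth split (depth_bins=1) recovers plain SPM pooling for this level, which is the baseline the abstract argues against.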

  8. Contextual effects of scene on the visual perception of object orientation in depth.

    Directory of Open Access Journals (Sweden)

    Ryosuke Niimi

    We investigated the effect of background scene on the human visual perception of the depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or object.

  9. Object-based 3D geomodel with multiple constraints for early Pliocene fan delta in the south of Lake Albert Basin, Uganda

    Science.gov (United States)

    Wei, Xu; Lei, Fang; Xinye, Zhang; Pengfei, Wang; Xiaoli, Yang; Xipu, Yang; Jun, Liu

    2017-01-01

    The early Pliocene fan delta complex developed in the south of the Lake Albert Basin, which is located at the northern end of the western branch of the East African Rift System. The stratigraphy of this succession is composed of distributary channels, overbank deposits, mouthbars and lacustrine shales. Given the poor seismic quality and the few wells available, it is challenging to delineate the distribution area and patterns of the reservoir sands. Sedimentary forward simulation and basin analogues were applied to analyze the spatial distribution of the facies configuration, and a conceptual sedimentary model was then constructed by combining core, heavy mineral and palynology evidence. A 3D geological model of a 120 m thick stratigraphic succession was built using well logs and seismic surfaces based on the established sedimentary model. The facies modeling followed a hierarchical object-based approach conditioned to multiple trend constraints such as channel intensity, channel azimuth and channel width. Lacustrine shales were modeled as background facies and were in turn eroded by distributary channels, overbank deposits and mouthbars, respectively. At the same time, a body facies parameter was created to indicate the connectivity of the reservoir sands. The resultant 3D facies distributions showed that the distributary channels flowed from the eastern bounding fault to the west flank, with overbank deposits adhering to the fringes of the channels and mouthbars located at the channel ends. Furthermore, porosity and permeability were modeled using sequential Gaussian simulation (SGS), honoring core observations and petrophysical interpretation results. Although the poor seismic quality does not provide enough information on the fan delta sand distribution, a truly representative 3D geomodel can still be achieved. This paper highlights the integration of various data and the comprehensive steps of building a consistent, representative 3D geocellular fan delta model used for numerical simulation studies and field

  10. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    Science.gov (United States)

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-06-01

    Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue.

  11. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    Science.gov (United States)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smartphones, TabletPCs, portable gaming consoles, and PocketPCs.

  12. Infant manual performance during reaching and grasping for objects moving in depth

    Directory of Open Access Journals (Sweden)

    Erik Domellöf

    2015-08-01

    Few studies have investigated manual performance in infants when reaching and grasping for objects moving in directions other than across the fronto-parallel plane. The present preliminary study explored object-oriented behavioral strategies and side preference in 8- and 10-month-old infants during reaching and grasping for objects approaching in depth from three positions (midline, and 27° diagonally from the left and right). Effects of task constraint by using objects of three different types and two sizes were further examined for behavioral strategies and hand opening prior to grasping. Additionally, assessments of hand preference by a dedicated handedness test were performed. Regardless of object starting position, the 8-month-old infants predominantly displayed right-handed reaches for objects approaching in depth. In contrast, the older infants showed more varied strategies and performed more ipsilateral reaches in correspondence with the side of the approaching object. Conversely, 10-month-old infants were more successful than the younger infants in grasping the objects, independent of object starting position. The findings regarding infant hand use strategies when reaching and grasping for objects moving in depth are similar to those from earlier studies using objects moving along a horizontal path. Still, initiation times of reaching onset were generally long in the present study, indicating that the object motion paths seemingly affected how the infants perceived the intrinsic properties and spatial locations of the objects, possibly with an effect on motor planning. Findings are further discussed in relation to future investigations of infant reaching and grasping for objects approaching in depth.

  13. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed, and the possibility of satisfying eye accommodation is tested. Multi-focus refers to providing the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes, which can satisfy accommodation to displayed virtual objects within a defined depth. The monocular accommodation and the binocular convergence 3D effect of the system are tested, and proof of the satisfaction of accommodation and experimental results of binocular 3D fusion are given using the proposed 3D display systems.

  14. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    Science.gov (United States)

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip2Norm is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is portable to any platform.

  15. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    Science.gov (United States)

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitative and assessment process of the gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D gait analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6-Minute Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics in the sagittal plane revealed a statistically significant increase in the range of motion during hip flexion-extension. Our data suggest that the combined RAGT+PT program induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D GA ensures a fully objective rehabilitative program.

  16. Infant manual performance during reaching and grasping for objects moving in depth.

    Science.gov (United States)

    Domellöf, Erik; Barbu-Roth, Marianne; Rönnqvist, Louise; Jacquet, Anne-Yvonne; Fagard, Jacqueline

    2015-01-01

    Few studies have investigated manual performance in infants when reaching and grasping for objects moving in directions other than across the fronto-parallel plane. The present preliminary study explored object-oriented behavioral strategies and side preference in 8- and 10-month-old infants during reaching and grasping for objects approaching in depth from three positions (midline, and 27° diagonally from the left and right). Effects of task constraint by using objects of three different types and two sizes were further examined for behavioral strategies and hand opening prior to grasping. Additionally, assessments of hand preference by a dedicated handedness test were performed. Regardless of object starting position, the 8-month-old infants predominantly displayed right-handed reaches for objects approaching in depth. In contrast, the older infants showed more varied strategies and performed more ipsilateral reaches in correspondence with the side of the approaching object. Conversely, 10-month-old infants were more successful than the younger infants in grasping the objects, independent of object starting position. The findings regarding infant hand use strategies when reaching and grasping for objects moving in depth are similar to those from earlier studies using objects moving along a horizontal path. Still, initiation times of reaching onset were generally long in the present study, indicating that the object motion paths seemingly affected how the infants perceived the intrinsic properties and spatial locations of the objects, possibly with an effect on motor planning. Findings are further discussed in relation to future investigations of infant reaching and grasping for objects approaching in depth.

  17. Depth

    NARCIS (Netherlands)

    Koenderink, J.J.; Van Doorn, A.J.; Wagemans, J.

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the f

  19. Two Accelerating Techniques for 3D Reconstruction

    Institute of Scientific and Technical Information of China (English)

    刘世霞; 胡事民; 孙家广

    2002-01-01

    Automatic reconstruction of 3D objects from 2D orthographic views has been a major research issue in CAD/CAM. In this paper, two accelerating techniques to improve the efficiency of reconstruction are presented. First, some pseudo elements are removed by depth and topology information as soon as the wire-frame is constructed, which reduces the search space. Second, the proposed algorithm does not establish all possible surfaces in the process of generating 3D faces. The surfaces and edge loops are generated by using the relationship between the boundaries of 3D faces and their projections. This avoids the growth in combinatorial complexity of previous methods that have to check all possible pairs of 3D candidate edges.

  20. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
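For the stereo-vision route mentioned above, depth recovery reduces to triangulation: with focal length f (in pixels) and baseline B between the two CCD cameras, a disparity of d pixels maps to depth Z = f·B/d. The numbers below are illustrative assumptions, not values from the presentation.

```python
import numpy as np

f_px = 800.0          # focal length in pixels (assumed)
baseline_m = 0.12     # separation between the two cameras in meters (assumed)

# Matched features with these disparities (pixels) between the two views:
disparity_px = np.array([40.0, 20.0, 10.0, 5.0])
depth_m = f_px * baseline_m / disparity_px    # Z = f * B / d
```

The inverse relation also explains the work-distance trade-off the abstract raises: far surfaces produce small disparities, so depth resolution degrades quadratically with distance.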

  1. Development of three types of multifocus 3D display

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong Wook

    2011-06-01

    Three types of multi-focus (MF) 3D display are developed, and the possibility of providing a monocular depth cue is tested. Multi-focus refers to providing the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye, which can satisfy accommodation to displayed virtual objects within a defined depth. The first MF 3D display is developed via a laser scanning method, the second uses an LED array as the light source, and the third uses a slanted LED array for a full-parallax monocular depth cue. The full-parallax MF 3D display system gives an omnidirectional focus effect. The proposed 3D display systems offer a possible solution to the eye-fatigue problem that comes from the mismatch between the accommodation of each eye and the convergence of the two eyes. The monocular accommodation is tested, and proof of the satisfaction of full-parallax accommodation is given as a result of the proposed full-parallax MF 3D display system. We achieved the result that omnidirectional focus adjustment is possible via parallax images.

  2. Autostereoscopic 3D display system on the properties of both the expanded depth directional viewing zone and the removed structural crosstalk

    Science.gov (United States)

    Lee, Kwang-Hoon; Park, Anjin; Lee, Dong-Kil; Kim, Yang-Gyu; Jang, Wongun; Park, Youngsik

    2014-06-01

    To expand the suitable stereoscopic viewing zone in the depth direction and to remove the crosstalk induced by the structure of the existing slanted lenticular lens sheet, a Segmented Lenticular lens having Varying Optical Power (SL-VOP) is proposed.

  3. Effectiveness of Occluded Object Representations at Displaying Ordinal Depth Information in Augmented Reality

    Science.gov (United States)

    2013-03-01

    Effectiveness of Occluded Object Representations at Displaying Ordinal Depth Information in Augmented Reality. Mark A. Livingston, Naval Research Laboratory. ... effectively impossible with all icon styles, whereas in the case of partial overlap, the Ground Plane had a clear advantage. Keywords: augmented reality, human ...

  4. 3D printing for dummies

    CERN Document Server

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  5. Method for the determination of the modulation transfer function (MTF) in 3D x-ray imaging systems with focus on correction for finite extent of test objects

    Science.gov (United States)

    Schäfer, Dirk; Wiegert, Jens; Bertram, Matthias

    2007-03-01

    It is well known that rotational C-arm systems are capable of providing 3D tomographic X-ray images with much higher spatial resolution than conventional CT systems. Using flat X-ray detectors, the pixel size of the detector typically is in the range of the size of the test objects. Therefore, the finite extent of the "point" source cannot be neglected for the determination of the MTF. A practical algorithm has been developed that includes bias estimation and subtraction, averaging in the spatial domain, and correction for the frequency content of the imaged bead or wire. Using this algorithm, the wire and the bead method are analyzed for flat detector based 3D X-ray systems with the use of standard CT performance phantoms. Results on both experimental and simulated data are presented. It is found that the approximation of applying the analysis of the wire method to a bead measurement is justified within 3% accuracy up to the first zero of the MTF.
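The finite-extent correction described above can be sketched for the wire case: the measured MTF is the product of the system MTF and the test object's own frequency content, so the latter is divided out. This is a hedged illustration only; the actual algorithm also includes bias estimation and subtraction and spatial-domain averaging, and every number below is an assumption.

```python
import numpy as np

# Correct a measured MTF for the finite width of a test wire. For a wire of
# width w, the object's spectrum is |sinc(f*w)| (np.sinc(x) = sin(pi*x)/(pi*x));
# a bead would use the sphere's profile instead. Values are illustrative.
def correct_mtf_for_wire(freqs, measured_mtf, wire_width):
    obj = np.abs(np.sinc(freqs * wire_width))
    valid = obj > 0.05                      # avoid amplifying noise near zeros
    out = np.full_like(measured_mtf, np.nan, dtype=float)
    out[valid] = measured_mtf[valid] / obj[valid]
    return out

freqs = np.linspace(0.0, 2.0, 5)            # cycles/mm (assumed)
true_mtf = np.exp(-freqs)                   # assumed system MTF
measured = true_mtf * np.abs(np.sinc(freqs * 0.25))  # blur from a 0.25 mm wire
recovered = correct_mtf_for_wire(freqs, measured, 0.25)
```

Frequencies near the zeros of the object spectrum are masked rather than divided, since the correction would blow up measurement noise there.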

  6. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article contributes to the understanding of the distinctive characteristics of such a new medium, and identifies three strategies for designing 3-D projection installations: establishing space; interplay between the digital and the physical; and transformation of materiality. The principal empirical case, From Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010.

  7. Perceptual atoms: proximal motion vector-structures and the perception of object motion in depth

    Directory of Open Access Journals (Sweden)

    Hershenson Maurice

    2003-01-01

    Full Text Available A framework is proposed for analyzing the perception of motion in depth produced by simple proximal motion patterns of two to four points. The framework includes input structure, perceptual system constraints, and a depth scaling mechanism. The input is relational stimulation described by two proximal dimensions, orientation and separation, that can change or remain constant over the course of a motion pattern. Combinations of change or no-change in these dimensions yield four basic patterns of proximal stimulation: parallel, circular, perspective, and parallax. These primary patterns initiate automatic processing mechanisms - a unity constraint that treats pairs of points as connected and a rigidity constraint that treats the connection as rigid. When the constraints are activated by perspective or parallax patterns, the rigid connection between the points also appears to move in depth. A scaling mechanism governs the degree to which the objects move in depth in order to maintain the perceived rigidity. Although this framework is sufficient to explain perceptions produced by three- and four-point motion patterns in most cases, some patterns require additional configurational factors to supplement the framework. Nevertheless, perceptual qualities such as shrinking, stretching, bending, and folding emerge from the application of the same processing constraints and depth scaling factors as those that produce the perception of rigid objects moving in depth.

  8. Depth position detection for fast moving objects in sealed microchannel utilizing chromatic aberration.

    Science.gov (United States)

    Lin, Che-Hsin; Su, Shin-Yu

    2016-01-01

    This research reports a novel method for measuring the depth position of fast-moving objects inside a microfluidic channel based on the chromatic aberration effect. Two band-pass filters and two avalanche photodiodes (APDs) are used to rapidly detect the scattered light from the passing object. Chromatic aberration causes light of different wavelengths to focus at different depth positions in a microchannel. The intensity ratio of two selected bands, 430-470 nm (blue band) and 630-670 nm (red band), scattered from the passing object becomes a significant index of the depth of the passing object. Results show that microspheres with sizes of 20 μm and 2 μm can be resolved while using PMMA (Abbe number, V = 52) and BK7 (V = 64), respectively, as the chromatic aberration lens. The throughput of the developed system is greatly enhanced by the highly sensitive APDs used as optical detectors. Human erythrocytes are also successfully detected without fluorescence labeling at a high flow velocity of 2.8 mm/s. With this approach, quantitative measurement of the depth position of rapidly moving objects inside a sealed microfluidic channel can be achieved in a simple and low-cost way.
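The two-band intensity ratio can be sketched as a calibration lookup: the blue/red ratio is measured once against known depths, then interpolated at run time. The calibration points below are invented assumptions for illustration, not the paper's data.

```python
import numpy as np

# Assumed calibration curve: ratio of blue-band (430-470 nm) to red-band
# (630-670 nm) scattered intensity versus depth in the microchannel.
calib_ratio = np.array([0.4, 0.7, 1.0, 1.4, 2.0])        # I_blue / I_red
calib_depth_um = np.array([80.0, 60.0, 40.0, 20.0, 5.0])  # channel depth

def depth_from_band_ratio(i_blue, i_red):
    ratio = i_blue / i_red
    # np.interp requires ascending x; ratio ascends while depth descends here
    return np.interp(ratio, calib_ratio, calib_depth_um)

print(depth_from_band_ratio(1.0, 1.0))  # -> 40.0 (um, in this assumed table)
```

Because only a ratio is used, the index is insensitive to overall scattering strength, which varies with object size and position in the beam.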

  9. Studies of 3D-cloud optical depth from small to very large values, and of the radiation and remote sensing impacts of larger-drop clustering

    Energy Technology Data Exchange (ETDEWEB)

    Wiscombe, Warren [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Marshak, Alexander [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Knyazikhin, Yuri [Boston Univ., MA (United States); Chiu, Christine [Univ. of Maryland Baltimore County (UMBC), Baltimore, MD (United States)

    2007-05-04

    We have basically completed all the goals stated in the previous proposal and published or submitted journal papers thereon, the only exception being First-Principles Monte Carlo which has taken more time than expected. We finally finished the comprehensive book on 3D cloud radiative transfer (edited by Marshak and Davis and published by Springer), with many contributions by ARM scientists; this book was highlighted in the 2005 ARM Annual Report. We have also completed (for now) our pioneering work on new models of cloud drop clustering based on ARM aircraft FSSP data, with applications both to radiative transfer and to rainfall. This clustering work was highlighted in the FY07 “Our Changing Planet” (annual report of the US Climate Change Science Program). Our group published 22 papers, one book, and 5 chapters in that book, during this proposal period. All are listed at the end of this section. Below, we give brief highlights of some of those papers.

  10. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. 
    Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
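The singular-value analysis mentioned above can be sketched with a toy imaging operator: count the singular values that rise above a noise floor and treat that as the number of reliably measurable object components. The operator, its size, and the threshold below are assumptions for illustration, not the system's actual model.

```python
import numpy as np

# Toy imaging operator H mapping object voxels to measurements, e.g. a
# 15-element detection scheme observing 64 voxels. A real H would come from
# the photoacoustic forward model; a random matrix stands in here.
rng = np.random.default_rng(0)
n_meas, n_vox = 15, 64
H = rng.standard_normal((n_meas, n_vox))

s = np.linalg.svd(H, compute_uv=False)   # singular spectrum of the operator
noise_floor = 0.01 * s[0]                # assumed 1% relative noise floor
n_measurable = int(np.sum(s > noise_floor))
print(n_measurable)                      # bounded by min(n_meas, n_vox) = 15
```

With far fewer measurements than voxels, the rank bound (here 15) is what motivates sparsity-preferring reconstructions such as the l1-norm technique in the abstract.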

  11. 3D Wire 2015

    DEFF Research Database (Denmark)

    Jordi, Moréton; F, Escribano; J. L., Farias

    This document is a general report on the implementation of gamification at the 3D Wire 2015 event. As the second gamification experience at this event, we have delved more deeply into the previous objectives (attracting the public to exhibition areas less frequented in previous years, and enhancing networking) and ha... ..., improves socialization and networking, improves media impact, improves the fun factor, and improves encouragement of the production team.

  12. Depth Value Pre-Processing for Accurate Transfer Learning Based RGB-D Object Recognition

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal

    2017-01-01

    of an existing deep-learning-based RGB-D object recognition model, namely the FusionNet proposed by Eitel et al. First, we show that encoding the depth values as colorized surface normals is beneficial when the model is initialized with weights learned from training on ImageNet data. Additionally, we show...

  13. Auto convergence for stereoscopic 3D cameras

    Science.gov (United States)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer-generated content is typically viewed at a close distance, which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by automatically adjusting the depth of the scene. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether the maximum and minimum disparity limits would be exceeded after auto convergence; if so, further adjustments are made to satisfy the safety limits. Finally, the desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It was tested using an OMAP4 embedded prototype stereo 3-D camera and significantly improves 3-D viewing comfort.
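The disparity-estimation step described above, correlating the vertical projections of the two images, can be sketched as follows. The synthetic bar image and the search range are assumptions; the real algorithm operates on camera frames and adds the convergence-point selection and safety checks.

```python
import numpy as np

# Estimate a global horizontal disparity by collapsing each image to a 1-D
# vertical projection (column sums) and finding the shift that best aligns
# the two projections (maximum circular correlation).
def projection_disparity(left, right, max_shift=20):
    pl = left.sum(axis=0).astype(float)
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean()
    pr -= pr.mean()
    best, best_score = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        score = np.dot(pl, np.roll(pr, d))   # correlation at shift d
        if score > best_score:
            best, best_score = d, score
    return best

img = np.zeros((40, 80))
img[:, 30:35] = 1.0                          # a bright vertical bar
left, right = img, np.roll(img, -6, axis=1)  # right view shifted left by 6 px
print(projection_disparity(left, right))     # -> 6
```

Collapsing to projections reduces a 2-D correlation to a 1-D one, which is what makes the approach cheap enough for real-time embedded use.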

  14. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  15. Human Object Recognition Using Colour and Depth Information from an RGB-D Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Benjamin John Southwell

    2013-03-01

    Full Text Available Human object recognition and tracking are important in robotics and automation. The Kinect sensor and its SDK have provided a reliable human tracking solution where a constant line of sight is maintained. However, if the human object is lost from sight during tracking, the existing method cannot recover and resume tracking the previous object correctly. In this paper, a human recognition method is developed based on colour and depth information provided by any RGB-D sensor. In particular, the method first introduces a mask based on the depth information of the sensor to segment the shirt from the image (shirt segmentation); it then extracts the colour information of the shirt for recognition (shirt recognition). As the shirt segmentation is based only on depth information, it is light invariant compared to colour-based segmentation methods. The proposed colour recognition method introduces a confidence-based ruling method to classify matches. The proposed shirt segmentation and colour recognition method is tested using a variety of shirts, with the tracked human at a standstill or moving, in varying lighting conditions. Experiments show that the method can recognize shirts of varying colours and patterns robustly.
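The two-stage pipeline (depth-based shirt segmentation, then confidence-ruled colour matching) might be sketched as below. The depth band, histogram bin count, and confidence threshold are invented for illustration and are not the paper's values.

```python
import numpy as np

# Stage 1: segment the shirt as pixels whose depth lies within a band
# around the tracked person's depth (light-invariant, depth-only mask).
def shirt_mask(depth_m, person_depth, band=0.3):
    return np.abs(depth_m - person_depth) < band

# Stage 2: summarize the masked pixels as a normalized 3-D colour histogram.
def colour_signature(rgb, mask, bins=8):
    pix = rgb[mask]                                  # N x 3 masked pixels
    hist, _ = np.histogramdd(pix, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / max(hist.sum(), 1)

# Confidence-based ruling: accept a match if histogram intersection is high.
def matches(sig_a, sig_b, confidence=0.7):
    return np.minimum(sig_a, sig_b).sum() >= confidence

rgb = np.full((10, 10, 3), 200.0)                    # uniform shirt colour
depth = np.full((10, 10), 1.0)                       # person ~1 m away
mask = shirt_mask(depth, person_depth=1.0)
sig = colour_signature(rgb, mask)
print(bool(matches(sig, sig)))                       # a signature matches itself
```

Separating the depth mask from the colour test is what lets the colour model stay simple: it never sees background pixels.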

  16. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.

  17. 3D vision system assessment

    Science.gov (United States)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  18. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    Science.gov (United States)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free-floating experimental platform developed for the acquisition of long-duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh-generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver, and a combined rotation-translation maneuver. The effectiveness of each maneuver at generating slosh is determined by the deviation of the experiment's trajectory compared to a dry-mass trajectory. To fully capture the effect of liquid redistribution on the experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  19. 3D and beyond

    Science.gov (United States)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  20. 3D sensitivity of 6-electrode Focused Impedance Method (FIM)

    Science.gov (United States)

    Masum Iquebal, A. H.; Siddique-e Rabbani, K.

    2010-04-01

    The present work was taken up to gain an understanding of the depth sensitivity of the 6-electrode FIM developed by our laboratory earlier, so that it may be applied judiciously to the measurement of organs in 3D with electrodes on the skin surface. For a fixed electrode geometry, sensitivity is expected to depend on the depth, size, and conductivity of the target object. With current electrodes 18 cm apart and potential electrodes 5 cm apart, the depth sensitivity to spherical conductors, insulators, and pieces of potato of different diameters was measured. The sensitivity dropped sharply with depth, gradually leveling off to the background, and objects could be sensed down to a depth of about twice their diameter. The sensitivity at a given depth increases almost linearly with volume for objects of the same conductivity. These results increase confidence in the use of FIM for studying organs at depth within the body.

  1. Design of objective lenses to extend the depth of field based on wavefront coding

    Science.gov (United States)

    Zhao, Tingyu; Ye, Zi; Zhang, Wenzi; Huang, Weiwei; Yu, Feihong

    2008-03-01

    Wavefront coding extends the depth of field to a great extent with a simpler structure than a confocal microscope. With a cubic phase mask (CPM) placed at the STOP of the objective lens, blurred images are obtained on the charge-coupled device (CCD), which are then restored to sharp images by a Wiener filter. We propose that a single CPM be used in one microscope even though there are different objective lenses with different powers. The microscope proposed here is a wavefront coding microscope when the CPM is used at the STOP, and a traditional one when a plane plate is used instead. First, the STOP is made the last surface of the lens, and a plane plate with the same material and center thickness as the CPM is added at the STOP. Traditional objective lenses are designed on this basis, after which the wavefront coding system is designed with the plane plate replaced by a CPM. Second, the parameters of the CPMs in the different objective lenses are optimized to certain ranges based on a metric function of the stability of the modulation transfer function (MTF), and the optimal parameter is chosen from these ranges. A set of objective lenses is designed as an example with one CPM. The simulation results show that the depth of field of the 4X, 10X, 40X, 60X and 100X objective lenses with the same CPM can reach 400 μm, 40 μm, 24 μm, 16 μm and 2 μm respectively, much larger than the 55.5 μm, 8.5 μm, 1 μm, 0.4 μm and 0.19 μm of the traditional ones.
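The Wiener-filter restoration step can be sketched in the frequency domain: the CPM-blurred image is the sharp image convolved with the coded PSF, and the Wiener filter inverts that blur while damping noise. The PSF below is a mild stand-in for a real CPM point-spread function, and the image and noise constant K are assumptions.

```python
import numpy as np

# Wiener deconvolution: W = conj(H) / (|H|^2 + K), applied in the FFT domain.
# K plays the role of the noise-to-signal ratio (assumed constant here).
def wiener_restore(blurred, psf, K=1e-3):
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))                     # assumed scene
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0] = 0.7, 0.15, 0.15  # mild, well-conditioned blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_restore(blurred, psf)
```

The wavefront-coding benefit rests on the CPM making the PSF nearly invariant with defocus, so one such filter restores the whole extended depth of field.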

  2. Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning.

    Science.gov (United States)

    Song, Hangke; Liu, Zhi; Du, Huan; Sun, Guangling; Le Meur, Olivier; Ren, Tongwei

    2017-09-01

    This paper proposes a novel depth-aware salient object detection and segmentation framework via multiscale discriminative saliency fusion (MDSF) and bootstrap learning for RGBD images (RGB color images with corresponding depth maps) and stereoscopic images. By exploiting low-level feature contrasts, mid-level feature weighting factors and high-level location priors, various saliency measures on four classes of features are calculated based on multiscale region segmentation. A random forest regressor is learned to perform the discriminative saliency fusion (DSF) and generate the DSF saliency map at each scale, and the DSF saliency maps across multiple scales are combined to produce the MDSF saliency map. Furthermore, we propose an effective bootstrap learning-based salient object segmentation method, which is bootstrapped with samples based on the MDSF saliency map and learns multiple kernel support vector machines. Experimental results on two large datasets show how various categories of features contribute to the saliency detection performance and demonstrate that the proposed framework achieves better performance in both saliency detection and salient object segmentation.

  3. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    Science.gov (United States)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis of Goldin-Meadow (2000) and McNeill (1992), this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students to construct the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that in the laboratory setting language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA the ability to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a brief short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  4. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
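The merging of virtual objects into elemental images with depth and transparency might look like the following depth-test sketch: for each pixel, the nearer of the real scene and the virtual object wins, and a transparency parameter blends the virtual layer in. All arrays and the alpha value are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Merge one real and one virtual elemental image using their depth maps:
# where the virtual object is nearer, composite it over the real scene
# with transparency alpha; elsewhere keep the real scene.
def merge_elemental(real_rgb, real_depth, virt_rgb, virt_depth, alpha=1.0):
    virt_in_front = virt_depth < real_depth          # z-buffer style test
    out = real_rgb.astype(float).copy()
    blend = alpha * virt_rgb + (1 - alpha) * real_rgb
    out[virt_in_front] = blend[virt_in_front]
    return out

real_rgb = np.zeros((4, 4, 3)); real_depth = np.full((4, 4), 2.0)
virt_rgb = np.ones((4, 4, 3));  virt_depth = np.full((4, 4), 1.0)
virt_depth[0, 0] = 3.0                               # one pixel behind the scene
merged = merge_elemental(real_rgb, real_depth, virt_rgb, virt_depth, alpha=0.5)
print(merged[0, 0, 0], merged[1, 1, 0])              # -> 0.0 0.5
```

Repeating this per elemental image preserves the correct occlusion ordering in every view of the integral display.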

  5. Fast joint texture depth algorithm based on edge-modeling for 3D-HEVC coding

    Institute of Scientific and Technical Information of China (English)

    斯晓华; 王国中; 李国平; 赵海武; 滕国伟

    2016-01-01

    3D-HEVC introduces inter-view and inter-component coding tools for efficient coding of dependent views and depth data. Meanwhile, it inherits the quadtree-based coding structure and the complex rate-distortion optimization (RDO) process performed in HEVC, for both the texture and depth components, to determine the best coding mode and partition size. Although the encoding efficiency is improved significantly, the computational complexity is high. In fact, it is unnecessary to adopt the "try all and find the best" method in flat areas of the texture and depth views, and the percentage of flat area in the depth maps can reach 85%. To reduce the computational complexity, a fast 3D-HEVC algorithm is proposed based on edge modeling and the correlation between texture and depth. The proposed method divides the coding blocks into flat blocks and edge-containing blocks; for flat blocks, only the first level of the quadtree and the simplest PU partition size are tested. Redundant R-D checks are skipped for the edge-containing blocks based on the edge's direction. Experiments verify that the proposed algorithm reduces the computational complexity by 59.0% compared with the original 3D-HEVC encoder under the RA test case, with only a 3.8% BD-rate increase in the process of view synthesis.
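The flat/edge block classification underlying the proposed mode skipping can be sketched with a simple gradient test: a coding block whose maximum gradient magnitude stays under a threshold is treated as flat and gets only the cheapest partition check. The threshold and block size below are assumptions, and this gradient test is a stand-in for the paper's edge model.

```python
import numpy as np

# Classify fixed-size coding blocks of a (texture or depth) frame as "flat"
# or "edge" from the per-pixel gradient magnitude.
def classify_blocks(frame, block=8, thresh=10.0):
    gy, gx = np.gradient(frame.astype(float))   # gradients along rows, cols
    grad = np.hypot(gx, gy)
    h, w = frame.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            flat = grad[y:y + block, x:x + block].max() < thresh
            labels[(y, x)] = "flat" if flat else "edge"
    return labels

depth_map = np.zeros((16, 16))
depth_map[:, 12:] = 100.0                       # one sharp depth edge
labels = classify_blocks(depth_map)
print(labels[(0, 0)], labels[(0, 8)])           # -> flat edge
```

In a full encoder, "flat" blocks would skip the deeper quadtree levels and non-square PU checks of the RDO loop, which is where the reported complexity saving comes from.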

  6. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    Science.gov (United States)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD 5,000. This scanner uses visible light sensing to capture both structural and texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  7. Radar Plant and Measurement Technique for Determination of the Orientation and the Depth of Buried Objects

    DEFF Research Database (Denmark)

    1999-01-01

    A plant for generation of information indicative of the depth and the orientation of an object positioned below the surface of the ground is adapted to use electromagnetic radiation emitted from and received by an antenna system associated with the plant. The plant has a transmitter and a receiver for generation of the electromagnetic radiation in cooperation with the antenna system and for reception of the electromagnetic radiation reflected by the object in cooperation with the antenna system, respectively. The antenna system includes a plurality of individual antenna elements such as dipole ... the antenna system and thus polarizing the electromagnetic field around or in relation to the geometric center of the antenna system.

  8. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  9. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists. 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG). Covers 3D anim

  10. Determining next best view based on occlusion information in a single depth image of visual object

    Directory of Open Access Journals (Sweden)

    Shihui Zhang

    2016-12-01

    Full Text Available How to determine the camera's next best view is a challenging problem in the computer vision field. A next best view approach is proposed based on occlusion information in a single depth image. First, occlusion detection is performed on the depth image of the visual object in the current view to obtain the occlusion boundary and the nether adjacent boundary. Second, the external surface of the occluded region is constructed and modeled according to the occlusion boundary and the nether adjacent boundary. Third, the observation direction, observation center point, and area information of the external surface of the occluded region are solved. Then the set of candidate observation directions and the visual space of each candidate direction are determined. Finally, the next best view is achieved by solving the next best observation direction and the camera's observation position. The proposed approach requires no prior knowledge of the visual object, nor does it constrain the camera position to a specially appointed surface. Experimental results demonstrate that the approach is feasible and effective.
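    The direction-selection step in such next-best-view methods can be illustrated with a toy sketch (not the authors' algorithm): candidate viewing directions are scored by how directly they face the occluded region's outward surface normal, and the best-scoring direction is chosen. All names and values here are hypothetical.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def next_best_view(occluded_surface_normal, candidate_directions):
    """Pick the candidate viewing direction that looks most directly
    at the occluded region, i.e. the one most anti-parallel to the
    region's outward surface normal."""
    n = normalize(occluded_surface_normal)
    best, best_score = None, -2.0
    for d in candidate_directions:
        d = normalize(d)
        # a good viewing direction opposes the outward normal
        score = -(n[0] * d[0] + n[1] * d[1] + n[2] * d[2])
        if score > best_score:
            best_score, best = score, d
    return best, best_score

# occluded surface faces +x; candidates look along -x, -y and +z
views = [(-1, 0, 0), (0, -1, 0), (0, 0, 1)]
best, score = next_best_view((1, 0, 0), views)
```

In a full system the score would also weight the visible area of the occluded surface and reachable camera positions; this sketch keeps only the direction term.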

  11. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    Science.gov (United States)

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
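    The .pdb-to-.stl idea can be sketched without VMD. The hedged toy below parses the fixed-column coordinates of PDB ATOM records and writes an ASCII STL that places a small tetrahedron at each atom position; the marker size and tetrahedron shape are arbitrary illustration choices, not part of the published protocol.

```python
# Toy pdb -> stl conversion: one tiny tetrahedron per atom (illustrative only).
SCALE = 0.3
TET = [((1, 1, 1), (1, -1, -1), (-1, 1, -1)),
       ((1, 1, 1), (-1, 1, -1), (-1, -1, 1)),
       ((1, 1, 1), (-1, -1, 1), (1, -1, -1)),
       ((-1, 1, -1), (1, -1, -1), (-1, -1, 1))]

def parse_atoms(pdb_text):
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            # PDB fixed columns: x = 31-38, y = 39-46, z = 47-54
            atoms.append((float(line[30:38]), float(line[38:46]), float(line[46:54])))
    return atoms

def atoms_to_stl(atoms):
    out = ["solid molecule"]
    for (x, y, z) in atoms:
        for tri in TET:
            out.append("  facet normal 0 0 0")
            out.append("    outer loop")
            for (dx, dy, dz) in tri:
                out.append("      vertex %.3f %.3f %.3f"
                           % (x + SCALE * dx, y + SCALE * dy, z + SCALE * dz))
            out.append("    endloop")
            out.append("  endfacet")
    out.append("endsolid molecule")
    return "\n".join(out)

pdb = "ATOM      1  N   ALA A   1      11.104   6.134  -6.504  1.00  0.00           N\n"
stl = atoms_to_stl(parse_atoms(pdb))
```

VMD-based workflows instead render a molecular surface representation and export it, which produces far richer geometry than this one-marker-per-atom sketch.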

  13. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, such as interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig, rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  14. Automatic detection of artifacts in converted S3D video

    Science.gov (United States)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
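    The edge-sharpness comparison can be illustrated with a minimal sketch (not the authors' implementation): sharpness is proxied by the steepest intensity step along a 1-D profile taken across a boundary, and a view pair is flagged when one view's edge is much sharper than the other's. The ratio threshold is an assumed value.

```python
def edge_sharpness(profile):
    """Sharpness proxy: the steepest intensity step along a 1-D
    profile taken perpendicular to an object boundary."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

def sharpness_mismatch(profile_left, profile_right, ratio_threshold=2.0):
    """Flag binocular-rivalry risk when one view's edge is much
    sharper than the other's (hypothetical threshold)."""
    s_l = edge_sharpness(profile_left)
    s_r = edge_sharpness(profile_right)
    hi, lo = max(s_l, s_r), max(min(s_l, s_r), 1e-9)
    return hi / lo >= ratio_threshold

sharp = [0, 0, 0, 100, 100, 100]       # crisp step edge in one view
blurry = [0, 10, 30, 60, 90, 100]      # the same edge smeared after warping
```

A production detector would gather such profiles along matched boundary pixels found via the disparity map, rather than on hand-picked 1-D slices.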

  15. G.O.THERM.3D - Providing a 3D Atlas of Temperature in Ireland's Subsurface

    Science.gov (United States)

    Farrell, Thomas; Fullea, Javier

    2017-04-01

    We introduce the recently initiated project G.O.THERM.3D, which aims to develop a robust and unique model of temperature within Ireland's crust and to produce a 3D temperature atlas of the country. The temperature model will be made publicly available on an interactive online platform, and the project findings will be reported to appropriate state energy and geoscience bodies. The project objective is that an interactive, publicly available 3D temperature model will increase public awareness of geothermal energy. The aim is also that the project findings will focus and encourage geothermal resource exploration and will assist in the development of public policy on geothermal energy exploration, mapping, planning and exploitation. Previous maps of temperature at depth in Ireland's subsurface are heavily reliant on temperature observations in geographically-clustered, shallow boreholes. These maps also make insufficient allowance for near-surface perturbation effects (such as the palaeoclimatic effect), do not allow for the 3D variation of petrophysical parameters and do not consider the deep, lithospheric thermal structure. To develop a 3D temperature model of Ireland's crust, G.O.THERM.3D proposes to model both the compositional and thermal structure of the Irish crust using the LitMod3D geophysical-petrological modelling tool. LitMod3D uses an integrated approach that simultaneously accounts for multiple geophysical (heat-flow, gravity, topography, magnetotelluric, seismic) and petrological (thermal conductivity, heat-production, xenolith composition) datasets, where the main rock properties (density, electrical resistivity, seismic velocity) are thermodynamically computed based on the temperature and bulk rock composition. LitMod3D has been applied to study the lithosphere-asthenosphere boundary (LAB) beneath Ireland (at a depth of 100 km) and is typically used to investigate lithospheric-scale structures. In the previous studies focussing on the LAB beneath

  16. The effect of aberrations on objectively assessed image quality and depth of focus.

    Science.gov (United States)

    Águila-Carrasco, Antonio J Del; Read, Scott A; Montés-Micó, Robert; Iskander, D Robert

    2017-02-01

    The effects of aberrations on image quality and the objectively assessed depth of focus (DoF) were studied. Aberrometry data from 80 young subjects with a range of refractive errors was used for computing the visual Strehl ratio based on the optical transfer function (VSOTF), and then, through-focus simulations were performed in order to calculate the objective DoF (using two different relative thresholds of 50% and 80%; and two different pupil diameters) and the image quality (the peak VSOTF). Both lower order astigmatism and higher order aberration (HOA) terms up to the fifth radial order were considered. The results revealed that, of the HOAs, the comatic terms (third and fifth order) explained most of the variations of the DoF and the image quality in this population of subjects. Furthermore, computer simulations demonstrated that the removal of these terms also had a significant impact on both DoF and the peak VSOTF. Knowledge about the relationship between aberrations, DoF, image quality, and their interactions is essential in optical designs aiming to produce large values of DoF while maintaining an acceptable level of image quality. Comatic aberration terms appear to contribute strongly towards the configuration of both of these visually important parameters.
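    The objective DoF criterion described above can be sketched directly: given a through-focus curve of an image-quality metric (such as the VSOTF), the DoF is the width of the defocus interval over which the metric stays above a relative threshold of its peak. The curve values below are illustrative, not from the study.

```python
def depth_of_focus(defocus_vals, metric_vals, rel_threshold=0.5):
    """Objective DoF: width of the defocus interval over which the
    through-focus image-quality metric stays above rel_threshold
    times its peak (50% criterion; 80% is used analogously)."""
    peak = max(metric_vals)
    above = [d for d, m in zip(defocus_vals, metric_vals)
             if m >= rel_threshold * peak]
    return (max(above) - min(above)) if above else 0.0

defocus = [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0]   # diopters
vsotf = [0.05, 0.15, 0.40, 0.80, 1.00, 0.80, 0.40, 0.15, 0.05]    # normalized
dof_50 = depth_of_focus(defocus, vsotf, 0.5)   # -> 0.5 D
```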

  17. A Comparison of the Effects of Depth Rotation on Visual and Haptic Three-Dimensional Object Recognition

    Science.gov (United States)

    Lawson, Rebecca

    2009-01-01

    A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…

  18. Color 3D Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents a principle and a method of color 3D laser scanning measurement. Based on the fundamental monochrome 3D measurement study, color information capture, color texture mapping, coordinate computation and other techniques are performed to achieve color 3D measurement. The system is designed and composed of a line laser light emitter, one color CCD camera, a motor-driven rotary filter, a circuit card and a computer. Two steps in capturing the object's images in the measurement process: First...

  19. Improved method for 3D craniofacial reconstruction based on soft tissue depths of landmarks

    Institute of Scientific and Technical Information of China (English)

    热孜万古丽·夏米西丁; 耿国华; 邓擎琼; 赵万荣; 郑磊

    2016-01-01

    Most 3D craniofacial reconstruction methods rely on statistics of the soft-tissue depths at sparse landmarks located on the skull. The classical statistical approach classifies samples into several clusters according to their properties (gender, age and BMI) and then calculates the mean tissue depth for each cluster. However, each cluster covers a wide range of properties, so the reconstructed face is insensitive to slight changes in the properties and lacks individuality. This paper proposes an improved method to solve this problem. The method first constructs a head database from CT scans, reconstructing a 3D skull and face model for each sample, and locates 80 landmarks on each skull using a semi-automatic landmarking method. It then calculates the tissue depths at the 80 landmarks for all skulls and models the relationship between tissue depth and the properties at each landmark through support vector regression. When reconstructing the face for a given skull, the landmark tissue depths are first computed from the regression functions and the properties of the skull; a reference face model is then deformed with thin-plate spline functions to obtain the reconstructed face. Experimental results show that, compared with existing methods, this method obtains more accurate soft-tissue depths and improves the accuracy of craniofacial reconstruction.

  20. BUILDING A 3D PRINTER

    OpenAIRE

    Brdnik, Lovro

    2015-01-01

    This diploma thesis analyzes the current state of 3D printers on the market. The development and operating principles of 3D printers are presented, along with the types of 3D printers and their advantages and disadvantages. The structure and operation of stepper motors are presented in more detail, and measurements of the stepper motors were carried out. The software for operating 3D printers and the components needed for the build are described. The thesis addresses the question of whether building a 3D printer is more economical than investing in ...

  1. Towards real-time change detection in videos based on existing 3D models

    Science.gov (United States)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3d objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3d information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detecting changes in the 3d structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3d change detection can be performed against an existing 3d model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally we synthesize a second set of depth maps by rendering the existing 3d model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
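    The core comparison step can be sketched as follows (a minimal stand-in for the paper's pipeline, with an assumed threshold): pixels where the image-based depth and the model-rendered depth disagree by more than the threshold are marked as changed.

```python
def detect_changes(depth_estimated, depth_rendered, threshold=0.5):
    """Compare an image-based depth map against one rendered from an
    existing 3d model at the same camera pose; mark pixels whose depth
    difference exceeds a threshold (0.5 m here, an assumed value)."""
    rows, cols = len(depth_estimated), len(depth_estimated[0])
    return [[abs(depth_estimated[r][c] - depth_rendered[r][c]) > threshold
             for c in range(cols)] for r in range(rows)]

model_depth = [[10.0, 10.0], [10.0, 10.0]]   # depth rendered from the 3d model
scene_depth = [[10.0, 7.0], [10.0, 10.1]]    # estimated: new object at (0, 1)
mask = detect_changes(scene_depth, model_depth)
```

Small residual differences (such as the 0.1 m at the lower-right pixel) fall below the threshold and are treated as depth-estimation noise rather than change.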

  2. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at the true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of viewpoints without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  3. Influence of object location in cone beam computed tomography (NewTom 5G and 3D Accuitomo 170) on gray value measurements at an implant site

    NARCIS (Netherlands)

    Parsa, A.; Ibrahim, N.; Hassan, B.; van der Stelt, P.; Wismeijer, D.

    2014-01-01

    Objectives The aim of this study was to determine the gray value variation at an implant site with different object location within the selected field of view (FOV) in two cone beam computed tomography (CBCT) scanners. Methods A 1-cm-thick section from the edentulous region of a dry human mandible

  4. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    Science.gov (United States)

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  7. [Comparison of quality on digital X-ray devices with 3D-capability for ENT-clinical objectives in imaging of temporal bone and paranasal sinuses].

    Science.gov (United States)

    Knörgen, M; Brandt, S; Kösling, S

    2012-12-01

    Comparison of dosage and spatial resolution of digital X-ray devices with 3D capability in head and neck imaging. Three on-site X-ray devices, a general purpose multi-slice CT (CT), a dedicated cone-beam CT (CBCT) and the CT mode of a device for digital angiography (DSA) of the same generation were compared using paranasal sinus (PNS) and temporal bone imaging protocols. The radiation exposure was measured with a puncture measuring chamber on a CTDI head phantom as well as with chip-strate-dosimeters on an Alderson head phantom in the regions of the eyes and thyroid gland. By using the Alderson head phantom, the specific dosage of the X-ray device with regard to different protocols was read out. For the assessment of the high-contrast resolution of the devices, images of a self-made phantom were qualitatively analysed by six observers. The three devices showed marked variations in dosage and spatial resolution depending on the protocol and/or mode. In both parameters, CBCT was superior to CT and DSA using standard protocols, with the difference being less obvious for PNS imaging. For high-contrast investigations, CBCT is a remarkable option in head and neck radiology. © Georg Thieme Verlag KG Stuttgart · New York.

  8. 3D-Barolo: 3D fitting tool for the kinematics of galaxies

    NARCIS (Netherlands)

    Di Teodoro, E. M.; Fraternali, F.

    3D-Barolo (3D-Based Analysis of Rotating Object via Line Observations) or BBarolo is a tool for fitting 3D tilted-ring models to emission-line datacubes. BBarolo works with 3D FITS files, i.e. image arrays with two spatial and one spectral dimensions. BBarolo recovers the true rotation curve and

  9. Reducing multisensor satellite monthly mean aerosol optical depth uncertainty: 1. Objective assessment of current AERONET locations

    Science.gov (United States)

    Li, Jing; Li, Xichen; Carlson, Barbara E.; Kahn, Ralph A.; Lacis, Andrew A.; Dubovik, Oleg; Nakajima, Teruyuki

    2016-11-01

    Various space-based sensors have been designed and corresponding algorithms developed to retrieve aerosol optical depth (AOD), the very basic aerosol optical property, yet considerable disagreement still exists across these different satellite data sets. Surface-based observations aim to provide ground truth for validating satellite data; hence, their deployment locations should preferably contain as much spatial information as possible, i.e., high spatial representativeness. Using a novel Ensemble Kalman Filter (EnKF)-based approach, we objectively evaluate the spatial representativeness of current Aerosol Robotic Network (AERONET) sites. Multisensor monthly mean AOD data sets from Moderate Resolution Imaging Spectroradiometer, Multiangle Imaging Spectroradiometer, Sea-viewing Wide Field-of-view Sensor, Ozone Monitoring Instrument, and Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar are combined into a 605-member ensemble, and AERONET data are considered as the observations to be assimilated into this ensemble using the EnKF. The assessment is made by comparing the analysis error variance (that has been constrained by ground-based measurements) with the background error variance (based on satellite data alone). Results show that the total uncertainty is reduced by 27% on average and can exceed 50% in certain places. The uncertainty reduction also shows distinct seasonal patterns, corresponding to the spatial distribution of seasonally varying aerosol types, such as dust in the spring for the Northern Hemisphere and biomass burning in the fall for the Southern Hemisphere. Dust and biomass burning sites have the highest spatial representativeness, rural and oceanic sites can also represent moderate spatial information, whereas the representativeness of urban sites is relatively localized. A spatial score ranging from 1 to 3 is assigned to each AERONET site based on the uncertainty reduction
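    The variance-reduction measure can be illustrated with a scalar ensemble Kalman update at a single grid cell. This is a simplified sketch, not the study's 605-member implementation: the ensemble values and observation error are invented, and observation perturbation is omitted for brevity, which understates the analysis spread.

```python
def enkf_variance_reduction(ensemble, obs_value, obs_var):
    """Scalar ensemble Kalman filter update (H = 1): nudge each member
    toward the observation by the Kalman gain and report the relative
    background -> analysis variance reduction."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    bg_var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = bg_var / (bg_var + obs_var)           # K = P / (P + R)
    analysis = [x + gain * (obs_value - x) for x in ensemble]
    a_mean = sum(analysis) / n
    an_var = sum((x - a_mean) ** 2 for x in analysis) / (n - 1)
    return 1.0 - an_var / bg_var

# toy "multisensor AOD ensemble" at one grid cell; AERONET obs = 0.20
ensemble = [0.15, 0.22, 0.30, 0.18, 0.25, 0.12, 0.28, 0.21]
reduction = enkf_variance_reduction(ensemble, 0.20, 0.001)
```

A site with low-error observations in a region where satellites disagree strongly (large background variance) yields a large reduction, which is exactly what the paper's spatial-representativeness score rewards.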

  10. 3D PRINTER TECHNOLOGIES

    OpenAIRE

    Kolar, Nataša

    2016-01-01

    This diploma thesis presents the development of printing over time. 3D printers that use different 3D printing technologies are described in more detail. The various 3D printing technologies are presented, together with their applications and the prototypes or final products made with them. The thesis describes the entire process, from the idea, through preparation of the data and the printer, to producing a prototype or final product.

  11. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    Science.gov (United States)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas- based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
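    The region-growing step named above can be sketched on a 2-D class-label image, a toy stand-in for the 3-D voxel case; the labels and seed below are invented for illustration.

```python
from collections import deque

def region_grow(class_image, seed, target_label):
    """Extract the 4-connected region of `target_label` that contains
    `seed` from a 2-D class-label image (breadth-first flood fill)."""
    rows, cols = len(class_image), len(class_image[0])
    r0, c0 = seed
    if class_image[r0][c0] != target_label:
        return set()
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and class_image[nr][nc] == target_label):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# 0 = background, 1 = gray matter; two disconnected label-1 blobs
labels = [[0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 1]]
blob = region_grow(labels, (0, 1), 1)
```

Seeding inside one blob returns only that connected component; the knowledge-based step in the paper supplies such seeds from the relative positions of anatomical structures.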

  12. 3D Printing and Its Urologic Applications

    Science.gov (United States)

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  14. Single-shot 3D motion picture camera with a dense point cloud

    CERN Document Server

    Willomitzer, Florian

    2016-01-01

    We introduce a method and a 3D camera for single-shot 3D shape measurement with unprecedented features: the 3D camera does not rely on pattern codification and acquires object surfaces at the theoretical limit of information efficiency: up to 30% of the available camera pixels display independent (not interpolated) 3D points. The 3D camera is based on triangulation with two properly positioned cameras and a projected multi-line pattern, in combination with algorithms that solve the ambiguity problem. The projected static line pattern enables 3D acquisition of fast processes and the capture of 3D motion pictures. The depth resolution is at its physical limit, defined by electronic noise and speckle noise. The requisite low-cost technology is simple.
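    The underlying two-camera triangulation geometry reduces to the classic relation z = f·b/d between depth, focal length, baseline and disparity. A minimal sketch with assumed rig parameters:

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Classic rectified two-camera triangulation: depth z = f * b / d.
    Larger disparity means a closer surface point."""
    return focal_px * baseline_m / disparity_px

# assumed rig: 1000 px focal length, 10 cm baseline (illustrative values)
z_near = triangulate_depth(1000.0, 0.10, 50.0)   # ~2.0 m
z_far = triangulate_depth(1000.0, 0.10, 10.0)    # ~10.0 m
```

The speckle- and electronic-noise limit the abstract mentions enters through the uncertainty of the measured disparity d, which this relation converts into depth uncertainty.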

  15. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    Science.gov (United States)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided several hours after the separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.

  16. An interactive multiview 3D display system

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies will enable more effective visualization of 3D information. They yield a realistic representation of 3D objects and simplify our understanding of the complexity of 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multi-projectors and a horizontal optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  17. 3D laptop for defense applications

    Science.gov (United States)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  18. 3D mudeli koostamine Kinect v2 kaamera abil

    OpenAIRE

    Valgma, Lembit

    2016-01-01

    Kinect is an easy-to-use and affordable RGB-D acquisition device that provides both spatial and color information for captured pixels. This makes it an attractive alternative to regular 3D scanning devices, which usually cost significantly more and do not provide color information. The second generation of Kinect (v2) provides even better quality depth and color images to the user. This thesis describes and implements a method for 3D reconstruction using Kinect v2. Method suitability for various objects is ...
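    The core back-projection step of such RGB-D reconstruction can be sketched with the pinhole model (a simplified illustration; the intrinsics below are assumed placeholder values, not Kinect v2's calibrated parameters):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3-D camera-frame points with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:                       # 0 marks pixels with no depth
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 2.0],
         [2.0, 0.0]]                        # meters; 0 = invalid pixel
cloud = depth_to_points(depth, fx=365.0, fy=365.0, cx=0.5, cy=0.5)
```

Registering the color image onto these points (and fusing clouds from multiple poses) is what turns per-frame back-projection into a full colored 3D model.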

  19. Implications for the crustal Architecture in West Antarctica revealed by the means of depth-to-the-bottom of the magnetic source (DBMS) mapping and 3D FEM geothermal heat flux models

    Science.gov (United States)

    Dziadek, Ricarda; Gohl, Karsten; Kaul, Norbert

    2017-04-01

    The West Antarctic Rift System (WARS) is one of the largest rift systems in the world, which displays unique coupled relationships between tectonic processes and ice sheet dynamics. Palaeo-ice streams have eroded troughs across the Amundsen Sea Embayment (ASE) that today route warm ocean deep water to the West Antarctic Ice Sheet (WAIS) grounding zone and reinforce dynamic ice sheet thinning. Rift basins, which cut across West Antarctica's landward-sloping shelves, promote ice sheet instability. Young, continental rift systems are regions with significantly elevated geothermal heat flux (GHF), because the transient thermal perturbation to the lithosphere caused by rifting requires 100 m.y. to reach long-term thermal equilibrium. The GHF in this region is, especially on small scales, poorly constrained and suspected to be heterogeneous as a reflection of the distribution of tectonic and volcanic activity along the complex branching geometry of the WARS, which reflects its multi-stage history and structural inheritance. We investigate the crustal architecture and the possible effects of rifting history from the WARS on the ASE ice sheet dynamics, by the use of depth-to-the-bottom of the magnetic source (DBMS) estimates. These are based on airborne-magnetic anomaly data and provide an additional insight into the deeper crustal properties. With the DBMS estimates we reveal spatial changes at the bottom of the igneous crust and the thickness of the magnetic layer, which can be further incorporated into tectonic interpretations. The DBMS also marks an important temperature transition zone of approximately 580°C and therefore serves as a boundary condition for our numerical FEM models in 2D and 3D. On balance, and by comparison to global values, we find average GHF of 90 mWm-2 with spatial variations due to crustal heterogeneities and volcanic activities. This estimate is 30% more than commonly used in ice sheet models in the ASE region.

  20. 3D virtuel udstilling

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the courtyard of the School of Architecture, with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp.

  1. Interobserver variation in measurements of Cesarean scar defect and myometrium with 3D ultrasonography

    DEFF Research Database (Denmark)

    Madsen, Lene Duch; Glavind, Julie; Uldbjerg, Niels;

    Objectives: To evaluate the Cesarean scar defect depth and the residual myometrial thickness with 3-dimensional (3D) sonography with respect to interobserver variation. Methods: Ten women were randomly selected from a larger cohort of Cesarean scar ultrasound evaluations. All women were examined 6-16 months after their first Cesarean section with 2D transvaginal sonography and had 3D volumes recorded. Two observers independently evaluated "off-line" each of the stored 3D volumes. Residual myometrial thickness (RMT) and Cesarean scar defect depth (D) were measured in the sagittal plane with an interval … of Cesarean section scar size and residual myometrium needs further investigation.
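Interobserver variation for paired measurements such as RMT is commonly summarized with Bland-Altman bias and limits of agreement (one standard choice; the abstract does not state which statistic the authors used). A sketch on hypothetical readings:

```python
import statistics

def bland_altman(obs1, obs2):
    """Bland-Altman interobserver agreement for paired measurements.

    Returns (bias, lower, upper): the mean difference between observers and
    the 95% limits of agreement (bias +/- 1.96 SD of the differences).
    """
    diffs = [a - b for a, b in zip(obs1, obs2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical RMT readings (mm) from two observers on ten stored 3D volumes;
# the numbers are illustrative, not data from the study.
rmt_obs1 = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 3.5, 4.9, 5.6, 4.2]
rmt_obs2 = [4.3, 5.0, 3.9, 6.2, 4.5, 5.3, 3.4, 5.1, 5.5, 4.4]
bias, lo, hi = bland_altman(rmt_obs1, rmt_obs2)
print(f"bias={bias:.2f} mm, 95% LoA=({lo:.2f}, {hi:.2f}) mm")
```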

  2. Development of objective provision trees for Sodium-Cooled Fast Reactor Defense-in-depth evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Huichang [TUEV Rheinland Korea Ltd., Seoul (Korea, Republic of); Suh, Namduk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

    KALIMER is a sodium-cooled fast reactor being developed by the Korea Atomic Energy Research Institute (KAERI). In this paper, an objective provision tree (OPT) for KALIMER was developed and suggested. The developed OPT addresses the core heat removal safety function at defense-in-depth level 3. Using the OPT method, an evaluation of the defense-in-depth implementation of the KALIMER design features was attempted in this study. To utilize the design information of KALIMER, the challenges in the OPTs under development in this study were identified based on the system physical boundaries. This approach makes the identification of possible and postulated challenges much clearer, which will benefit the further identification of provisions in the KALIMER design. OPTs for the other levels of defense-in-depth and the other safety functions are under development.
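An objective provision tree, as described here, organizes postulated challenges to a safety function at a given defense-in-depth level together with the design provisions that address them. A minimal data-structure sketch; the KALIMER entries below are illustrative placeholders, not content from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Provision:
    """A design feature credited with preventing or mitigating a challenge."""
    name: str

@dataclass
class Challenge:
    """A postulated challenge to a safety function, identified (as in the
    abstract) from a system's physical boundaries."""
    name: str
    provisions: list = field(default_factory=list)

@dataclass
class ObjectiveProvisionTree:
    """One defense-in-depth level and safety function, with its challenges."""
    did_level: int
    safety_function: str
    challenges: list = field(default_factory=list)

    def uncovered(self):
        """Challenges with no identified provision: gaps in defense-in-depth."""
        return [c.name for c in self.challenges if not c.provisions]

opt = ObjectiveProvisionTree(3, "core heat removal")
opt.challenges = [
    Challenge("loss of primary sodium flow",
              [Provision("passive decay heat removal circuit")]),
    Challenge("loss of heat sink"),   # no provision identified yet
]
print(opt.uncovered())  # -> ['loss of heat sink']
```

Walking the tree for challenges without provisions is exactly the kind of completeness check the OPT method is meant to support.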

  3. MPML3D: Scripting Agents for the 3D Internet.

    Science.gov (United States)

    Prendinger, Helmut; Ullrich, Sebastian; Nakasone, Arturo; Ishizuka, Mitsuru

    2011-05-01

    The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multiuser online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.

  4. Intraoral 3D scanner

    Science.gov (United States)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This contrasts with the use of phase values alone (phasogrammetry) or classical triangulation (phase values plus camera image coordinates) for the determination of the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate, so errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection channel and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm, so the user can measure two or three teeth at one time. The system can thus be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.
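As a rough illustration of the core idea (matching absolute phase values between the two cameras along epipolar lines, rather than triangulating against the projector), the sketch below matches two synthetic, already-unwrapped phase rows of a rectified stereo pair. The monotonic phase ramp and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def match_by_phase(phase_left_row, phase_right_row):
    """For each pixel in a rectified left-image row, find the subpixel column
    in the right-image row carrying the same absolute fringe phase.

    Assumes the phases are already unwrapped and increase monotonically along
    the row (the epipolar line), as in a stereo fringe-projection setup;
    np.interp performs the subpixel lookup by inverting column -> phase.
    """
    cols_right = np.arange(phase_right_row.size, dtype=float)
    return np.interp(phase_left_row, phase_right_row, cols_right)

# Synthetic monotonic phase rows with a constant 3-pixel disparity.
x = np.arange(20, dtype=float)
phase_right = 0.5 * x            # phase ramp seen by the right camera
phase_left = 0.5 * (x + 3.0)     # same ramp shifted by 3 pixels
matched = match_by_phase(phase_left, phase_right)
disparity = matched - x          # ~3 px wherever the phase is in range
print(disparity[:15])
```

Because the match is phase-to-phase between two calibrated cameras, the absolute phase value never maps directly to a coordinate, which is the robustness property the abstract emphasizes.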

  5. Objects' 3D Modeling in Virtual Cockpit System%虚拟座舱系统中的三维建模方法

    Institute of Scientific and Technical Information of China (English)

    翟正军; 秦晓红; 李宗明

    2001-01-01

    In view of the characteristics of the virtual models in a virtual cockpit system, this paper elaborates on geometric modeling methods for regular and irregular objects and on model reduction, realizing the generation and simplification of photorealistic 3D models.

  6. 3D Reconstruction of Regular Objects Based on 2D Grey Image%基于二维灰度图像的规则形体三维重建

    Institute of Scientific and Technical Information of China (English)

    钱苏斌

    2012-01-01

    Deep research on 3D reconstruction from a 2D grey image was carried out, and a method was proposed that uses symmetry characteristics to reconstruct a regular object from a single image. The method extracts the regular object from the 2D grey image as the research target, applies corner detection to acquire valid characteristic points, then constructs the plane or axis of symmetry from this characteristic information, and finally combines the geometric properties of the object to realize the 3D reconstruction of the object surface. Experiments indicate that the method can conveniently and effectively reconstruct the 3D shape of a regular object from its 2D grey image.
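The symmetry-construction step, deriving an axis of symmetry from detected corner points, can be illustrated in 2D: for mirror-image corner pairs, the axis passes through the pair midpoints and is perpendicular to the pair directions. A least-squares stand-in under that assumption (not the paper's actual algorithm; the point pairs are made up):

```python
import numpy as np

def symmetry_axis(pairs):
    """Estimate a 2D reflection-symmetry axis from corner-point pairs.

    Each pair (p, q) is assumed to be mirror images: the axis passes through
    the midpoints and is perpendicular to the mean p->q direction. Returns
    (point_on_axis, unit_direction).
    """
    pairs = np.asarray(pairs, dtype=float)   # shape (n, 2, 2)
    mids = pairs.mean(axis=1)                # midpoints lie on the axis
    deltas = pairs[:, 1] - pairs[:, 0]       # p->q vectors, normal to the axis
    n = deltas.mean(axis=0)
    n /= np.linalg.norm(n)
    d = np.array([-n[1], n[0]])              # axis direction = normal rotated 90 deg
    return mids.mean(axis=0), d

# Corners of a square symmetric about the vertical line x = 1.
pairs = [((0.0, 0.0), (2.0, 0.0)), ((0.0, 1.0), (2.0, 1.0))]
point, direction = symmetry_axis(pairs)
print(point, direction)
```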

  7. 基于单幅图像大尺寸实际平面上的光点定位方法%3D reconstruction of feature point on large scale object surface from a single image

    Institute of Scientific and Technical Information of China (English)

    杨明; 仲小清; 霍炬

    2011-01-01

    For the design and development of a vision sensor for a ground test, a method for the 3D reconstruction of a feature point on a large-scale object surface from a single image is proposed. By establishing a mathematical model of the 3D coordinates of the feature point on the object surface and using the spatial ray through the feature point, the 3D coordinates of the feature point can be determined from a single image. According to their characteristics, object surfaces are classified into three types, high-order surface, block-plane, and block-surface, and the corresponding location method is introduced for each type. A simulation experiment was conducted to compare the accuracy of the three different 3D reconstruction methods, and the proposed method performed best. With a measurement precision of 1 mm over a range of 8 000 mm × 8 000 mm, the actual test data demonstrate that the proposed method is suitable for the 3D reconstruction of feature points on large-scale object surfaces.
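For the block-plane case described above, once the spatial ray through the imaged light spot is known from the single image, the 3D coordinates follow from a ray-plane intersection. A minimal sketch under assumed calibration data (the camera pose, the plane, and all names are illustrative, not the paper's):

```python
import numpy as np

def locate_on_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with a known plane to recover a 3D point.

    The ray (origin o, direction d) comes from the single image; the plane
    (point p0, normal n) comes from calibration. Solving (o + t d - p0) . n = 0
    for t gives the feature point's 3D coordinates.
    """
    o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
    p0, n = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the plane")
    t = ((p0 - o) @ n) / denom
    return o + t * d

# Camera at the origin looking down +Z at a calibrated plane z = 2000 mm.
pt = locate_on_plane([0, 0, 0], [0.1, 0.2, 1.0], [0, 0, 2000.0], [0, 0, 1.0])
print(pt)  # -> [ 200.  400. 2000.]
```

The high-order and block-surface types replace the plane equation with a surface model, but the ray construction from the single image is the same.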

  8. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation effectively achieves high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces.
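The ray-wise calibration idea, each light-field ray receiving its own phase-to-depth mapping coefficients from observations of a reference plane at known depths, can be sketched as follows. The linear model, calibration values, and names are illustrative assumptions, not the paper's actual mapping:

```python
import numpy as np

def fit_ray_mapping(phases, depths, deg=1):
    """Fit an independent phase->depth polynomial for one light-field ray.

    Mirrors the ray-based calibration idea: each ray gets its own mapping
    coefficients from measurements at known reference depths, so no global
    projector-camera model is needed.
    """
    return np.polyfit(phases, depths, deg)

# Calibration: one ray observed a reference plane at three known depths (mm).
cal_phase = np.array([1.0, 2.0, 3.0])
cal_depth = np.array([100.0, 150.0, 200.0])
coeffs = fit_ray_mapping(cal_phase, cal_depth)

# Measurement: apply this ray's mapping to a new phase reading.
depth = np.polyval(coeffs, 2.5)
print(f"{depth:.1f} mm")
```

In a full system one such coefficient set is stored per ray, and depth estimates from rays of different directions can be fused, which is what makes the multidirectional estimation robust on difficult surfaces.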

  9. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at professionals who already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender, and who want to experiment with something more complex than the average tutorials on the web. However, it is also aimed at intermediate Blender users who simply want to go a few steps further. It is taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and that you know basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  10. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    Science.gov (United States)

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer for proprietary reasons, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.

  11. Sliding Adjustment for 3D Video Representation

    Directory of Open Access Journals (Sweden)

    Galpin Franck

    2002-01-01

    This paper deals with video coding of static scenes viewed by a moving camera. We propose an automatic way to encode such video sequences using several 3D models. Contrary to prior art in model-based coding, where the 3D models have to be known, here the 3D models are automatically computed from the original video sequence. We show that several independent 3D models provide the same functionalities as one single 3D model and avoid some drawbacks of the previous approaches. To achieve this goal we propose a novel sliding-adjustment algorithm, which ensures consistency between successive 3D models. The paper presents a method to automatically extract the set of 3D models and the associated camera positions. The obtained representation can be used for reconstructing the original sequence, or virtual ones. It also enables 3D functionalities such as synthetic object insertion, lighting modification, or stereoscopic visualization. Results on real video sequences are presented.