WorldWideScience

Sample records for 3d image based

  1. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that can make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  2. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    3D models have come into wide use with the spread of freely available software. At the same time, enormous numbers of images can easily be acquired and are increasingly used to create these 3D models. However, creating 3D models from a huge number of images takes considerable time and effort, so efficiency in 3D measurement is required, while the accuracy of the measurement must also be maintained. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, so the image selection problem can be regarded as a combinatorial optimization problem to which the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and highly similar images are detected and removed. Experiments confirm the significance of the proposed method and indicate its potential for efficient and accurate 3D measurement.

  3. Augmented reality 3D display based on integral imaging

    Science.gov (United States)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which provides a double-sided 3D display feature.

  4. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close range photogrammetry based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and each has different approaches and methods suitable for image based 3D city modeling. The literature shows that, to date, no such comprehensive comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image based techniques, with comments on what can and cannot be done with each software package. Finally, the study concludes that every package has its own advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results. For large city

  5. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Exactly capturing three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, matching of motion and stereo images, 3D feature point correspondence, and resolving the motion parameters. Finally, experimental results are presented for acquiring the motion parameters of objects moving in a straight line with uniform velocity and with uniform acceleration, based on real binocular image sequences processed by the described method.

  6. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of all of the shapes expected from each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.

  7. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and images from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
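
    As a rough illustration of the transform stage described above (not the authors' full codec, which adds the separate low/high-frequency coding paths, arithmetic coding and the binary-search recovery step), the following sketch applies a 1D DCT along image rows followed by a 1D DST along columns using SciPy; the block of retained low frequencies and the quantisation step are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.fft import dct, dst, idct, idst

    def forward_dct_dst(image: np.ndarray) -> np.ndarray:
        """Row-wise 1D DCT followed by column-wise 1D DST (type II, orthonormal)."""
        rows = dct(image.astype(float), type=2, norm="ortho", axis=1)  # DCT on each row
        return dst(rows, type=2, norm="ortho", axis=0)                 # DST on each column

    def inverse_dct_dst(coeffs: np.ndarray) -> np.ndarray:
        """Invert the column DST first, then the row DCT."""
        rows = idst(coeffs, type=2, norm="ortho", axis=0)
        return idct(rows, type=2, norm="ortho", axis=1)

    def quantise_high_frequencies(coeffs: np.ndarray, keep: int = 32, step: float = 20.0) -> np.ndarray:
        """Keep the low-frequency corner untouched and coarsely quantise the rest
        (a stand-in for the paper's separate low/high-frequency coding paths)."""
        q = np.round(coeffs / step) * step
        q[:keep, :keep] = coeffs[:keep, :keep]
        return q

    if __name__ == "__main__":
        img = np.random.rand(128, 128) * 255  # placeholder image
        rec = inverse_dct_dst(quantise_high_frequencies(forward_dct_dst(img)))
        print("RMSE:", np.sqrt(np.mean((img - rec) ** 2)))
    ```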

  8. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we end with recommendations for 3D medical image interpolation in different situations.
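
    For reference, a minimal sketch of the classic one-dimensional cubic convolution kernel (Keys' form) with a sharpness control parameter a is given below; the paper's six 3D variants differ in how such a parameter is chosen and how slices are combined, which this sketch does not reproduce.

    ```python
    import numpy as np

    def cubic_kernel(s: np.ndarray, a: float = -0.5) -> np.ndarray:
        """Keys' cubic convolution kernel; a is the sharpness control parameter
        (a = -0.5 is the common default)."""
        s = np.abs(s)
        out = np.zeros_like(s, dtype=float)
        near = s <= 1
        far = (s > 1) & (s < 2)
        out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
        out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
        return out

    def interpolate_1d(samples: np.ndarray, x: float, a: float = -0.5) -> float:
        """Interpolate a 1D signal at fractional position x from its 4 nearest samples."""
        i = int(np.floor(x))
        positions = np.arange(i - 1, i + 3)
        idx = np.clip(positions, 0, len(samples) - 1)   # clamp at the borders
        weights = cubic_kernel(x - positions, a)
        return float(np.dot(samples[idx], weights))
    ```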

  9. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud as the primary data source, a digital orthophoto map as auxiliary data, and 3ds Max as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  10. Physically based analysis of deformations in 3D images

    Science.gov (United States)

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

    We present a physically based deformable model which can be used to track and to analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural lengths of the springs are set equal to zero and are replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides reduced algorithmic complexity and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.
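
    A minimal sketch of why zero-rest-length springs plus constant equilibrium forces give linear, per-coordinate dynamics is shown below; the node connectivity, mass, damping and time step are illustrative assumptions, not values from the paper, and the integrator is a simple semi-implicit Euler stand-in.

    ```python
    import numpy as np

    def build_stiffness(edges, n_nodes, k=1.0):
        """Stiffness matrix of zero-rest-length springs: the force on node i from
        edge (i, j) is k * (x_j - x_i), i.e. linear in the positions."""
        K = np.zeros((n_nodes, n_nodes))
        for i, j in edges:
            K[i, i] += k; K[j, j] += k
            K[i, j] -= k; K[j, i] -= k
        return K

    def step(x, v, K, f_eq, f_ext, mass=1.0, damping=0.1, dt=0.01):
        """One semi-implicit Euler step. x, v, f_eq, f_ext are (n_nodes, 3) arrays;
        because the force is linear, each coordinate column evolves independently
        under the same stiffness matrix K."""
        acc = (f_eq + f_ext - K @ x - damping * v) / mass
        v = v + dt * acc
        x = x + dt * v
        return x, v
    ```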

  11. Ultra-realistic 3-D imaging based on colour holography

    Science.gov (United States)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally printed colour holograms are described, together with their recent improvements. An alternative to silver-halide materials is panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend strongly on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  12. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of

  13. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  14. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
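
    The volume-overlap measures quoted above are straightforward to compute from binary masks; a minimal sketch (operating on generic masks, not the paper's data) is:

    ```python
    import numpy as np

    def overlap_measures(auto_mask: np.ndarray, gold_mask: np.ndarray) -> dict:
        """Volume-based similarity between an automated segmentation and a gold standard."""
        a, g = auto_mask.astype(bool), gold_mask.astype(bool)
        inter = np.logical_and(a, g).sum()
        union = np.logical_or(a, g).sum()
        return {
            "dice": 2.0 * inter / (a.sum() + g.sum()),
            "jaccard": inter / union,
            "true_positive_volume_fraction": inter / g.sum(),
            "relative_volume_difference": (a.sum() - g.sum()) / g.sum(),
        }
    ```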

  15. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, the effect of the number of sequential images on the accuracy of the 3D model reconstruction is analysed for a fixed camera projection angle. The factors affecting the 3D reconstruction are discussed, and the overall results of the analysis are summarised for the imaging platform prototype.

  16. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform a generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with a synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated modeling process, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2∼3 minutes.

  17. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Science.gov (United States)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

    3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets acquired using a freehand 3D US system. Deformation is provided by a 3D thin-plate spline (TPS). Our method is fundamentally different from previous ones in that the acquired scattered 2D US slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over traditional registration and spatial compounding methods: (i) we only perform one 3D US reconstruction, for the first acquired data set, therefore saving the computation time required to reconstruct subsequently acquired scans; (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction; and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations, allowing an additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available

  18. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast and high quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map as a constraint on stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computation on the FPGA (Altera Cyclone IV series), we configure a multi-core image matching system, so that stereo matching is performed on an embedded system. The simulation results demonstrate that this speeds up the stereo matching process, increases matching reliability and stability, realizes embedded computation, and expands the range of applications.
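
    The core idea, using the TOF depth map to restrict the per-pixel disparity search, can be sketched as a toy block-matching routine in Python; the window size, search margin and SAD cost here are illustrative assumptions, and the FPGA parallelisation described in the paper is not reproduced.

    ```python
    import numpy as np

    def tof_constrained_disparity(left, right, tof_disparity, margin=3, win=4):
        """Block matching where each pixel's search range is centred on the
        disparity predicted from the TOF depth map (tof_disparity), +/- margin."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(win, h - win):
            for x in range(win, w - win):
                d0 = int(tof_disparity[y, x])
                best, best_cost = d0, np.inf
                ref = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
                for d in range(max(0, d0 - margin), d0 + margin + 1):
                    if x - d - win < 0:
                        continue                      # candidate window outside image
                    cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                    cost = np.abs(ref - cand.astype(float)).sum()   # SAD matching cost
                    if cost < best_cost:
                        best, best_cost = d, cost
                disp[y, x] = best
        return disp
    ```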

  19. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features obtained from the fingerprint. However, the fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding colour texture information. A series of colour sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
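
    Fringe analysis in such systems typically starts by recovering a wrapped phase map from several phase-shifted sinusoidal patterns; a generic N-step sketch is shown below. The paper's optimum three-fringe-number selection and its temporal unwrapping strategy are not reproduced here, so this is only an assumed, minimal front end.

    ```python
    import numpy as np

    def wrapped_phase(patterns: np.ndarray) -> np.ndarray:
        """Wrapped phase from N equally phase-shifted fringe images.

        patterns: array of shape (N, H, W); the n-th image is assumed to have a
        phase shift of 2*pi*n/N. Returns the wrapped phase in (-pi, pi]."""
        n = patterns.shape[0]
        shifts = 2 * np.pi * np.arange(n) / n
        num = np.tensordot(np.sin(shifts), patterns, axes=1)   # sum_n I_n sin(delta_n)
        den = np.tensordot(np.cos(shifts), patterns, axes=1)   # sum_n I_n cos(delta_n)
        return -np.arctan2(num, den)
    ```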

  20. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    For humans and visual animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields, and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using an optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  1. Cytology 3D structure formation based on optical microscopy images

    Science.gov (United States)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimization of the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations is proposed. Based on the experimental results, the optimum number of layers was determined for scanning the object in depth while preserving a holistic perception when switching between layers.

  2. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, such as the operating room. Here, manifold medical disciplines cause a great variety of procedures, and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk of causing a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, yielding a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct.

  3. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Three main image-based approaches are generally used for generating virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup is developed and used for video recording of an area; image frames are created from the video data, and the minimum required and most suitable frames are selected for 3D processing. In the second section, a 3D model of the area is created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model is exported for adding to and merging with other pieces of the larger area; scaling and alignment of the 3D model are performed, and after texturing and rendering, a final photo-realistic textured 3D model is created. This 3D model can be turned into a walk-through model or a movie. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the department of civil engineering, Indian Institute of Technology, Roorkee; this campus acts as a prototype for a city. Aerial photography is restricted in many countries

  4. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    Science.gov (United States)

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  5. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Directory of Open Access Journals (Sweden)

    Yingzhi Kan

    2016-09-01

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  6. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilised, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  7. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. Based on visual assessment, the resulting model provides sufficient detail of the building.

  8. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  9. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  10. 3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph.

    Science.gov (United States)

    Xu, Wei-Hai; Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie; Wu, Jian-Huang

    2014-08-01

    Three-dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' segments of intracranial arteries based on magnetic resonance imaging. Three-dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results compared with the MRA findings (4 grades; grade 1: consistent with MRA and providing additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and providing probably misleading information). A 3D printed vessel segment was defined as successful if it ideally matched the MRA findings (grade 1 or 2). Seven responders marked the 3D printing results as "grade 1", while one marked "grade 4"; therefore, 87.5% of the clinicians considered the 3D printing successful. Our pilot study confirms the feasibility of using 3D printing techniques in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice.

  11. 3D Modeling of Transformer Substation Based on Mapping and 2D Images

    Directory of Open Access Journals (Sweden)

    Lei Sun

    2016-01-01

    A new method for building 3D models of transformer substations based on mapping and 2D images is proposed in this paper. The method segments equipment objects in 2D images using the k-means algorithm, with the cluster centres determined dynamically so that different shapes can be segmented; it then extracts feature parameters from the segmented objects using the FFT, retrieves similar objects from 3D databases, and builds the 3D models by computing the mapping data. The proposed method avoids the complex data collection and heavy workload of using a 3D laser scanner. The example analysis shows that the method can efficiently build coarse 3D models which meet the requirements for hazardous area classification and construction representations of transformer substations.

  12. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D buildings and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on co-linearity equations. The result shows that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  13. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Science.gov (United States)

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D buildings and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on co-linearity equations. The result shows that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  14. Image Sequence Fusion and Denoising Based on 3D Shearlet Transform

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2014-01-01

    We propose a novel algorithm for simultaneous image sequence fusion and denoising in the 3D shearlet transform domain. In general, most existing image fusion methods only consider combining the important information of the source images and do not deal with artifacts; if the source images contain noise, the noise may be transferred into the fused image together with the useful pixels. In the 3D shearlet transform domain, we propose that a recursive filter is first applied to the high-pass subbands to obtain denoised high-pass coefficients. The high-pass subbands are then combined using a select-maximum fusion rule based on a 3D pulse coupled neural network (PCNN), and the low-pass subband is fused using a weighted-sum rule. Experimental results demonstrate that the proposed algorithm yields encouraging results.
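
    The two fusion rules can be illustrated with a minimal coefficient-level sketch in plain NumPy; the 3D shearlet decomposition, the recursive pre-filter and the PCNN firing maps used in the paper are replaced here by a simple absolute-maximum rule and a fixed weight, so this is only an assumed stand-in applied per subband.

    ```python
    import numpy as np

    def fuse_highpass(c1: np.ndarray, c2: np.ndarray) -> np.ndarray:
        """Select-maximum rule for a high-pass subband: keep the coefficient with
        the larger magnitude (stands in for the PCNN-driven selection)."""
        return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

    def fuse_lowpass(c1: np.ndarray, c2: np.ndarray, w: float = 0.5) -> np.ndarray:
        """Weighted-sum rule for the low-pass (approximation) subband."""
        return w * c1 + (1.0 - w) * c2

    # Applied to every subband pair of a 3D multiscale decomposition of the two
    # (denoised) source volumes before the inverse transform reconstructs the fusion.
    ```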

  15. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Science.gov (United States)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  16. 3D-SIFT-Flow for atlas-based CT liver image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yan, E-mail: xuyan04@gmail.com [State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191, China and Research Institute of Beihang University in Shenzhen and Microsoft Research, Beijing 100080 (China); Xu, Chenchao, E-mail: chenchaoxu33@gmail.com; Kuang, Xiao, E-mail: kuangxiao.ace@gmail.com [School of Biological Science and Medical Engineering, Beihang University, Beijing 100191 (China); Wang, Hongkai, E-mail: wang.hongkai@gmail.com [Department of Biomedical Engineering, Dalian University of Technology, Dalian 116024 (China); Chang, Eric I-Chao, E-mail: eric.chang@microsoft.com [Microsoft Research, Beijing 100080 (China); Huang, Weimin, E-mail: wmhuang@i2r.a-star.edu.sg [Institute for Infocomm Research (I2R), Singapore 138632 (Singapore); Fan, Yubo, E-mail: yubofan@buaa.edu.cn [Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191 (China)

    2016-05-15

    Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundary, and 3D label transfer is effective and efficient for improving the registration accuracy.

  17. 3D-SIFT-Flow for atlas-based CT liver image segmentation.

    Science.gov (United States)

    Xu, Yan; Xu, Chenchao; Kuang, Xiao; Wang, Hongkai; Chang, Eric I-Chao; Huang, Weimin; Fan, Yubo

    2016-05-01

    In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundary, and 3D label transfer is effective and efficient for improving the registration accuracy.

  18. Midsagittal plane extraction from brain images based on 3D SIFT.

    Science.gov (United States)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-21

    Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on grey-level similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on-the-fly. The proposed method is evaluated on synthetic and in vivo datasets, of both normal and pathological cases, and validated by comparisons with state-of-the-art methods. Experimental results demonstrate that our method achieves real-time performance with better accuracy, yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
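
    The plane-regression step can be sketched as follows: each matched pair of mirror-symmetric keypoints votes for a symmetry plane through its midpoint, and a least-median-of-squares search over midpoint triples keeps the plane with the smallest median squared residual. This is a simplified, CPU-only stand-in for the paper's GPU-parallel pipeline; the trial count is an assumption.

    ```python
    import numpy as np

    def lmeds_symmetry_plane(pairs_a, pairs_b, n_trials=500, seed=None):
        """pairs_a, pairs_b: (N, 3) arrays of matched symmetric keypoints.
        Returns (point_on_plane, unit_normal) minimising the median squared
        distance of all pair midpoints to the candidate plane."""
        rng = np.random.default_rng(seed)
        mids = 0.5 * (pairs_a + pairs_b)
        best, best_med = None, np.inf
        for _ in range(n_trials):
            idx = rng.choice(len(mids), size=3, replace=False)
            p0, p1, p2 = mids[idx]
            normal = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(normal) < 1e-9:
                continue                      # degenerate (collinear) sample
            normal /= np.linalg.norm(normal)
            residuals = (mids - p0) @ normal  # signed point-to-plane distances
            med = np.median(residuals ** 2)
            if med < best_med:
                best, best_med = (p0, normal), med
        return best
    ```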

  19. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    studies and in vivo. Phantom measurements are compared with their corresponding reference values, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes that a high precision... if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges... For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...

  20. Comparison Between Two Generic 3D Building Reconstruction Approaches - Point Cloud Based vs. Image Processing Based

    Science.gov (United States)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. Analyzing orders of magnitude more 3D points, the point cloud based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image processing based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent, more differentiated semantic annotation by exploiting texture information.

  1. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, uses an alternative strategy in which information is acquired against a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
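
    A minimal sketch of the single-pixel measurement and reconstruction model is shown below; it ignores the pulsed time-of-flight depth channel that gives the third dimension in the paper, and it assumes idealised ±1 Hadamard patterns (in practice negative values are realised with differential measurements).

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def simulate_and_reconstruct(scene: np.ndarray) -> np.ndarray:
        """Single-pixel imaging with a full Hadamard basis: each pattern is projected,
        the photodiode records one number per pattern, and the image is recovered as
        a pattern-weighted sum of those measurements."""
        n = scene.size                                  # must be a power of 2, e.g. 32*32
        H = hadamard(n).astype(float)                   # rows are the projected patterns
        measurements = H @ scene.ravel()                # one photodiode value per pattern
        recon = (H.T @ measurements) / n                # inverse transform (H @ H.T = n*I)
        return recon.reshape(scene.shape)

    if __name__ == "__main__":
        img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0  # toy scene
        print(np.allclose(simulate_and_reconstruct(img), img))
    ```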

  2. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  3. ENHANCING CLOSE-UP IMAGE BASED 3D DIGITISATION WITH FOCUS STACKING

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2017-08-01

    Full Text Available The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological feature structures, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow Depth of Field (DoF) affects the image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. The extension of the DoF is a very important task that should be incorporated in the data collection to attain a better quality of the image set and a better 3D model. An extension of the DoF can be achieved with many methods and especially with the use of the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions include the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference model to which the 3D models were compared; the results are presented.
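
    A minimal focus-stacking sketch under stated assumptions: the images in the stack are already registered and grayscale, and local sharpness is measured by the variance of the Laplacian (one common choice; the paper does not prescribe this particular measure). Array shapes and the function name are illustrative.

    # Focus stacking: for each pixel keep the value from the slice where local
    # sharpness (variance of the Laplacian) is highest, extending the depth of field.
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def focus_stack(stack, window=9):
        stack = stack.astype(np.float64)
        lap = np.array([laplace(s) for s in stack])              # per-slice Laplacian
        mean = np.array([uniform_filter(l, window) for l in lap])
        mean_sq = np.array([uniform_filter(l * l, window) for l in lap])
        sharpness = mean_sq - mean ** 2                          # local variance
        best = np.argmax(sharpness, axis=0)                      # sharpest slice per pixel
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]

    # Usage on a synthetic stack of 5 aligned images, 100 x 120 pixels.
    stack = np.random.rand(5, 100, 120)
    fused = focus_stack(stack)
    print(fused.shape)   # (100, 120)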

  4. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architectures and heritage objects is becoming very common and is required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation, and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  5. A web-based 3D medical image collaborative processing system with videoconference

    Science.gov (United States)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new application approach for web-based synchronized collaborative processing and visualization of 3D medical images. A web-based videoconference function is also provided to enhance the performance of the whole system. All functions of the system are conveniently available through common web browsers, without any extra client-side installation. Finally, this paper evaluates the prototype system using 3D medical data sets, which demonstrates the good performance of our system.

  6. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    Institute of Scientific and Technical Information of China (English)

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasonic image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton tracking and extraction algorithm is presented, which extracts a connected skeleton to express the figure's features. Feature points of the connected skeleton are extracted automatically by repeatedly computing local curvature extrema. Initial registration is performed according to the barycenter of the skeleton. Thereafter, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. Example results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural differences in shape between different parts of an organ, while simultaneously eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.

  7. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A mutual information based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR body abdomen images. The Parzen Windows Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MRI abdomen images. By maximizing the MI between the CT and MR volume images, their overlap is maximized, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdomen CT and MRI data are used as experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically, without any prior processing steps such as segmentation and feature extraction, but with the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
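
    A sketch of the mutual-information similarity that drives such a registration, assuming a smoothed joint histogram as a simple stand-in for the Parzen-window density estimate; it is not the authors' exact PWDE implementation, and the function name and bin settings are illustrative.

    # Mutual information between two intensity images (e.g. a CT slice and the
    # correspondingly warped MR slice); registration maximizes this value.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mutual_information(a, b, bins=64, sigma=1.0):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        hist = gaussian_filter(hist, sigma)            # Parzen-like smoothing of the joint histogram
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)            # marginal of image a
        py = pxy.sum(axis=0, keepdims=True)            # marginal of image b
        nz = pxy > 0                                   # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    # Sanity check: an image shares more information with itself than with noise.
    a = np.random.rand(64, 64)
    print(mutual_information(a, a) > mutual_information(a, np.random.rand(64, 64)))   # True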

  8. A web-based solution for 3D medical image visualization

    Science.gov (United States)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. With this web-based solution, users can access the 3D medical image visualization service wherever the internet is available.

  9. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  10. A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images.

    Science.gov (United States)

    Gao, Hang; Choi, Hon Fai; Claus, Piet; Boonen, Steven; Jaecques, Siegfried; Van Lenthe, G Harry; Van der Perre, Georges; Lauriks, Walter; D'hooge, Jan

    2009-02-01

    This paper describes a fast convolution-based methodology for simulating ultrasound images in a 2-D/3-D sector format as typically used in cardiac ultrasound. The conventional convolution model is based on the assumption of a space-invariant point spread function (PSF) and typically results in linear images. These characteristics are not representative of cardiac data sets. The spatial impulse response method (IRM) has excellent accuracy in the linear domain; however, calculation time can become an issue when scatterer numbers become significant and when 3-D volumetric data sets need to be computed. As a solution to these problems, the current manuscript proposes a new convolution-based methodology (COLE) in which the data sets are produced by reducing the conventional 2-D/3-D convolution model to multiple 1-D convolutions (one for each image line). As an example, simulated 2-D/3-D phantom images are presented along with their gray scale histogram statistics. In addition, the computation time is recorded and contrasted to a commonly used implementation of IRM (Field II). It is shown that COLE can produce anatomically plausible images with local Rayleigh statistics but at improved calculation time (1200 times faster than the reference method).
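
    A simplified illustration of the line-wise convolution idea under stated assumptions: each image line is obtained by convolving the scatterer amplitudes along that line with a 1-D pulse, instead of one large 2-D/3-D convolution. This is not the published COLE code; the pulse, sizes, and function name are illustrative.

    # Per-line 1-D convolution of scatterer amplitudes with a pulse, followed by a
    # crude envelope detection and log compression to form a B-mode-like image.
    import numpy as np

    def simulate_lines(scatterers, pulse):
        # scatterers: (num_lines, samples_per_line) amplitude map along each beam line
        rf = np.array([np.convolve(line, pulse, mode="same") for line in scatterers])
        env = np.abs(rf)
        return 20 * np.log10(env / env.max() + 1e-6)   # log-compressed image

    # Usage: 64 beam lines, 512 depth samples, a Gaussian-modulated pulse.
    rng = np.random.default_rng(0)
    scat = rng.normal(size=(64, 512))
    t = np.linspace(-1, 1, 33)
    pulse = np.exp(-(t / 0.3) ** 2) * np.cos(2 * np.pi * 5 * t)
    image = simulate_lines(scat, pulse)
    print(image.shape)   # (64, 512)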

  11. Reproducibility study of 3D SSFP phase-based brain conductivity imaging

    NARCIS (Netherlands)

    Stehning, C.; Katscher, U.; Keupp, J.

    2012-01-01

    Noninvasive MR-based Electric Properties Tomography (EPT) forms a framework for an accurate determination of local SAR, and may provide a diagnostic parameter in oncology. 3D SSFP sequences were found to be a promising candidate for fast volumetric conductivity imaging. In this work, an in vivo study

  12. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively less work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem in 3D US denoising is that the computational complexity increases tremendously. Nonlocal means (NLM) provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, is used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
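
    A brute-force CPU illustration of the block-wise nonlocal means idea (the paper implements it on the GPU for whole 3D volumes and uses a Gamma noise model; a simple Gaussian patch-distance weighting is assumed here). Window sizes, the filtering parameter h, and the function name are illustrative.

    # 2D nonlocal means: every pixel becomes a weighted average of pixels whose
    # surrounding patches look similar, with weights decaying with patch distance.
    import numpy as np

    def nlm_denoise(img, patch=3, search=7, h=0.1):
        pr, sr = patch // 2, search // 2
        pad = pr + sr
        padded = np.pad(img, pad, mode="reflect")
        out = np.empty_like(img, dtype=np.float64)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                ci, cj = i + pad, j + pad
                ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                weights, values = [], []
                for di in range(-sr, sr + 1):
                    for dj in range(-sr, sr + 1):
                        ni, nj = ci + di, cj + dj
                        cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                        dist = np.mean((ref - cand) ** 2)       # patch dissimilarity
                        weights.append(np.exp(-dist / (h * h)))
                        values.append(padded[ni, nj])
                out[i, j] = np.average(values, weights=weights)
        return out

    # Tiny usage example; the GPU version makes this tractable for 3D volumes.
    noisy = np.clip(0.5 + 0.1 * np.random.randn(32, 32), 0, 1)
    print(nlm_denoise(noisy).std() < noisy.std())   # True: noise is reduced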

  13. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    Science.gov (United States)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  14. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    to produce high quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners is one of the factors for the widespread use of ultrasound imaging. The high price tag on the high quality 3-D......The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...

  15. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by features, and superimposing a VE built in the computer onto a real image taken by a CCD camera, and presents computer simulation results.

  16. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    Science.gov (United States)

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.

  17. [Rapid 2D-3D medical image registration based on CUDA].

    Science.gov (United States)

    Li, Lingzhi; Zou, Beiji

    2014-08-01

    The medical image registration between preoperative three-dimensional (3D) scan data and intraoperative two-dimensional (2D) images is a key technology in surgical navigation. Most previous methods need to generate 2D digitally reconstructed radiograph (DRR) images from the 3D scan volume data and then use a conventional image similarity function for comparison. This procedure involves a large amount of calculation, and it is difficult to achieve real-time processing. In this paper, using mixed geometric-feature and image-intensity characteristics, we propose a new similarity measure function for fast 2D-3D registration of preoperative CT and intraoperative X-ray images. This algorithm is easy to implement and the calculation process is very short, while the resulting registration accuracy can meet clinical requirements. In addition, the entire calculation process is well suited to highly parallel numerical computation; using CUDA hardware acceleration, the algorithm satisfies the requirement of real-time application in surgery.
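
    A toy sketch of the 2D-3D registration loop under stated assumptions: a digitally reconstructed radiograph (DRR) is generated from the CT volume by a parallel projection (summation along the ray direction), and the paper's mixed geometry/intensity similarity is replaced here by plain normalized cross-correlation. The coarse one-parameter search, sizes, and function names are illustrative.

    # Generate DRRs from the volume for candidate poses and score them against the
    # intraoperative X-ray; a real system optimizes all six rigid-body parameters.
    import numpy as np
    from scipy.ndimage import rotate

    def drr(volume, angle_deg):
        rotated = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
        return rotated.sum(axis=0)           # parallel-projection DRR

    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    volume = np.random.rand(64, 64, 64)
    xray = drr(volume, 7.0)                  # stand-in for the intraoperative X-ray
    best = max(range(-15, 16), key=lambda ang: ncc(drr(volume, ang), xray))
    print(best)   # 7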

  18. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of an adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction, and the adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM-selected facial features. Finally, geometry reconstruction is achieved by a linear weighted combination of the adaptively selected samples. A radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of the invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) only a single frontal face image is needed for highly automatic face reconstruction; (2) compared with former works, our reconstruction approach provides higher accuracy; (3) constraint-based RBF texture mapping provides a natural appearance for the reconstructed face.

  19. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available We propose a framework combining machine learning with dynamic optimization for automatically reconstructing a 3D scene from a single still image of an unstructured outdoor environment, based on monocular vision of an uncalibrated camera. After a first segmentation of the image, a searching-tree strategy based on Bayes' rule is used to identify the occlusion hierarchy of all areas. After a second, superpixel segmentation, the AdaBoost algorithm is applied in the integrated detection of depth cues from lighting, texture and material. Finally, all the factors above are optimized under constrained conditions, yielding the whole depth map of the image. The source image is then integrated with its depth map, in point-cloud or bilinear-interpolation style, to realize the 3D reconstruction. Experiments, in comparison with typical methods on the associated database, demonstrate that our method improves the reasonableness of the estimate of the overall 3D architecture of the image's scene to a certain extent, and that it does not need any manual assistance or any camera model information.

  20. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    Science.gov (United States)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of PB angular orientation with respect to a display panel. This was critical for both image color balancing and minimizing image resolution mismatch between horizontal and vertical directions. For evaluating uniformity of image brightness, we applied optical ray tracing simulations. The simulations took effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity at around sweet spots in viewing zones. However this was contradicted by real experimental results. We offer quantitative treatment of illuminance uniformity of view images to estimate misalignment of PB orientation, which could account for brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation due to practical restrictions of adjustment accuracy can induce substantial non-uniformity of view images' brightness. We find that image brightness non-uniformity critically depends on misalignment of PB angular orientation, for example, as slight as ≤ 0.01° in our system. This reveals that reducing misalignment of PB angular orientation from the order of 10^-2 to 10^-3 degrees can greatly improve the brightness uniformity.

  1. Dual optimization based prostate zonal segmentation in 3D MR images.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-05-01

    Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions: the central gland (CG) and peripheral zone (PZ), from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from only a single 3D T2-weighted (T2w) MR image, which makes use of the prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The formulated challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual optimization formulation to the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model derives an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images with a body-coil and 25 images with an endo-rectal coil. Experimental results demonstrate that the proposed method is capable of efficiently and accurately extracting both the prostate zones: CG and PZ, and the whole prostate gland from the input 3D prostate MR images, with a mean Dice similarity coefficient (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images; 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endo-rectal coil MR images. In addition, the experiments of intra- and inter-observer variability introduced by user initialization indicate a good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient-of-variation (CV) of DSC. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. 3-D ultrasonic strain imaging based on a linear scanning system.

    Science.gov (United States)

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe in a freehand manner. When the probe is moved along the sliding track, the corresponding positional measurements for the probe are transmitted in real time via a Bluetooth-based wireless communication module. In a single examination, the probe is scanned in two sweeps in which the height of the probe is adjusted by the holder to collect the pre- and post-compression radio-frequency echoes, respectively. To generate a 3-D strain image, a cubic volume, in which the voxels denote relative strains of tissues, is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined, and the pre-compression frames are re-sampled to correspond precisely to the post-compression frames. Then, a strain estimation method based on minimizing a cost function using dynamic programming is used to obtain the 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep, respectively. A software system is developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied in real clinical applications (e.g., musculoskeletal assessments).

  3. Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging

    Science.gov (United States)

    Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.

    2016-11-01

    Blood vessels are the only system that provides nutrients and oxygen to every part of the body. Many diseases have significant effects on blood vessel formation, so the vascular network can be a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by high costs, contrast agents, ionizing radiation, or a combination of these. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth compared to purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired on rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four of the six descriptors (VD, NB, DM, ICM) were statistically different, demonstrating the feasibility of the proposed approach to obtain a quantitative characterization of the vascular network from 3D photoacoustic images without any exogenous contrast agent, which can assess related microenvironmental changes.
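
    A hedged sketch of such a vessel-analysis pipeline on a 2D maximum-intensity projection (the paper operates on full 3D photoacoustic volumes): a vesselness filter enhances tubular structures, a threshold yields a vessel mask, and the mask is reduced to a one-pixel-wide skeleton from which branch statistics such as vascular density can be derived. The filter choice, threshold, and synthetic input are illustrative assumptions, not the authors' exact implementation.

    # Vesselness filtering + skeletonization of a 2D projection using scikit-image.
    import numpy as np
    from skimage.filters import frangi, threshold_otsu
    from skimage.morphology import skeletonize

    def vessel_skeleton(mip):
        vesselness = frangi(mip, black_ridges=False)   # enhance bright tubular structures
        mask = vesselness > threshold_otsu(vesselness)
        return skeletonize(mask)

    # Usage on a synthetic projection with one horizontal "vessel"; a vascular
    # density estimate would then be skeleton pixels per unit area.
    mip = np.zeros((128, 128))
    mip[60:64, 10:120] = 1.0
    skel = vessel_skeleton(mip)
    print(int(skel.sum()), "skeleton pixels")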

  4. Fusing Multiscale Charts into 3D ENC Systems Based on Underwater Topography and Remote Sensing Image

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2015-01-01

    Full Text Available The purpose of this study is to propose an approach to fuse multiscale charts into three-dimensional (3D) electronic navigational chart (ENC) systems based on underwater topography and remote sensing images. This is the first time that the fusion of multiscale standard ENCs in a 3D ENC system has been studied. First, a view-dependent visualization technology is presented for determining the display condition of a chart. Second, a map sheet processing method is described for dealing with the map sheet splicing problem. A processing order called the "3D order" is designed to adapt to the characteristics of the chart. A map sheet clipping process is described to deal with the overlap between adjacent map sheets, and our strategy for map sheet splicing is proposed. Third, the rendering method for ENC objects in the 3D ENC system is introduced. Fourth, our picking method for ENC objects is proposed. Finally, we implement the above methods in our system, the automotive intelligent chart (AIC) 3D electronic chart display and information system (ECDIS), and show that our method can handle the fusion problem well.

  5. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new vision coordinate measuring system, a single-camera 3-D coordinate measuring system based on optical probe imaging, is presented, and a new idea in vision coordinate measurement is proposed. A linear model is deduced which can resolve the six degrees of freedom of the optical probe to realize coordinate measurement of the object surface. The effects of some factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is feasible.

  6. [A 3D-ultrasound imaging system based on back-end scanning mode].

    Science.gov (United States)

    Qi, Jian; Chen, Yimin; Ding, Mingyue; Wei, Chiming

    2012-07-01

    A new scanning mode is proposed in which the front end of the probe is fixed while the back end makes a fan-shaped scanning movement. The new scanning mode successfully avoids the obstruction caused by the ribs. Based on the new scanning mode, a 3D ultrasound imaging system was built to acquire 2D data of a fetus phantom as well as livers and kidneys, demonstrating the effectiveness of the new scanning mode.

  7. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    Science.gov (United States)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of the building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
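
    A minimal sketch of the top-view encoding step, assuming a simple uniform grid: the airborne LiDAR point cloud is rasterized into a top-view height image so that point clouds and database models can be compared in the same image domain. The grid size, cell size, and function name are illustrative.

    # Rasterize an (N, 3) point cloud into a top-view depth (height) image by
    # keeping the highest return per cell; descriptors are then computed from it.
    import numpy as np

    def topview_depth_image(points, cell=0.5, grid=(128, 128)):
        xy = points[:, :2] - points[:, :2].min(axis=0)
        cols = np.clip((xy[:, 0] / cell).astype(int), 0, grid[1] - 1)
        rows = np.clip((xy[:, 1] / cell).astype(int), 0, grid[0] - 1)
        depth = np.full(grid, -np.inf)
        np.maximum.at(depth, (rows, cols), points[:, 2])   # highest point per cell
        depth[np.isinf(depth)] = 0.0                       # empty cells
        return depth

    # Usage: height/edge/plane features would be extracted from this image for
    # both the query point cloud and the database building models.
    pts = np.random.rand(10000, 3) * [60, 60, 20]
    img = topview_depth_image(pts)
    print(img.shape, round(float(img.max()), 2))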

  8. IMAGE-BASED AIRBORNE LiDAR POINT CLOUD ENCODING FOR 3D BUILDING MODEL RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Y.-C. Chen

    2016-06-01

    Full Text Available With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of the building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show

  9. A PC-based 3D imaging system: algorithms, software, and hardware considerations.

    Science.gov (United States)

    Raya, S P; Udupa, J K; Barrett, W A

    1990-01-01

    Three-dimensional (3D) imaging in medicine is known to produce easily and quickly derivable, medically relevant information, especially in complex situations. We intend to demonstrate in this paper that, with an appropriate choice of approaches and a proper design of algorithms and software, it is possible to develop a low-cost 3D imaging system that can provide a level of performance sufficient to meet the daily case load in an individual or even group-practice situation. We describe hardware considerations of a generic system and give an example of a specific system we used for our implementation. Given a 3D image as a stack of slices, we generate a packed binary cubic voxel array by combining segmentation (density thresholding), interpolation, and packing in an efficient way. Since threshold-based segmentation is very often not perfect, object-like structures and noise clutter the binary scene. We utilize an effective mechanism to isolate the object from this clutter by tracking a specified, connected surface of the object. The surface description thus obtained is rendered to create a depiction of the surface on a 2D display screen. Efficient implementation of hidden-part removal and image-space shading and a simple and fast antialiasing technique provide a level of performance which otherwise would not have been possible in a PC environment. We outline our software, emphasizing some design aspects, and present some clinical examples.

  10. Internet2-based 3D PET image reconstruction using a PC cluster.

    Science.gov (United States)

    Shattuck, D W; Rapela, J; Asma, E; Chatzioannou, A; Qi, J; Leahy, R M

    2002-08-07

    We describe an approach to fast iterative reconstruction from fully three-dimensional (3D) PET data using a network of Pentium III PCs configured as a Beowulf cluster. To facilitate the use of this system, we have developed a browser-based interface using Java. The system compresses PET data on the user's machine, sends these data over a network, and instructs the PC cluster to reconstruct the image. The cluster implements a parallelized version of our preconditioned conjugate gradient method for fully 3D MAP image reconstruction. We report on the speed-up factors using the Beowulf approach and the impacts of communication latencies in the local cluster network and the network connection between the user's machine and our PC cluster.

  11. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    Energy Technology Data Exchange (ETDEWEB)

    Wu Jian; Murphy, Martin J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-11-15

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for the head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on the features of the similarity surface near the point of solution. The supervised training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimizing attempts that differed only by the initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error threshold to determine whether that solution was successful or not. Supervised training was then used to train the RQE. The performance of the RQE was evaluated using the test data set that consisted of registration results that were not used in training. Results: The RQE constructed using mutual information had very good performance when tested using the test data sets, yielding a sensitivity, specificity, positive predictive value, and negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in the overall processing time. Conclusions: The authors' patient study has demonstrated very good performance of the proposed RQE when used with mutual information in identifying unsuccessful 3D/3D registrations for daily patient setup. The classifier
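
    A hedged sketch of the evaluator idea: a small feed-forward network classifies a registration as successful or unsuccessful from features of the similarity surface probed around the returned solution. The curvature-style features, network size, and stand-in training data below are illustrative assumptions, not the authors' exact feature set.

    # Probe the similarity metric around the solution and train a small classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def surface_features(similarity_fn, params, step=0.5):
        # Second-difference (curvature-like) probe along each transformation axis.
        base = similarity_fn(params)
        feats = []
        for i in range(len(params)):
            d = np.zeros_like(params)
            d[i] = step
            feats.append(similarity_fn(params + d) + similarity_fn(params - d) - 2 * base)
        return np.array(feats)

    # Training on stand-in features/labels (success = 1, failure = 0).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))            # placeholder for extracted surface features
    y = (X.mean(axis=1) > 0).astype(int)     # placeholder labels
    rqe = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    print(rqe.predict(X[:5]))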

  12. Image-based 3D scene analysis for navigation of autonomous airborne systems

    Science.gov (United States)

    Jaeger, Klaus; Bers, Karl-Heinz

    2001-10-01

    In this paper we describe a method for the automatic determination of sensor pose (position and orientation) relative to a 3D landmark or scene model. The method is based on geometric matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used, resulting in a symbolic description of the image data. Knowledge of the approximate sensor pose, measured for example by an IMU or GPS, enables an expected model projection to be estimated, which is used for solving the correspondence problem between image structures and model elements. These correspondences are the prerequisite for pose computation, carried out by nonlinear numerical optimization algorithms. We demonstrate the efficiency of the proposed method by navigation updates when approaching a bridge scenario and when flying over an urban area, where the data were taken with airborne infrared sensors in a high-oblique view. In doing so we simulated image-based navigation for target engagement and midcourse guidance, suited to the concepts of future autonomous systems like missiles and drones.

  13. 3D Image Acquisition System Based on Shape from Focus Technique

    Directory of Open Access Journals (Sweden)

    Pierre Gouton

    2013-04-01

    Full Text Available This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In the agronomic sciences, 3D acquisition of natural scenes is difficult due to the complex nature of the scenes. Our system is based on the Shape from Focus technique, initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform this technique. The Shape from Focus technique is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting multi-camera systems. Indeed, this problem occurs frequently in natural complex scenes like agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system. Once this focus measure has been computed, the depth map of the scene can be created.

  14. Pico-projector-based optical sectioning microscopy for 3D chlorophyll fluorescence imaging of mesophyll cells

    Science.gov (United States)

    Chen, Szu-Yu; Hsu, Yu John; Yeh, Chia-Hua; Chen, S.-Wei; Chung, Chien-Han

    2015-03-01

    A pico-projector-based optical sectioning microscope (POSM) was constructed using a pico-projector to generate structured illumination patterns. A net rate of 5.8 × 10^6 pixel/s and sub-micron spatial resolution in three dimensions (3D) were achieved. Based on the pico-projector’s flexibility in pattern generation, the characteristics of POSM with different modulation periods and at different imaging depths were measured and discussed. With the application of different modulation periods, 3D chlorophyll fluorescence imaging of mesophyll cells was carried out in freshly plucked leaves of four species without sectioning or staining. For each leaf, an average penetration depth of 120 μm was achieved. Increasing the modulation period along with the increment of imaging depth, optical sectioning images can be obtained with a compromise between the axial resolution and signal-to-noise ratio. After ∼30 min of imaging on the same area, photodamage was hardly observed. Taking advantage of the high speed and low damage of POSM, the investigation of the dynamic fluorescence responses to temperature changes was performed under three different treatment temperatures. The three embedded blue, green and red light-emitting diode light sources were applied to observe the responses of the leaves with different wavelength excitation.

  15. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Science.gov (United States)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
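
    A hedged sketch of the motion-model idea under simplifying assumptions: the deformation vector fields (DVFs) from the respiratory-correlated 4D acquisition are reduced with PCA, the leading mode coefficients are fitted to the motion observed in a fast 2D slice, and a full 3D DVF is synthesized from them. The dimensions, the two-mode truncation, and the random stand-in data are illustrative, not the authors' protocol.

    # PCA motion model: fit mode coefficients from a partially observed (2D) DVF
    # and reconstruct the full-volume DVF.
    import numpy as np

    rng = np.random.default_rng(0)
    n_phases, n_voxels = 10, 5000
    dvfs = rng.normal(size=(n_phases, n_voxels))       # pre-beam 4D-MRI DVFs (flattened)

    mean = dvfs.mean(axis=0)
    U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    modes = Vt[:2]                                     # first two respiratory modes

    observed = rng.choice(n_voxels, size=400, replace=False)   # voxels seen in the 2D slice
    true_coeffs = np.array([3.0, -1.5])
    slice_motion = mean[observed] + true_coeffs @ modes[:, observed]

    coeffs, *_ = np.linalg.lstsq(modes[:, observed].T,
                                 slice_motion - mean[observed], rcond=None)
    full_dvf = mean + coeffs @ modes                   # synthesized 3D DVF
    print(np.round(coeffs, 2))                         # close to [ 3.  -1.5]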

  16. Technique: imaging earliest tooth development in 3D using a silver-based tissue contrast agent.

    Science.gov (United States)

    Raj, Muhammad T; Prusinkiewicz, Martin; Cooper, David M L; George, Belev; Webb, M Adam; Boughner, Julia C

    2014-02-01

    Looking in microscopic detail at the 3D organization of initiating teeth within the embryonic jaw has long proved technologically challenging because of the radio-translucency of these tiny un-mineralized oral tissues. Yet 3D image data showing changes in the physical relationships among developing tooth and jaw tissues are vital to understand the coordinated morphogenesis of vertebrate teeth and jaws as an animal grows and as species evolve. Here, we present a new synchrotron-based scanning solution to image odontogenesis in 3D and in histological detail using a silver-based contrast agent. We stained fixed, intact wild-type mice aged embryonic (E) day 10 to birth with 1% Protargol-S at 37°C for 12-32 hr. Specimens were scanned at 4-10 µm pixel size at 28 keV, just above the silver K-edge, using micro-computed tomography (µCT) at the Canadian Light Source synchrotron. Synchrotron µCT scans of silver-stained embryos showed even the earliest visible stages of tooth initiation, as well as many other tissue types and structures, in histological detail. Silver stain penetration was optimal for imaging structures in intact embryos E15 and younger. This silver stain method offers a powerful yet straightforward approach to visualize, at high resolution and in 3D, the earliest stages of odontogenesis in situ, and demonstrates the importance of studying the tooth organ in all three planes of view. Copyright © 2013 Wiley Periodicals, Inc.

  17. Image-based Virtual Exhibit and Its Extension to 3D

    Institute of Scientific and Technical Information of China (English)

    Ming-Min Zhang; Zhi-Geng Pan; Li-Feng Ren; Peng Wang

    2007-01-01

    In this paper we introduce an image-based virtual exhibition system, especially for clothing products. It provides a powerful material substitution function, which is very useful for customized clothing building. A novel color substitution algorithm and two texture morphing methods are designed to ensure realistic substitution results. To extend the system to 3D, model reconstruction based on photos is needed. Thus we present an improved method for modeling the human body. It deforms a generic model with shape details extracted from pictures to generate a new model. Our method begins with model image generation, followed by silhouette extraction and segmentation. Then it builds a mapping between the pixels inside every pair of silhouette segments in the model image and in the picture. Our mapping algorithm is based on a slice space representation that conforms to the natural features of the human body.

  18. A flexible new method for 3D measurement based on multi-view image sequences

    Science.gov (United States)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a fundamental part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential matrix calculation is designed, and the essential matrix is calculated using an improved a contrario RANSAC filter method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors caused by the added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
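
    A hedged sketch of Hellinger-kernel descriptor matching (the RootSIFT-style transform): SIFT descriptors are L1-normalized and square-rooted so that ordinary Euclidean distance between the transformed vectors corresponds to the Hellinger distance between the original histograms. The descriptors are assumed to come from any SIFT implementation; the ratio threshold and function names are illustrative.

    # Hellinger-kernel (RootSIFT-style) matching with Lowe's ratio test.
    import numpy as np

    def to_rootsift(desc, eps=1e-7):
        desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)   # L1 normalize
        return np.sqrt(desc)

    def match(desc_a, desc_b, ratio=0.8):
        a, b = to_rootsift(desc_a), to_rootsift(desc_b)
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)       # pairwise distances
        nn = np.argsort(d, axis=1)[:, :2]                               # two nearest neighbours
        keep = d[np.arange(len(a)), nn[:, 0]] < ratio * d[np.arange(len(a)), nn[:, 1]]
        return np.column_stack([np.nonzero(keep)[0], nn[keep, 0]])      # (index_a, index_b) pairs

    # Usage: the resulting matches feed the essential-matrix estimation and filtering.
    da = np.random.rand(50, 128).astype(np.float32)
    db = np.random.rand(60, 128).astype(np.float32)
    print(match(da, db).shape)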

  19. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    continue to perform 3D correlation imaging for the residual gravity data. After several iterations, we can obtain satisfactory results. Newly developed general-purpose computing technology from the Nvidia GPU (Graphics Processing Unit) has been put into practice and has received widespread attention in many areas. Based on the GPU programming model and two parallel levels, five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are defined as the dimensions of blocks and the three outer loops are defined as the dimensions of threads, thus realizing the double-loop block calculation. Theoretical and real gravity data tests show that the results are reliable and the computing time is greatly reduced. Acknowledgments We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).

  20. 3D nanostructure reconstruction based on the SEM imaging principle, and applications.

    Science.gov (United States)

    Zhu, Fu-Yun; Wang, Qi-Qi; Zhang, Xiao-Sheng; Hu, Wei; Zhao, Xin; Zhang, Hai-Xia

    2014-05-09

    This paper addresses a novel 3D reconstruction method for nanostructures based on the scanning electron microscopy (SEM) imaging principle. In this method, the shape from shading (SFS) technique is employed, to analyze the gray-scale information of a single top-view SEM image which contains all the visible surface information, and finally to reconstruct the 3D surface morphology. It offers not only unobstructed observation from various angles but also the exact physical dimensions of nanostructures. A convenient and commercially available tool (NanoViewer) is developed based on this method for nanostructure analysis and characterization of properties. The reconstruction result coincides well with the SEM nanostructure image and is verified in different ways. With the extracted structure information, subsequent research of the nanostructure can be carried out, such as roughness analysis, optimizing properties by structure improvement and performance simulation with a reconstruction model. Efficient, practical and non-destructive, the method will become a powerful tool for nanostructure surface observation and characterization.

  1. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    Science.gov (United States)

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  2. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    Directory of Open Access Journals (Sweden)

    Zichun Zhong

    2016-01-01

    Full Text Available By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  3. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has gotten an increasingly attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom......, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...

  4. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    Science.gov (United States)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image; this method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  5. Superresolution of 3-D computational integral imaging based on moving least square method.

    Science.gov (United States)

    Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho

    2014-11-17

    In this paper, we propose an edge directive moving least square (ED-MLS) based superresolution method for computational integral imaging reconstruction (CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, we propose the edge directive moving least square (ED-MLS) based superresolution method which utilizes the properties of the moving least square. The proposed ED-MLS based superresolution takes the direction of the edge into account in the moving least square reconstruction to deal with the abrupt brightness changes in the edge regions, and is less sensitive to the registration error. Furthermore, we propose a framework which shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced, and that a high resolution reconstructed 3-D image can be obtained with the proposed method.
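
    As a brief illustration (a sketch under stated assumptions, not the ED-MLS method itself), plain moving least squares interpolates an intensity at a target point by a weighted local polynomial fit over nearby scattered samples; the Gaussian weight, first-order basis and the bandwidth h below are illustrative choices.

        import numpy as np

        def mls_value(x, y, xs, ys, vs, h=1.0):
            """Estimate an intensity at (x, y) from scattered samples (xs, ys, vs)."""
            w = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * h ** 2))      # Gaussian weights
            B = np.column_stack([np.ones_like(xs), xs - x, ys - y])            # basis: 1, dx, dy
            sw = np.sqrt(w)
            coeff, *_ = np.linalg.lstsq(B * sw[:, None], vs * sw, rcond=None)  # weighted LS fit
            return coeff[0]                                                    # fitted value at (x, y)

    Evaluating such a fit at every pixel of the high-resolution grid yields a superresolved image; ED-MLS additionally shapes the weights along the local edge direction.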

  6. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  7. A hyperspectral images compression algorithm based on 3D bit plane transform

    Science.gov (United States)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    According to the analyses of the hyper-spectral images, a new compression algorithm based on a 3-D bit plane transform is proposed. The spectral correlation coefficient is higher than the spatial one. The algorithm is proposed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce the correlation when neighboring pixels have similar values. The algorithm calculates the horizontal, vertical and spectral bit plane transforms sequentially. As with the spectral bit plane transform, the algorithm can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized in a parallel computing model, which improves the calculation efficiency and saves processing time greatly. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyper-spectral image compression system while efficiently reducing the cost of computation and memory usage.
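
    For context (an illustrative sketch only; the full 3-D transform across the horizontal, vertical and spectral directions is not reproduced here), the elementary operation is slicing a hyperspectral cube into bit planes:

        import numpy as np

        def bit_planes(cube):
            """cube: uint8 array (bands, rows, cols) -> (8, bands, rows, cols) array of 0/1 planes."""
            return np.stack([(cube >> b) & 1 for b in range(8)], axis=0)

        cube = np.random.randint(0, 256, size=(4, 8, 8), dtype=np.uint8)   # toy hyperspectral cube
        planes = bit_planes(cube)
        print(planes.shape)                                                # (8, 4, 8, 8)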

  8. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    Science.gov (United States)

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.
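
    As a loose illustration of the sparsity-constrained reconstruction idea (a generic ISTA sketch with a toy encoding matrix, not the paper's joint model-based dynamic ASL reconstruction):

        import numpy as np

        def ista(E, y, lam=0.05, iters=200):
            """Solve min_x 0.5*||E x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
            x = np.zeros(E.shape[1])
            L = np.linalg.norm(E, 2) ** 2                             # Lipschitz constant of the gradient
            for _ in range(iters):
                z = x - E.T @ (E @ x - y) / L                         # gradient step on the data term
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0) # soft threshold
            return x

    In the paper's setting, the encoding operator would represent the accelerated spiral acquisition with parallel imaging, and the constraint is enforced jointly across observation times through the perfusion time-course model.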

  9. Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2016-10-01

    Full Text Available A method for 3D image representation that reduces the number of frames based on characteristics of human eyes is proposed, together with a representation of 3D depth achieved by changing the pixel transparency. Through experiments, it is found that the proposed method allows the number of frames to be reduced to 1/6 of the original. Also, it can represent the 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.

  10. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    Full Text Available In this paper, we present a novel approach to three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the ‘S’ part of the SVD are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are done: one is the Mean and Maximum curvature pair and the other is the Gaussian and Mean curvature pair. These are used to compare the results for the better recognition rate. The automated 3D face recognition system is evaluated in different settings, such as frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. The pose variations of the 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and curvature mapping is then applied on the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
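
    As an aside (a minimal sketch using the standard Monge-patch formulas, not the authors' code), Gaussian and mean curvature maps can be computed from a range image z(x, y) with finite differences; the maximum and minimum curvatures then follow as H +/- sqrt(H^2 - K).

        import numpy as np

        def curvature_maps(z):
            """z: 2D range image -> (Gaussian curvature map, mean curvature map)."""
            zy, zx = np.gradient(z)           # first derivatives (rows = y, cols = x)
            zxy, zxx = np.gradient(zx)        # second derivatives
            zyy, _ = np.gradient(zy)
            g = 1.0 + zx ** 2 + zy ** 2
            K = (zxx * zyy - zxy ** 2) / g ** 2
            H = ((1 + zy ** 2) * zxx - 2 * zx * zy * zxy + (1 + zx ** 2) * zyy) / (2 * g ** 1.5)
            return K, H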

  11. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing.

    Science.gov (United States)

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-06-22

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing a matching cost encoding the contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  12. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing a matching cost encoding the contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  13. Statistical Inverse Ray Tracing for Image-Based 3D Modeling.

    Science.gov (United States)

    Liu, Shubao; Cooper, David B

    2014-10-01

    This paper proposes a new formulation and solution to image-based 3D modeling (aka "multi-view stereo") based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately through optimizing a physically sound image generation model based on volumetric ray tracing. Together with geometric priors, these are combined into a Bayesian formulation known as a Markov random field (MRF) model. This MRF model is different from typical MRFs used in image analysis in the sense that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with the large clique size, an algorithm with linear computational complexity is developed by exploiting, using dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.

  14. EDGE BASED 3D INDOOR CORRIDOR MODELING USING A SINGLE IMAGE

    Directory of Open Access Journals (Sweden)

    A. Baligh Jahromi

    2015-08-01

    Full Text Available Reconstruction of the spatial layout of indoor scenes from a single image is inherently an ambiguous problem. However, indoor scenes are usually comprised of orthogonal planes. The regularity of the planar configuration (scene layout) is often recognizable, which provides valuable information for understanding indoor scenes. Most current methods define the scene layout as a single cubic primitive. This domain-specific knowledge is often not valid in many indoor environments where multiple corridors are linked to each other. In this paper, we aim to address this problem by hypothesizing and verifying multiple cubic primitives representing the indoor scene layout. This method utilizes middle-level perceptual organization, and relies on finding the ground-wall and ceiling-wall boundaries using detected line segments and the orthogonal vanishing points. A comprehensive interpretation of these edge relations is often hindered by shadows and occlusions. To handle this problem, the proposed method introduces virtual rays which aid in the creation of a physically valid cubic structure by using the orthogonal vanishing points. The straight line segments are extracted from the single image and the orthogonal vanishing points are estimated by employing the RANSAC approach. Many scene layout hypotheses are created by intersecting random line segments and virtual rays of the vanishing points. The created hypotheses are evaluated by a geometric reasoning-based objective function to find the hypothesis that best fits the image. The model hypothesis with the highest score is then converted to a 3D model. The proposed method is fully automatic and no human intervention is necessary to obtain an approximate 3D reconstruction.

  15. 3D IMAGE BASED GEOMETRIC DOCUMENTATION OF THE TOWER OF WINDS

    Directory of Open Access Journals (Sweden)

    M. S. Tryfona

    2016-06-01

    Full Text Available This paper describes and investigates the implementation of contemporary, almost entirely image-based techniques for the three dimensional geometric documentation of the Tower of the Winds in Athens, which is a unique and very special monument of the Roman era. These techniques and related algorithms were implemented using a well-known piece of commercial software with extreme caution in the selection of the various parameters. Problems related to data acquisition and processing, but also to the algorithms and to the software implementation, are identified and discussed. The resulting point cloud has been georeferenced, i.e. referenced to a local Cartesian coordinate system through minimum geodetic measurements, and subsequently the surface, i.e. the mesh, was created and finally the three dimensional textured model was produced. In this way, the geometric documentation drawings, i.e. the horizontal section plans, the vertical section plans and the elevations, which include orthophotos of the monument, can be produced at will from that 3D model, for the complete geometric documentation. Finally, a 3D tour of the Tower of the Winds has also been created for a more integrated view of the monument. The results are presented and are evaluated for their completeness, efficiency, accuracy and ease of production.

  16. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band utilizing the Watson model and the human visual system after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion between the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity are more than 0.92 under five types of distortion, namely Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  17. Reconstruction of 3D ultrasound images based on Cyclic Regularized Savitzky-Golay filters.

    Science.gov (United States)

    Toonkum, Pollakrit; Suwanwela, Nijasri C; Chinrungrueng, Chedsada

    2011-02-01

    This paper presents a new three-dimensional (3D) ultrasound reconstruction algorithm for generation of 3D images from a series of two-dimensional (2D) B-scans acquired in the mechanical linear scanning framework. Unlike most existing 3D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the Cyclic Regularized Savitzky-Golay (CRSG) filter, is a new variant of the Savitzky-Golay (SG) smoothing filter. The CRSG filter improves upon the original SG filter in two respects: first, the cyclic indicator function has been incorporated into the least squares cost function to enable the CRSG filter to approximate nonuniformly spaced data of the unobserved image intensities contained in unfilled voxels and to reduce speckle noise of the observed image intensities contained in filled voxels. Second, a regularization function has been added to the least squares cost function as a mechanism to balance the degree of speckle reduction against the degree of detail preservation. The CRSG filter has been evaluated and compared with the Voxel Nearest-Neighbor (VNN) interpolation post-processed by the Adaptive Speckle Reduction (ASR) filter, the VNN interpolation post-processed by the Adaptive Weighted Median (AWM) filter, the Distance-Weighted (DW) interpolation, and the Adaptive Distance-Weighted (ADW) interpolation, on reconstructing a synthetic 3D spherical image and a clinical 3D carotid artery bifurcation in the mechanical linear scanning framework. This preliminary evaluation indicates that the CRSG filter is more effective in both speckle reduction and geometric reconstruction of 3D ultrasound images than the other methods. Copyright © 2010 Elsevier B.V. All rights reserved.
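
    For orientation only (this shows the baseline Savitzky-Golay smoothing that the CRSG filter extends; it includes neither the cyclic indicator function nor the regularization term), scipy's implementation can be applied to one line of voxel intensities:

        import numpy as np
        from scipy.signal import savgol_filter

        line = np.random.rand(256)                                   # one scan line of voxel intensities
        smoothed = savgol_filter(line, window_length=9, polyorder=2) # local quadratic least-squares fit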

  18. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    Science.gov (United States)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomographic (CT) images a great aid to research in biomedical engineering. To keep up with current Internet-based technology, where 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be generally applicable to medical visualization. Furthermore, this project is intended as a future part of the PACS system our lab is working on. Therefore, this system uses the 3D file format VRML2.0, which is used through the Web interface for manipulating 3D models. In this program we implemented the generation and modification of triangular isosurface meshes with the marching cubes algorithm. We then used OpenGL and MFC techniques to render the isosurface and to manipulate the voxel data. This software provides adequate visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or for data preprocessing.
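
    As a side note (an illustrative sketch with scikit-image, not the OpenGL/MFC implementation described above), the marching cubes step that extracts a triangular isosurface mesh from a volume looks like this:

        import numpy as np
        from skimage import measure

        x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
        volume = x ** 2 + y ** 2 + z ** 2                       # synthetic "CT" volume
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
        print(verts.shape, faces.shape)                         # triangle mesh of the isosurface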

  19. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points in the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of each stereo model can be computed automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The ...
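
    As an aside (a bare nearest-neighbour/SVD sketch of the ICP idea mentioned above, not the software used for this documentation workflow), partial point clouds can be rigidly aligned as follows:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=20):
            """Rigidly align src (N, 3) to dst (M, 3); returns the transformed src."""
            cur = src.copy()
            tree = cKDTree(dst)
            for _ in range(iters):
                _, idx = tree.query(cur)                      # closest points in dst
                p, q = cur - cur.mean(0), dst[idx] - dst[idx].mean(0)
                U, _, Vt = np.linalg.svd(p.T @ q)             # optimal rotation (Kabsch)
                if np.linalg.det(Vt.T @ U.T) < 0:             # avoid reflections
                    Vt[-1] *= -1
                R = Vt.T @ U.T
                t = dst[idx].mean(0) - cur.mean(0) @ R.T
                cur = cur @ R.T + t
            return cur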

  20. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  1. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  2. Model-based segmentation and quantification of subcellular structures in 2D and 3D fluorescent microscopy images

    Science.gov (United States)

    Wörz, Stefan; Heinzer, Stephan; Weiss, Matthias; Rohr, Karl

    2008-03-01

    We introduce a model-based approach for segmenting and quantifying GFP-tagged subcellular structures of the Golgi apparatus in 2D and 3D microscopy images. The approach is based on 2D and 3D intensity models, which are directly fitted to an image within 2D circular or 3D spherical regions-of-interest (ROIs). We also propose automatic approaches for the detection of candidates, for the initialization of the model parameters, and for adapting the size of the ROI used for model fitting. Based on the fitting results, we determine statistical information about the spatial distribution and the total amount of intensity (fluorescence) of the subcellular structures. We demonstrate the applicability of our new approach based on 2D and 3D microscopy images.
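
    As a loose illustration of model fitting within a region of interest (an assumed 2D Gaussian-plus-background model fitted over a square patch with scipy; the authors' specific 2D/3D intensity models and ROI adaptation are not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, a, x0, y0, s, b):
            x, y = coords
            return (a * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + b).ravel()

        y, x = np.mgrid[0:21, 0:21]                                   # small ROI around a candidate spot
        img = gauss2d((x, y), 100.0, 10.0, 9.0, 2.5, 5.0).reshape(21, 21)
        img = img + np.random.normal(0.0, 1.0, img.shape)             # synthetic noisy observation
        p0 = (img.max() - img.min(), 10.0, 10.0, 3.0, img.min())      # initial parameter guess
        popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
        print(popt)                                                   # amplitude, centre, width, background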

  3. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods which can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method provided by the Vision Assistant program, which is a part of LabVIEW, was chosen.

  4. Terahertz imaging system based on bessel beams via 3D printed axicons at 100GHz

    Science.gov (United States)

    Liu, Changming; Wei, Xuli; Zhang, Zhongqi; Wang, Kejia; Yang, Zhenggang; Liu, Jinsong

    2014-11-01

    Terahertz (THz) imaging technology shows great advantages in nondestructive detection (NDT), since many optically opaque materials are transparent to THz waves. In this paper, we design and fabricate dielectric axicons by 3D printing technology to generate zeroth-order Bessel beams. We further present an all-electric THz imaging system using the generated Bessel beams at 100 GHz. Resolution targets made of printed circuit board are imaged, and the results clearly show the extended depth of focus of the Bessel beam, indicating the promise of Bessel beams for THz NDT.

  5. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    Science.gov (United States)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (Quad Core CPU) and a TMS320C6678 DSP from Texas Instruments.

  6. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2009-01-01

    Full Text Available Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed in this paper. This method divides both the projection data and the reconstructed image volume into subsets according to geometric symmetries in the circular cone-beam projection layout, and a fast reconstruction for large data volumes can be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulation data and real micro-CT data. Our results indicate that the GPU implementation can maintain the original precision and speed up the reconstruction process by 110–120 times for a circular cone-beam scan, as compared to a traditional CPU implementation.

  7. REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2015-08-01

    Full Text Available Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) are evolving as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction strategies are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  8. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    Science.gov (United States)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) are evolving as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction strategies are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  9. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine.

    Science.gov (United States)

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world, with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were scanned using a 64-channel multidetector. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group), and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire which was administered after using the simulator. The group that was trained using the AR simulator was more proficient at IV injection technique in real dogs than the control group (P ≤ 0.01). The students agreed that they learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education.

  10. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation, with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integral pixel search. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand for finger stretching exercises, which indicated great potential for tracking muscle and skin movements.

  11. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    Science.gov (United States)

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for stereoscopic image quality assessment (IQA) aim to estimate the quality of 3D images in a manner that agrees with human judgments. Modern stereoscopic IQA algorithms often apply 2D IQA algorithms to stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. The 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms.

  12. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging

    OpenAIRE

    Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami

    2016-01-01

    Study Design. A cadaveric laboratory study. Objective. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Summary of Background Data. Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, with...

  13. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz.

    Science.gov (United States)

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang

    2016-11-04

    The existing techniques for simultaneous encryption and compression of images refer to lossy compression. Their reconstruction performance does not meet the accuracy required for medical images, because most of them are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS, with measurement matrices encrypted by a discrete 3D Lorenz system. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristics of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and reliability of this scheme.
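
    For illustration (a sketch of generating a chaotic key stream from the 3D Lorenz system referenced above, with the classic parameter values assumed; how the stream encrypts the measurement matrices in the paper is not reproduced):

        import numpy as np
        from scipy.integrate import solve_ivp

        def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=np.linspace(0, 50, 5000))
        key_stream = (np.abs(sol.y.ravel()) * 1e6).astype(np.uint64) % 256   # toy byte-valued key stream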

  14. Laser Based 3D Volumetric Display System

    Science.gov (United States)

    1993-03-01

    Laser generated 3D volumetric images are displayed on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye. (Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke. Referenced work: "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas Instruments.)

  15. 3D Imaging of Rapidly Spinning Space Targets Based on a Factorization Method.

    Science.gov (United States)

    Bi, Yanxian; Wei, Shaoming; Wang, Jun; Mao, Shiyi

    2017-02-14

    Three-dimensional (3D) imaging of space targets can provide crucial information about target shape and size, which is a significant support for automatic target classification and recognition. In this paper, a new 3D imaging method for rapidly spinning space targets based on a factorization method is proposed. Firstly, after translational compensation, the scattering centers' two-dimensional (2D) range and range-rate sequences induced by the target spinning are extracted using a high resolution spectral estimation technique. Secondly, measurement data association is implemented to obtain the scattering center trajectory matrix by using a range-Doppler tracker. Then, we use an initial coarse angular velocity to generate the projection matrix, which consists of the scattering centers' range and cross-range, and a factorization method is applied iteratively to the projection matrix to estimate the accurate angular velocity. Finally, we use the accurately estimated spinning angular velocity to rescale the projection matrix, and the well-scaled target 3D geometry is reconstructed. Compared to previous methods in the literature, ambiguity in the spatial axes can be removed by this method. Simulation results have demonstrated the effectiveness and robustness of the proposed method.
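
    As an aside (a minimal Tomasi-Kanade style sketch of the rank-3 factorization step behind such methods; the paper's data association, angular-velocity estimation and rescaling are omitted), the projection matrix can be factored by a truncated SVD into motion and shape components, up to an affine ambiguity:

        import numpy as np

        def factorize(W):
            """W: (2F, P) matrix of range / cross-range measurements over F observations."""
            Wc = W - W.mean(axis=1, keepdims=True)        # centre each measurement row
            U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
            M = U[:, :3] * np.sqrt(s[:3])                 # motion (projection) component
            S = np.sqrt(s[:3])[:, None] * Vt[:3]          # 3D shape component
            return M, S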

  16. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    Science.gov (United States)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate some relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distribution as well as the local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.

  17. 3D Myocardial Contraction Imaging Based on Dynamic Grid Interpolation: Theory and Simulation Analysis

    Science.gov (United States)

    Bu, Shuhui; Shiina, Tsuyoshi; Yamakawa, Makoto; Takizawa, Hotaka

    Accurate assessment of local myocardial contraction is important for diagnosis of ischemic heart disease, because decreases of myocardial motion often appear in the early stages of the disease. Three-dimensional (3-D) assessment of the stiffness distribution is required for accurate diagnosis of ischemic heart disease. Since myocardium motion occurs radially within the left ventricle wall and the ultrasound beam propagates axially, conventional approaches, such as tissue Doppler imaging and strain-rate imaging techniques, cannot provide us with enough quantitative information about local myocardial contraction. In order to resolve this problem, we propose a novel myocardial contraction imaging system which utilizes the weighted phase gradient method, the extended combined autocorrelation method, and the dynamic grid interpolation (DGI) method. From the simulation results, we conclude that the strain image's accuracy and contrast have been improved by the proposed method.

  18. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation can achieve high dynamic range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging of highly and lowly reflective surfaces.

  19. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  20. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Thanh N., E-mail: thanh.tran@merck.com [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Nguyen, Thanh T.; Willemsz, Tofan A. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Pharmaceutical Sciences and Clinical Supplies, Merck MSD, PO Box 20, 5340 BH Oss (Netherlands); Kessel, Gijs van [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Frijlink, Henderik W. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Voort Maarschalk, Kees van der [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Competence Center Process Technology, Purac Biochem, Gorinchem (Netherlands)

    2012-05-06

    Highlights: • We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classified multivariate images. • The algorithm takes into account the coordinate system of the image data to improve computational performance. • The algorithm solves the instability problem in boundary detection of the original DBSCAN. • The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) by using the coordinate system of the image data. Secondly, the revised algorithm solves the instability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can consist of ordinary 3D images or, more generally, of the classification results of other types of image data, e.g. a multivariate image.
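
    As a small illustration (standard scikit-learn DBSCAN applied to the foreground voxel coordinates of a synthetic volume; the revised, image-coordinate-aware variant described above is not reproduced):

        import numpy as np
        from sklearn.cluster import DBSCAN

        volume = np.zeros((30, 30, 30), dtype=bool)
        volume[5:10, 5:10, 5:10] = True                       # two synthetic foreground blobs
        volume[20:26, 18:24, 15:21] = True
        coords = np.argwhere(volume)                          # (N, 3) voxel coordinates
        labels = DBSCAN(eps=1.8, min_samples=5).fit_predict(coords)   # eps > sqrt(3) links 26-neighbours
        print(np.unique(labels))                              # cluster ids (-1 = noise)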

  1. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on the Original Sensor Pattern Noise (OSPN) that is intrinsic to its X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM-based classifier so as to discriminate between acquisition systems. Experiments conducted on images from 15 different CT scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of a CT image with a detection rate of at least 94% and that it achieves better performance than the Sensor Pattern Noise (SPN) based strategy proposed for consumer camera devices.

  2. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
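    The forward-projection part of such a correction amounts to blurring the activity image with a material-dependent kernel before projecting it. A minimal sketch follows; the isotropic Gaussian per material label is a crude stand-in for the Monte Carlo range profiles used in the paper, and `project` stands for any forward projector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_project_with_range(activity, material_map, sigmas, project):
    """Forward projection with positron-range blurring: the activity is blurred
    with a material-dependent kernel (here an isotropic Gaussian per material,
    e.g. sigmas = {0: sigma_lung, 1: sigma_soft_tissue}) before being projected."""
    blurred = np.zeros_like(activity, dtype=float)
    for label, sigma in sigmas.items():
        mask = material_map == label
        # blur the activity restricted to this material and keep the result there
        blurred[mask] = gaussian_filter(activity * mask, sigma)[mask]
    return project(blurred)
```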

  3. Measurement of 3-D motion parameters of jacket launch based on stereo image sequences

    Institute of Scientific and Technical Information of China (English)

    HU Zhi-ping; OU Zong-ying; LIN Yan; LI Yun-feng

    2008-01-01

    To make sure that the process of jacket launch occurs in a semi-controlled manner, this paper deals with the measurement of kinematic parameters of jacket launch using stereo vision and motion analysis. The system captured stereo image sequences with two separate CCD cameras and then reconstructed the 3D coordinates of the feature points to analyze the jacket launch motion. The feasibility of combining stereo vision and motion analysis for this measurement was examined. Experimental results obtained with a scale model of the jacket confirm the theoretical data.

  4. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting pairs of detectors. As a typical PET tomograph contains a multitude of detectors, there are many possible detector pairs that contribute to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques, a stage reached already twenty years ago with the advent of powerful computing processors. However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
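    The MLEM update referred to in the title has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below shows it for a dense system matrix; in a real tomograph the forward and back projections are computed on the fly rather than stored, and the matrix and iteration count here are illustrative.

```python
import numpy as np

def mlem(system_matrix, sinogram, n_iter=50):
    """Basic MLEM iteration.  `system_matrix` (n_lor x n_voxels) holds the
    probability that a decay in a voxel is detected along a given line of response;
    `sinogram` holds the measured counts per LOR."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(sinogram, dtype=float)
    x = np.ones(A.shape[1])                      # start from a uniform image
    sensitivity = A.T @ np.ones(A.shape[0])      # A^T 1, per-voxel normalisation
    for _ in range(n_iter):
        expected = A @ x                         # forward projection
        ratio = np.divide(y, expected, out=np.zeros_like(y), where=expected > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```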

  5. A LabVIEW based user-friendly nano-CT image alignment and 3D reconstruction platform

    CERN Document Server

    Wang, Shenghao; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray nanometer computed tomography (nano-CT) offers applications and opportunities in many scientific research and industrial areas. Here we present a user-friendly and fast LabVIEW-based package that, after acquisition of the raw projection images, runs a procedure to obtain the inner structure of the sample under analysis. First, a reliable image alignment procedure corrects possible misalignments among the image series due to mechanical errors, thermal expansion and other external contributions; then a novel fast parallel-beam 3D reconstruction performs the tomographic reconstruction. The remarkably improved reconstruction after image calibration confirms the fundamental role of the image alignment procedure. It minimizes the blurring and additional streaking artifacts present in a reconstructed slice that cause loss of information and spurious structures in the observed material. The nano-CT image alignment and 3D reconstruction LabVIEW package significantly reduces the data processing cycle, making faster and easier th...

  6. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    Science.gov (United States)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts from state estimation, where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely, the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from an US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of the 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm, depending on the size of deformation.
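    The "Kalman-like filtering" step can be pictured with a plain linear Kalman predict/update cycle. This is a generic sketch, not the paper's filter: in their setting the state would hold the FE deformation parameters and the measurement would be derived from the MIND-based image error.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter with state x, covariance P,
    dynamics F, process noise Q, measurement model H, measurement noise R and
    observation z."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```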

  7. Web-based interactive visualization of 3D video mosaics using X3D standard

    Institute of Scientific and Technical Information of China (English)

    CHON Jaechoon; LEE Yang-Won; SHIBASAKI Ryosuke

    2006-01-01

    We present a method of 3D image mosaicing for real 3D representation of roadside buildings, and implement a Web-based interactive visualization environment for the 3D video mosaics created by 3D image mosaicing. The 3D image mosaicing technique developed in our previous work is a very powerful method for creating textured 3D-GIS data without the excessive data processing required by laser or stereo systems. For Web-based open access to the 3D video mosaics, we build an interactive visualization environment using X3D, the emerging standard for Web 3D. We conduct the data preprocessing for 3D video mosaics and the X3D modeling for textured 3D data. The data preprocessing includes the conversion of each frame of the 3D video mosaics into concatenated image files that can be hyperlinked on the Web. The X3D modeling handles the representation of concatenated images using the necessary X3D nodes. By employing X3D as the data format for 3D image mosaics, the real 3D representation of roadside buildings is extended to the Web and to mobile service systems.

  8. Heat Equation to 3D Image Segmentation

    Directory of Open Access Journals (Sweden)

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach, capable of 3D image segmentation and objects' surface reconstruction. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions, each containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. Then the surface is evolved by the geometric heat differential equation toward the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability (convergence) condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
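    For orientation, the forward-difference building block of such a scheme is the explicit step of the heat equation, u_t = Δu. The sketch below shows the plain 3D version on a voxel grid (with periodic boundaries via `np.roll`); the paper's algorithm additionally evolves a surface under a geometric heat flow with a penalty term, which is not reproduced here.

```python
import numpy as np

def heat_step_3d(u, dt=0.1, h=1.0):
    """One explicit forward-difference step of the 3D heat equation u_t = laplacian(u).
    Stability of the explicit scheme requires roughly dt <= h**2 / 6 in 3D."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) +
           np.roll(u, 1, 2) + np.roll(u, -1, 2) - 6.0 * u) / h**2
    return u + dt * lap
```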

  9. 3-D imaging and quantification of graupel porosity by synchrotron-based micro-tomography

    Directory of Open Access Journals (Sweden)

    F. Enzmann

    2011-10-01

    Full Text Available The air bubble structure is an important parameter for determining the radiation properties of graupel and hailstones. For 3-D imaging of this structure at micron resolution, a cryo-stage was developed. This stage was used at the tomography beamline of the Swiss Light Source (SLS) synchrotron facility. The cryo-stage setup provides for the first time 3-D data on the individual pore morphology of ice particles down to infrared-wavelength resolution. In the present study, both sub-mm size natural ice particles and artificial ice particles rimed in a wind tunnel were investigated. In the natural rimed ice particles, Y-shaped air-filled closed pores were found. When kept for half an hour at −8 °C, this morphology transformed into smaller and more rounded voids well known from the literature. Therefore, these round structures seem to represent an artificial rather than in situ pore structure, in contrast to the Y-shaped structures observed in the natural ice particles. Hence, for morphological studies on natural ice samples, special care must be taken to minimize any thermal cycling between sampling and measurement, with the least artifact production at liquid nitrogen temperatures.

  10. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging

    Science.gov (United States)

    Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami

    2016-01-01

    Study Design. A cadaveric laboratory study. Objective. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Summary of Background Data. Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine. Methods. ARSN encompasses a surgical table, a motorized flat-detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and the free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. Results. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P < 0.05), with significant increases in perfectly placed screws (51% vs. 30%, P < 0.05) and reductions in breaches beyond 4 mm (2% vs. 25%, P < 0.05). All morphometric dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when screws were placed with the free-hand method. Conclusion. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement. Level of Evidence: N/A PMID:27513166

  11. 3D Curvelet-Based Segmentation and Quantification of Drusen in Optical Coherence Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Esmaeili

    2017-01-01

    Full Text Available Spectral-Domain Optical Coherence Tomography (SD-OCT) is a widely used interferometric diagnostic technique in ophthalmology that provides novel in vivo information on depth-resolved inner and outer retinal structures. This imaging modality can assist clinicians in monitoring the progression of Age-related Macular Degeneration (AMD) by providing high-resolution visualization of drusen. Quantitative tools for assessing drusen volume that are indicative of AMD progression may lead to appropriate metrics for selecting treatment protocols. To address this need, a fully automated algorithm was developed to segment drusen area and volume from SD-OCT images. The proposed algorithm consists of three parts: (1) preprocessing, which includes creating a binary mask and removing the possibly highly reflective posterior hyaloid, used for accurate detection of the inner segment/outer segment (IS/OS) junction layer and Bruch's membrane (BM) retinal layers; (2) coarse segmentation, in which the 3D curvelet transform and graph theory are employed to obtain the candidate drusenoid regions; (3) fine segmentation, in which morphological operators are used to remove falsely extracted elongated structures and obtain the refined segmentation results. The proposed method was evaluated on 20 publicly available volumetric scans acquired with the Bioptigen spectral-domain ophthalmic imaging system. The average true positive and false positive volume fractions (TPVF and FPVF) for the segmentation of drusenoid regions were found to be 89.15% ± 3.76% and 0.17% ± 0.18%, respectively.

  12. Cloud-Based Geospatial 3D Image Spaces—A Powerful Urban Model for the Smart City

    Directory of Open Access Journals (Sweden)

    Stephan Nebiker

    2015-10-01

    Full Text Available In this paper, we introduce the concept and an implementation of geospatial 3D image spaces as a new type of native urban model. 3D image spaces are based on collections of georeferenced RGB-D imagery. This imagery is typically acquired using multi-view stereo mobile mapping systems capturing dense sequences of street-level imagery. Ideally, image depth information is derived using dense image matching. This delivers a very dense depth representation and ensures the spatial and temporal coherence of radiometric and depth data. The result is a high-definition WYSIWYG ("what you see is what you get") urban model, which is intuitive to interpret and easy to interact with, and which provides powerful augmentation and 3D measuring capabilities. Furthermore, we present a scalable cloud-based framework for generating 3D image spaces of entire cities or states and a client architecture for their web-based exploitation. The model and the framework strongly support the smart city notion of efficiently connecting the urban environment and its processes with experts and citizens alike. In the paper we particularly investigate quality aspects of the urban model, namely the obtainable georeferencing accuracy and the quality of the depth map extraction. We show that our image-based georeferencing approach is capable of improving the original direct georeferencing accuracy by an order of magnitude and that the presented new multi-image matching approach provides high accuracy along with significantly improved completeness of the depth maps.

  13. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data.

    Science.gov (United States)

    Savva, Antonis D; Economopoulos, Theodore L; Matsopoulos, George K

    2016-02-01

    Spatial alignment of Computed Tomography (CT) data sets is often required in numerous medical applications and it is usually achieved by applying conventional exhaustive registration techniques, which are mainly based on the intensity of the subject data sets. Those techniques consider the full range of data points composing the data, thus negatively affecting the required processing time. Alternatively, alignment can be performed using the correspondence of extracted data points from both sets. Moreover, various geometrical characteristics of those data points can be used, instead of their chromatic properties, for uniquely characterizing each point, by forming a specific geometrical descriptor. This paper presents a comparative study reviewing variations of geometry-based, descriptor-oriented registration techniques, as well as conventional, exhaustive, intensity-based methods for aligning three-dimensional (3D) CT data pairs. In this context, three general image registration frameworks were examined: a geometry-based methodology featuring three distinct geometrical descriptors, an intensity-based methodology using three different similarity metrics, as well as the commonly used Iterative Closest Point algorithm. All techniques were applied on a total of thirty 3D CT data pairs with both known and unknown initial spatial differences. After an extensive qualitative and quantitative assessment, it was concluded that the proposed geometry-based registration framework performed similarly to the examined exhaustive registration techniques. In addition, geometry-based methods dramatically improved processing time over conventional exhaustive registration.

  14. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    Science.gov (United States)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by combining the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods on the basis of historical investigation and the analysis of the data acquired.

  15. IMAGE-BASED VIRTUAL TOURS AND 3D MODELING OF PAST AND CURRENT AGES FOR THE ENHANCEMENT OF ARCHAEOLOGICAL PARKS: THE VISUALVERSILIA 3D PROJECT

    Directory of Open Access Journals (Sweden)

    C. Castagnetti

    2017-05-01

    Full Text Available The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by combining the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods on the basis of historical investigation and the analysis of the data acquired.

  16. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masami [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan)], E-mail: msm-ando@rs.noda.tus.ac.jp; Bando, Hiroko [Tsukuba University (Japan); Tokiko, Endo; Ichihara, Shu [Nagoya Medical Center (Japan); Hashimoto, Eiko [GUAS (Japan); Hyodo, Kazuyuki [KEK (Japan); Kunisada, Toshiyuki [Okayama University (Japan); Li Gang [BSRF (China); Maksimenko, Anton [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan); KEK (Japan); Mori, Kensaku [Nagoya University (Japan); Shimao, Daisuke [IPU (Japan); Sugiyama, Hiroshi [KEK (Japan); Yuasa, Tetsuya [Yamagata University (Japan); Ueno, Ei [Tsukuba University (Japan)

    2008-12-15

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis based on refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcifications of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed by the use of an algorithm newly developed by our group. It shows the distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of the continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of the clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and the skin radiation dose.

  17. DiAna, an ImageJ tool for object-based 3D co-localization and distance analysis

    OpenAIRE

    2016-01-01

    We present a new plugin for ImageJ called DiAna, for Distance Analysis, which comes with a user-friendly interface. DiAna proposes robust and accurate 3D segmentation for object extraction. The plugin performs automated object-based co-localization and distance analysis. DiAna offers an in-depth analysis of co-localization between objects and retrieves 3D measurements including co-localizing volumes and surfaces of contact. It also computes the distribution of distance...

  18. Correlation based 3-D segmentation of the left ventricle in pediatric echocardiographic images using radio-frequency data.

    Science.gov (United States)

    Nillesen, Maartje M; Lopata, Richard G P; Huisman, H J; Thijssen, Johan M; Kapusta, Livia; de Korte, Chris L

    2011-09-01

    Clinical diagnosis of heart disease might be substantially supported by automated segmentation of the endocardial surface in three-dimensional (3-D) echographic images. Because of the poor echogenicity contrast between blood and myocardial tissue in some regions and the inherent speckle noise, automated analysis of these images is challenging. A priori knowledge of the shape of the heart cannot always be relied on; e.g., in children with congenital heart disease, segmentation should be based solely on the echo features. The objective of this study was to investigate the merit of using temporal cross-correlation of radio-frequency (RF) data for automated segmentation of 3-D echocardiographic images. Maximum temporal cross-correlation (MCC) values were determined locally from the RF data using an iterative 3-D technique. MCC values, as well as a combination of MCC values and adaptively filtered, demodulated RF data, were used as an additional, external force in a deformable model approach to segment the endocardial surface and were tested against manually segmented surfaces. Results on 3-D full-volume images (Philips, iE33) of 10 healthy children demonstrate that MCC values derived from the RF signal yield a useful parameter to distinguish between blood and myocardium in regions with low echogenicity contrast, and incorporation of MCC improves the segmentation results significantly. Further investigation of the MCC over the whole cardiac cycle is required to exploit its full benefit for automated segmentation.
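    The maximum temporal cross-correlation value exploited here can be illustrated for a single 1-D RF line. The sketch below is a simplification under assumed window and lag parameters: coherent speckle in myocardium keeps the correlation high between consecutive frames, while fast-moving blood decorrelates.

```python
import numpy as np

def max_temporal_correlation(rf_a, rf_b, start, window=32, max_lag=8):
    """Maximum normalised cross-correlation between a segment of one RF line
    (frame a) and shifted segments of the same line in the next frame (frame b)."""
    seg_a = rf_a[start:start + window].astype(float)
    seg_a = (seg_a - seg_a.mean()) / (seg_a.std() + 1e-12)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        s = start + lag
        if s < 0 or s + window > len(rf_b):
            continue
        seg_b = rf_b[s:s + window].astype(float)
        seg_b = (seg_b - seg_b.mean()) / (seg_b.std() + 1e-12)
        best = max(best, float(np.mean(seg_a * seg_b)))
    return best
```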

  19. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  20. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  1. Dixon imaging-based partial volume correction improves quantification of choline detected by breast 3D-MRSI

    Energy Technology Data Exchange (ETDEWEB)

    Minarikova, Lenka; Gruber, Stephan; Bogner, Wolfgang; Trattnig, Siegfried; Chmelik, Marek [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, MR Center of Excellence, Vienna (Austria); Pinker-Domenig, Katja; Baltzer, Pascal A.T.; Helbich, Thomas H. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria)

    2014-09-14

    Our aim was to develop a partial volume (PV) correction method for choline (Cho) signals detected by breast 3D magnetic resonance spectroscopic imaging (3D-MRSI), using information from water/fat Dixon MRI. Following institutional review board approval, five breast cancer patients were measured at 3 T. 3D-MRSI (1 cm³ resolution, duration ~11 min) and Dixon MRI (1 mm³, ~2 min) were measured in vivo and in phantoms. Glandular/lesion tissue was segmented from water/fat Dixon MRI and transformed to match the resolution of 3D-MRSI. The resulting PV values were used to correct Cho signals. Our method was validated on a two-compartment phantom (choline/water and oil). PV values were correlated with the spectroscopic water signal. Cho signal variability, caused by partial water/fat content, was tested in 3D-MRSI voxels located in/near malignant lesions. Phantom measurements showed good correlation (r = 0.99) with quantified 3D-MRSI water signals, and better homogeneity after correction. The dependence of the quantified Cho signal on the water/fat voxel composition was significantly (p < 0.05) reduced using Dixon MRI-based PV correction, compared to the original uncorrected data (1.60-fold to 3.12-fold) in patients. The proposed method allows quantification of the Cho signal in glandular/lesion tissue independent of water/fat composition in breast 3D-MRSI. This can improve the reproducibility of breast 3D-MRSI, which is particularly important for therapy monitoring. (orig.)

  2. Registration of 2D to 3D joint images using phase-based mutual information

    Science.gov (United States)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the area of image-guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains only a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating it into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best, consistently producing the lowest errors.
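    Whatever quantity is fed into it (raw intensity, gradient magnitude, or local phase maps), the MI similarity measure is computed from a joint histogram. A minimal histogram-based estimator is sketched below; the bin count is an assumed parameter and the phase extraction via the complex wavelet transform is not shown.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information from the joint histogram of two images of equal size.
    The same estimator can be applied to intensities, gradients, or phase maps."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)           # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```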

  3. [Hyperspectral image classification based on 3-D gabor filter and support vector machines].

    Science.gov (United States)

    Feng, Xiao; Xiao, Peng-feng; Li, Qi; Liu, Xiao-xi; Wu, Xiao-cui

    2014-08-01

    A three-dimensional Gabor filter was developed for the classification of hyperspectral remote sensing images. The method is based on the characteristics of hyperspectral imagery and the principle of texture extraction with 2-D Gabor filters. The three-dimensional Gabor filter is able to filter all the bands of a hyperspectral image simultaneously, capturing the specific responses at different scales, orientations, and spectral-dependent properties from the enormous amount of image information, which greatly reduces the time spent on hyperspectral image texture extraction and solves the overlay difficulties of the filtered spectra. Using the designed three-dimensional Gabor filters at different scales and orientations, a Hyperion image covering a typical area of Qi Lian Mountain was processed with all bands to obtain 26 Gabor texture features, and the spatial differences of the Gabor texture features corresponding to each land type were analyzed. On the basis of automatic subspace separation, the dimensionality of the hyperspectral image was reduced by the band index (BI) method, which provides different band combinations for classification, in order to search for the optimal magnitude of dimension reduction. Adding three-dimensional Gabor texture features successively according to their discrimination of the given land types, supervised classification was carried out with a support vector machine (SVM) classifier. It is shown that the method using three-dimensional Gabor texture features and BI band selection based on automatic subspace separation for hyperspectral image classification can not only reduce dimensionality but also improve the classification accuracy and efficiency.
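    A 3-D Gabor kernel is simply a Gaussian envelope modulated by a plane wave, with the third axis running along the spectral dimension of the cube. The sketch below builds the real part of one such kernel; the size, bandwidth, frequency and orientation values are illustrative, not the paper's parameters.

```python
import numpy as np

def gabor_kernel_3d(size=15, sigma=3.0, freq=0.2, orientation=(1.0, 0.0, 0.0)):
    """Real part of a 3D Gabor kernel: Gaussian envelope times a cosine plane wave
    of spatial frequency `freq` along the unit vector `orientation`."""
    u = np.asarray(orientation, dtype=float)
    u /= np.linalg.norm(u)
    r = np.arange(size) - size // 2
    z, y, x = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    phase = 2.0 * np.pi * freq * (u[0] * z + u[1] * y + u[2] * x)
    return envelope * np.cos(phase)

# The whole hyperspectral cube can then be filtered in a single pass, e.g. with
# scipy.ndimage.convolve(cube, gabor_kernel_3d(), mode="nearest").
```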

  4. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Hurwitz, M; Lewis, J [Brigham and Women' s Hospital, Dana-Farber Cancer Center, Harvard Medical School, Boston, MA (United States); Mishra, P [Varian Medical Systems, Palo Alto, CA (United States)

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization error and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15 respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
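    The PCA motion model itself is compact: each phase's displacement vector field (DVF) becomes one sample, and any breathing state is approximated as the mean DVF plus a weighted sum of a few eigenvectors. The sketch below shows that step only; the deformable registration and the projection-matching optimization of the weights are not reproduced, and the component count is an assumption.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=3):
    """PCA motion model from displacement vector fields (one DVF per 4D-CBCT phase,
    each flattened to a row).  Returns the mean DVF and the leading eigenvectors."""
    X = np.asarray([d.ravel() for d in dvfs], dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def synthesize_dvf(mean, components, weights, shape):
    """DVF for an arbitrary breathing state: mean + sum_k w_k * eigenvector_k."""
    return (mean + np.asarray(weights) @ components).reshape(shape)
```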

  5. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Full Text Available Three-dimensional (3D) imaging technology based on an antenna array is one of the most important 3D synthetic aperture radar (SAR) high-resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR, based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in the cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles at the same range resolution cell, which is different from conventional squint SAR. However, after theoretical analysis and formula derivation, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and the algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transforms and multiplications, and thus avoids interpolation, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  6. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  7. Proposed NRC portable target case for short-range triangulation-based 3D imaging systems characterization

    Science.gov (United States)

    Carrier, Benjamin; MacKinnon, David; Cournoyer, Luc; Beraldin, J.-Angelo

    2011-03-01

    The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and provide a clearly defined set of procedures for generating characteristic values using these artifacts. In its current version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3 meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based on procedures either already in use or on knowledge acquired from various tests carried out by the NRC. Geometric dimensioning and tolerancing (GD&T), a terminology well known in the industrial field, was used to define the set of tests. The following parameters of a system are characterized: dimensional properties, form properties, orientation properties, localization properties, profile properties, repeatability, intermediate precision, and reproducibility. A number of tests were performed in a specialized dimensional metrology laboratory to validate the capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison evaluation to validate its use in different laboratories.

  8. 3D mapping of buried underworld infrastructure using dynamic Bayesian network based multi-sensory image data fusion

    Science.gov (United States)

    Dutta, Ritaban; Cohn, Anthony G.; Muggleton, Jen M.

    2013-05-01

    The successful operation of buried infrastructure within urban environments is fundamental to the conservation of modern living standards. In this paper, a novel multi-sensor image fusion framework based on a dynamic Bayesian network is proposed and investigated for the automatic detection of buried underworld infrastructure. Experimental multi-sensor images were acquired for a known buried plastic water pipe using vibro-acoustic sensor-based location methods and a ground-penetrating radar imaging system. Computationally intelligent conventional image processing techniques were used to process the three types of sensory images. Independently extracted depth and location information about the target pipe from the different images was fused using the dynamic Bayesian network to predict the most probable location and depth of the pipe. The outcome of this study was very encouraging, as the approach was able to detect the target pipe with high accuracy compared with the currently existing pipe survey map. The approach was also applied successfully to produce a best probable 3D buried asset map.

  9. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalized correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
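    The Iterative Closest Point loop that drives the transform alternates nearest-neighbour matching with a closed-form transform update. The sketch below shows the plain rigid variant (Kabsch solution) on two contour point clouds; the paper drives an affine transform and runs the loop multithreaded, but the correspondence/update structure is the same.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, n_iter=30):
    """Rigid ICP on two point clouds (N x 3 arrays): match each source point to its
    nearest target point, then solve for the best rotation/translation."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)                 # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                      # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```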

  10. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    NARCIS (Netherlands)

    Mata Pavia, J.; Charbon, E.; Wolf, M.

    2011-01-01

    An imager for optical tomography was designed based on a detector with 128x128 single-photon pixels that included a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time-resolved measurements, a new contactless setup has been conceived in w

  11. Model-based 3D segmentation of the bones of joints in medical images

    Science.gov (United States)

    Liu, Jiamin; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey; Hirsch, Bruce E.; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A.

    2005-04-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. A model-based strategy is proposed in this paper wherein a rigid model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. In other images of the joint, this model is used to search for the same bone by minimizing an energy functional that utilizes both boundary- and region-based information. An evaluation of the method utilizing a total of 60 data sets of MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true-positive and false-positive volume fractions in the ranges 89-97% and 0.2-0.7%, respectively. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time, which makes it significantly more efficient than live wire, the only method currently available for the task.

  12. Toward a real-time simulation of ultrasound image sequences based on a 3-D set of moving scatterers.

    Science.gov (United States)

    Marion, Adrien; Vray, Didier

    2009-10-01

    Data simulation is an important research tool to evaluate algorithms. Two types of methods are currently used to simulate medical ultrasound data: those based on acoustic models and those based on convolution models. The simulation of ultrasound data sequences is very time-consuming. In addition, many applications require accounting for the out-of-plane motion induced by the 3-D displacement of scatterers. The purpose of this paper is to propose a model adapted to a fast simulation of ultrasonic data sequences with 3-D moving scatterers. Our approach is based on the convolution model. The scatterers are moved in a 3-D continuous medium between each pair of images and then projected onto the imaging plane before being convolved. This paper discusses the practical implementation of the convolution that can be performed directly or after a grid approximation. The grid approximation convolution is obviously faster than the direct convolution but generates errors resulting from the approximation to the grid's nodes. We provide the analytical expression of these errors and then define 2 intensity-based criteria to quantify them as a function of the spatial sampling. The simulation of an image requires less than 2 s with oversampling, thus reducing these errors. The simulation model is validated with first- and second-order statistics. The positions of the scatterers at each imaging time can be provided by a displacement model. An example applied to flow imaging is proposed. Several cases are used to show that this displacement model provides realistic data. It is validated with speckle tracking, a well-known motion estimator in ultrasound imaging.
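    The convolution model described here reduces, for one frame, to projecting the 3-D scatterers onto the imaging plane, accumulating their amplitudes on a grid, and convolving with a point spread function. The sketch below illustrates the direct grid-accumulation variant; the pixel size and the separable Gaussian-modulated PSF are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_frame(scatterers, amplitudes, img_shape=(256, 128), pixel=0.1):
    """Convolution-model simulation of one frame: 3-D scatterers (N x 3, columns
    lateral/elevation/axial) are projected onto the imaging plane (the elevation
    coordinate is dropped), accumulated on a pixel grid, and convolved with a
    Gaussian-modulated PSF."""
    grid = np.zeros(img_shape)
    ax = np.clip((scatterers[:, 2] / pixel).astype(int), 0, img_shape[0] - 1)   # axial
    lat = np.clip((scatterers[:, 0] / pixel).astype(int), 0, img_shape[1] - 1)  # lateral
    np.add.at(grid, (ax, lat), amplitudes)       # grid approximation of the scatterers

    # separable PSF: oscillating axial pulse x smooth lateral beam profile
    za = np.arange(-12, 13)
    axial = np.exp(-0.5 * (za / 4.0) ** 2) * np.cos(2 * np.pi * za / 6.0)
    zl = np.arange(-6, 7)
    lateral = np.exp(-0.5 * (zl / 3.0) ** 2)
    psf = np.outer(axial, lateral)

    rf = fftconvolve(grid, psf, mode="same")     # convolution with the PSF
    return np.abs(rf)                            # envelope as a crude B-mode image
```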

  13. Perception-based 3D tactile rendering from a single image for human skin examinations by dynamic touch.

    Science.gov (United States)

    Kim, K; Lee, S

    2015-05-01

    Diagnosis of skin conditions depends on the assessment of skin surface properties that are better represented by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties reported in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces by using a haptic device (Falcon) only. No visual cue was provided during the experiment. The results indicate that our system provides sufficient performance to render discernible tactile feedback for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purpose of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Science.gov (United States)

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process addressing misalignment problems among image series due to mechanical manufacturing errors, thermal expansion, and other external factors has been considered, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. We have obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after the image calibration; this confirms the fundamental role of the image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are otherwise present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the work of users faster and easier during experimental runs.

  15. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    Science.gov (United States)

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and in the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans, based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
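    The gamma evaluation mentioned here combines a distance-to-agreement and a dose-difference criterion into a single pass/fail index. The brute-force 2D sketch below illustrates the definition with assumed 3 mm / 3 % global criteria; the paper's GPU implementation vectorises this search over 3D dose grids.

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, spacing=1.0, dta=3.0, dd=0.03):
    """Global 2D gamma evaluation: for each reference point, search a small window
    of the evaluated dose for the minimum generalised distance; gamma <= 1 passes."""
    ny, nx = dose_ref.shape
    search = int(np.ceil(2 * dta / spacing))
    norm = dd * dose_ref.max()                   # global dose-difference criterion
    gamma = np.zeros_like(dose_ref, dtype=float)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - search), min(ny, j + search + 1)
            i0, i1 = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing ** 2
            diff2 = (dose_eval[j0:j1, i0:i1] - dose_ref[j, i]) ** 2
            gamma[j, i] = np.sqrt(np.min(dist2 / dta ** 2 + diff2 / norm ** 2))
    return gamma

# pass rate: np.mean(gamma_index(d_eval, d_ref) <= 1.0)
```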

  16. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    Science.gov (United States)

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) based on total variation regularization under a piecewise-constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data in 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into the computed tomography problem and decomposed into three independent subproblems by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. In application-oriented research, theoretical and technical optimization of this algorithm should be investigated carefully in terms of both computational efficiency and high resolution.

  17. A new navigation approach of terrain contour matching based on 3-D terrain reconstruction from onboard image sequence

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This article presents a passive navigation method of terrain contour matching that reconstructs the 3-D terrain from an image sequence acquired by the onboard camera. To achieve automated, simultaneous processing of the image sequence for navigation, a correspondence registration method based on control-point tracking is proposed, which tracks sparse control points through the whole image sequence and uses them as correspondences in the relative geometry solution. In addition, a key-frame selection method based on the image overlap ratios and intersection angles is explored, and the resulting requirements for the camera system configuration are provided. The proposed method also includes an optimal local homography estimation algorithm based on the control points, which helps to correctly predict the points to be matched and their corresponding velocities. The real-time 3-D terrain of the trajectory reconstructed in this way is then matched against the reference terrain map, and the matching result provides navigation information. A digital simulation experiment and an experiment based on real images have verified the proposed method.

  18. A Chaos-based Image Encryption Scheme Using 3D Skew Tent Map and Coupled Map Lattice

    Directory of Open Access Journals (Sweden)

    Ruisong Ye

    2012-02-01

    Full Text Available This paper proposes a chaos-based image encryption scheme in which a 3D skew tent map with three control parameters is used to generate chaotic orbits that scramble the pixel positions, while a coupled map lattice is employed to yield random gray-value sequences that change the gray values and thereby enhance security. Experimental results, together with detailed analysis, demonstrate that the proposed image encryption scheme possesses a key space large enough to resist brute-force attack and good statistical properties that frustrate statistical analysis attacks. Experiments are also performed to illustrate the robustness against malicious attacks such as cropping, noising, and JPEG compression.
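
    As a hedged illustration of the scrambling idea described above, the sketch below uses a 1D skew tent map (the record uses a 3D map with three control parameters) to generate a chaotic orbit whose ranking permutes the pixel positions; the coupled-map-lattice diffusion stage that changes the gray values is not shown, and the parameter values are arbitrary examples.

```python
import numpy as np

def skew_tent_sequence(x0, p, n):
    """Iterate the 1D skew tent map: x -> x/p if x < p, else (1 - x)/(1 - p)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        seq[i] = x
    return seq

def scramble(img, x0=0.37, p=0.61):
    """Permute the pixel positions with the ranking of a chaotic orbit."""
    flat = img.ravel()
    perm = np.argsort(skew_tent_sequence(x0, p, flat.size))
    return flat[perm].reshape(img.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original pixel arrangement."""
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

# round trip on a toy 8-bit image
img = (np.arange(64 * 64) % 256).reshape(64, 64).astype(np.uint8)
enc, perm = scramble(img)
assert np.array_equal(unscramble(enc, perm), img)
```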

  19. Characterization Method for 3D Substructure of Nuclear Cell Based on Orthogonal Phase Images

    Directory of Open Access Journals (Sweden)

    Ying Ji

    2015-01-01

    Full Text Available A set of optical models associated with blood cells is introduced in this paper. All of these models are made up of different parts possessing symmetries. The wrapped phase images, as well as the unwrapped ones, from two orthogonal directions related to some of these models are obtained by simulation. Because a phase jump occurs on the boundary between nucleus and cytoplasm as well as on the boundary between cytoplasm and the surrounding medium, the equation of the inflexion curve is introduced to describe the size, morphology, and substructure of the nucleated cell based on an analysis of the phase features of the model. Furthermore, a mononuclear cell model is discussed as an example to verify this method. The simulation results show that characterization with inflexion curves based on orthogonal phase images can describe the substructure of the cells effectively, which may provide a new way to identify typical biological cells quickly without scanning.

  20. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    Energy Technology Data Exchange (ETDEWEB)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry of I-131 in soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
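
    The convolution step described above can be illustrated compactly: absorbed dose is obtained by convolving the cumulated-activity map with a dose point kernel, which the 3D-DFT approach performs in the frequency domain. Below is a minimal, hedged Python sketch using SciPy's FFT-based convolution; the toy activity sphere and the placeholder kernel are illustrative assumptions, not an I-131 kernel or the record's CHT-reconstructed SPECT data.

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_from_activity(activity, kernel):
    """Absorbed-dose map from a cumulated-activity map and a dose point kernel.

    activity : 3D array of cumulated activity per voxel
    kernel   : 3D dose point kernel sampled on the same voxel grid and centred
               in its own array
    fftconvolve zero-pads internally, so there is no circular wrap-around.
    """
    return fftconvolve(activity, kernel, mode='same')

# toy example: uniform activity sphere and a crude isotropic placeholder kernel
z, y, x = np.mgrid[-31:32, -31:32, -31:32].astype(float)
r2 = x ** 2 + y ** 2 + z ** 2
activity = (r2 < 10.0 ** 2).astype(float)
kernel = 1.0 / (1.0 + r2)        # placeholder, not a measured I-131 kernel
dose = dose_from_activity(activity, kernel)
```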

  1. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  2. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  3. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    Science.gov (United States)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem, related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements on the segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments, confirming the practical value of the algorithm, are documented.

  4. 3D Image Synthesis for B-Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space to a 3D image in 3D discrete space.

  5. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer.

    Science.gov (United States)

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae; Kim, Kwang Gi

    2015-07-01

    The aim of this work is to use a 3D solid model to predict the mechanical loads and human bone fracture risk associated with bone disease conditions according to biomechanical engineering parameters. We used special image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate meshes, which are necessary for the production of a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bone. We examined the defect mechanisms of the tibia's trabecular bone. Image processing tools and segmentation techniques were used to analyze bone structures and produce a solid model with a 3D printer. Today, bio-imaging devices (CT and magnetic resonance imaging) are able to display and reconstruct 3D anatomical details, and diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical treatment. Furthermore, radiographic images are being used to study biomechanical systems with several aims, namely, to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissue structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. Research that uses image processing tools and segmentation techniques to analyze bone structures and produce solid models with a 3D printer is rapidly gaining importance.

  6. 3D Assessment of Mandibular Growth Based on Image Registration: A Feasibility Study in a Rabbit Model

    Directory of Open Access Journals (Sweden)

    I. Kim

    2014-01-01

    Full Text Available Background. Our knowledge of mandibular growth mostly derives from cephalometric radiography, which has inherent limitations due to the two-dimensional (2D) nature of the measurement. Objective. To assess 3D morphological changes occurring during growth in a rabbit mandible. Methods. Serial cone-beam computerised tomographic (CBCT) images were made of two New Zealand white rabbits, at baseline and eight weeks after surgical implantation of 1 mm diameter metallic spheres as fiducial markers. A third animal acted as an unoperated (no implant) control. CBCT images were segmented and registered in 3D (implant superimposition and Procrustes method), and the remodelling pattern was described using color maps. Registration accuracy was quantified by the maximum of the mean minimum distances and by the Hausdorff distance. Results. The mean error for image registration was 0.37 mm and never exceeded 1 mm. The implant-based superimposition showed that most remodelling occurred at the mandibular ramus, with bone apposition posteriorly and vertical growth at the condyle. Conclusion. We propose a method to quantitatively describe bone remodelling in three dimensions, based on the use of bone implants as fiducial markers and CBCT as the imaging modality. The method is feasible and represents a promising approach for experimental studies comparing baseline growth patterns and testing the effects of growth-modification treatments.

  7. Imaging a Sustainable Future in 3D

    Science.gov (United States)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only among scientists but also among amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the most suitable 3D methodology, as well as to capture new trends in 3D, an updated synoptic overview of 3D visualization technology, aiming at completeness, has been carried out as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, owing to their lack of resolution, contrast and color, recall the early stage of the invention of photography.

  8. Novel methodology for 3D reconstruction of carotid arteries and plaque characterization based upon magnetic resonance imaging carotid angiography data.

    Science.gov (United States)

    Sakellarios, Antonis I; Stefanou, Kostas; Siogkas, Panagiotis; Tsakanikas, Vasilis D; Bourantas, Christos V; Athanasiou, Lambros; Exarchos, Themis P; Fotiou, Evangelos; Naka, Katerina K; Papafaklis, Michail I; Patterson, Andrew J; Young, Victoria E L; Gillard, Jonathan H; Michalis, Lampros K; Fotiadis, Dimitrios I

    2012-10-01

    In this study, we present a novel methodology that allows reliable segmentation of the magnetic resonance images (MRIs) for accurate fully automated three-dimensional (3D) reconstruction of the carotid arteries and semiautomated characterization of plaque type. Our approach uses active contours to detect the luminal borders in the time-of-flight images and the outer vessel wall borders in the T(1)-weighted images. The methodology incorporates the connecting components theory for the automated identification of the bifurcation region and a knowledge-based algorithm for the accurate characterization of the plaque components. The proposed segmentation method was validated in randomly selected MRI frames analyzed offline by two expert observers. The interobserver variability of the method for the lumen and outer vessel wall was -1.60%±6.70% and 0.56%±6.28%, respectively, while the Williams Index for all metrics was close to unity. The methodology implemented to identify the composition of the plaque was also validated in 591 images acquired from 24 patients. The obtained Cohen's k was 0.68 (0.60-0.76) for lipid plaques, while the time needed to process an MRI sequence for 3D reconstruction was only 30 s. The obtained results indicate that the proposed methodology allows reliable and automated detection of the luminal and vessel wall borders and fast and accurate characterization of plaque type in carotid MRI sequences. These features render the currently presented methodology a useful tool in the clinical and research arena.

  9. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    Science.gov (United States)

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bone. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, the iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics
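
    The random-walk step that produces the initial segmentation above is a standard algorithm with an open implementation; the hedged sketch below uses scikit-image's random_walker on placeholder data with hypothetical seed labels, and it covers only the plain random-walk propagation, not the exemplar registration, SVM-based soft constraints or iterative refinement described in the record.

```python
import numpy as np
from skimage.segmentation import random_walker  # scikit-image

# Placeholder volume standing in for a CBCT volume of interest (VOI);
# `seeds` marks a few voxels as tooth (1) and background (2), 0 = unlabelled.
rng = np.random.default_rng(0)
volume = rng.normal(size=(40, 40, 40))
seeds = np.zeros(volume.shape, dtype=np.uint8)
seeds[18:22, 18:22, 18:22] = 1      # hypothetical tooth seeds
seeds[:2, :, :] = 2                 # hypothetical background seeds

# plain random-walk label propagation (no shape prior, no iteration)
labels = random_walker(volume, seeds, beta=130, mode='cg')
print(labels.shape, np.unique(labels))
```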

  10. A NEW APPROACH FOR PROGRESSIVE DENSE RECONSTRUCTION FROM CONSECUTIVE IMAGES BASED ON PRIOR LOW-DENSITY 3D POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2017-09-01

    Full Text Available In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, the timely provision of geospatial information is crucial for the effective development of response and management plans. Due to the inaccessibility of affected areas and the limited budgets of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite the rapid collection of the required data by these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of the affected scenes which can deliver and improve the needed geospatial information incrementally. This approach is implemented based on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature description, feature matching and point verification techniques to optimize and speed up the dense 3D scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and prior low-density airborne laser scanning over the same area is conducted, and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to prove the computational efficiency and competency of this technique for delivering geospatial information with pre-specified accuracy.

  11. 3D imaging in forensic odontology.

    Science.gov (United States)

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring and subsequently performing forensic analysis of bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and the 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore, such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on the 2D and 3D methods of image capture were evaluated.

  12. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...

  13. 3D ultrasound imaging in image-guided intervention.

    Science.gov (United States)

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  14. Graph-cut Based Interactive Segmentation of 3D Materials-Science Images

    Science.gov (United States)

    2014-04-26


  15. Miniaturized 3D microscope imaging system

    Science.gov (United States)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen is captured in a single shot for ease of use. From the light-field raw data and the accompanying program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely determine its depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance the pixel usage efficiency and reduce the crosstalk between each microlens to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two fluorescence particles of different colors separated by a cover glass within a 600 um range, and show its focal stacks and 3-D positions.

  16. Micro-computed tomography image-based evaluation of 3D anisotropy degree of polymer scaffolds.

    Science.gov (United States)

    Pérez-Ramírez, Ursula; López-Orive, Jesús Javier; Arana, Estanislao; Salmerón-Sánchez, Manuel; Moratal, David

    2015-01-01

    Anisotropy is one of the most meaningful determinants of biomechanical behaviour. This study employs micro-computed tomography (μCT) and image techniques for analysing the anisotropy of regenerative-medicine polymer scaffolds. For this purpose, three three-dimensional anisotropy evaluation image methods were used: ellipsoid of inertia (EI), mean intercept length (MIL) and tensor scale (t-scale). These were applied to three patterns (a sphere, a cube and a right prism) and to two polymer scaffold topologies (cylindrical orthogonal pore mesh and spherical pores). For the patterns, the three methods provided good results. Regarding the scaffolds, EI confused the two topologies (0.0158, [-0.5683; 0.6001]; mean difference and 95% confidence interval), and MIL showed no significant differences (0.3509, [0.0656; 0.6362]). T-scale is the preferable method because it gave the best capability (0.3441, [0.1779; 0.5102]) to differentiate the two topologies. This methodology results in the development of non-destructive tools to engineer biomimetic scaffolds, incorporating anisotropy as a fundamental property to be mimicked from the original tissue and permitting its assessment by means of μCT image analysis.
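
    As a hedged illustration of the ellipsoid-of-inertia idea mentioned above, the sketch below computes the eigenvalues of the coordinate covariance matrix of the solid voxels of a binary volume and reports 1 - lambda_min/lambda_max as a simple anisotropy degree; this is a generic moment-based formulation, not the exact EI, MIL or t-scale implementations compared in the record.

```python
import numpy as np

def inertia_anisotropy(binary_volume):
    """Moment-based anisotropy degree of a binary 3D structure.

    The eigenvalues of the coordinate covariance matrix of the solid voxels
    define an equivalent ellipsoid; 1 - lambda_min/lambda_max is returned as
    a simple anisotropy degree (0 for an isotropic structure).
    """
    coords = np.argwhere(binary_volume > 0).astype(float)
    coords -= coords.mean(axis=0)
    cov = coords.T @ coords / coords.shape[0]
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    return 1.0 - eigvals[0] / eigvals[-1]

# toy usage: an elongated box is strongly anisotropic, a cube is not
vol = np.zeros((40, 40, 40), dtype=bool)
vol[5:35, 15:25, 15:25] = True
print(inertia_anisotropy(vol))
```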

  17. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    Science.gov (United States)

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-03-08

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) of gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library varied within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying the active tandem or cylinder probe length, the ovoid or cylinder size, the source calibration and treatment date, and the difference between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission the volume optimization algorithms and process in 3D image-based planning are presented. For the difference between line and volume optimizations, the average absolute differences were 1.4% for total reference air kerma (TRAK) and 1.1% for the Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR

  18. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our group has proposed annular amplitude modulation to enlarge the depth of field, dynamic focusing to reduce the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  19. Numerical validation framework for micromechanical simulations based on synchrotron 3D imaging

    Science.gov (United States)

    Buljac, Ante; Shakoor, Modesar; Neggers, Jan; Bernacki, Marc; Bouchard, Pierre-Olivier; Helfen, Lukas; Morgeneyer, Thilo F.; Hild, François

    2017-03-01

    A combined computational-experimental framework is introduced herein to validate numerical simulations at the microscopic scale. It is exemplified for a flat specimen with central hole made of cast iron and imaged via in-situ synchrotron laminography at micrometer resolution during a tensile test. The region of interest in the reconstructed volume, which is close to the central hole, is analyzed by digital volume correlation (DVC) to measure kinematic fields. Finite element (FE) simulations, which account for the studied material microstructure, are driven by Dirichlet boundary conditions extracted from DVC measurements. Gray level residuals for DVC measurements and FE simulations are assessed for validation purposes.

  1. SEE-THROUGH IMAGING OF LASER-SCANNED 3D CULTURAL HERITAGE OBJECTS BASED ON STOCHASTIC RENDERING OF LARGE-SCALE POINT CLOUDS

    OpenAIRE

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M. (Hiroshima Astrophysical Science Center, Hiroshima University); Okamoto, A.; Koyamada, K.

    2016-01-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminatin...

  2. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    DEFF Research Database (Denmark)

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr;

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these approac...

  3. Recommendations from gynaecological (GYN) GEC ESTRO working group (II): concepts and terms in 3D image-based treatment planning in cervix cancer brachytherapy-3D dose volume parameters and aspects of 3D image-based anatomy, radiation physics, radiobiology.

    Science.gov (United States)

    Pötter, Richard; Haie-Meder, Christine; Van Limbergen, Erik; Barillot, Isabelle; De Brabandere, Marisol; Dimopoulos, Johannes; Dumas, Isabelle; Erickson, Beth; Lang, Stefan; Nulens, An; Petrow, Peter; Rownd, Jason; Kirisits, Christian

    2006-01-01

    The second part of the GYN GEC ESTRO working group recommendations is focused on 3D dose-volume parameters for brachytherapy of cervical carcinoma. Methods and parameters have been developed and validated from dosimetric, imaging and clinical experience from different institutions (University of Vienna, IGR Paris, University of Leuven). Cumulative dose volume histograms (DVH) are recommended for evaluation of the complex dose heterogeneity. DVH parameters for GTV, HR CTV and IR CTV are the minimum dose delivered to 90 and 100% of the respective volume: D90, D100. The volume enclosed by 150 or 200% of the prescribed dose (V150, V200) is recommended for overall assessment of high dose volumes. V100 is recommended for quality assessment only within a given treatment schedule. For organs at risk (OAR), the minimum dose in the most irradiated tissue volume is recommended for reporting: 0.1, 1, and 2 cm3; optionally 5 and 10 cm3. Underlying assumptions are: full dose of external beam therapy in the volume of interest, identical location during fractionated brachytherapy, contiguous volumes and contouring of organ walls for >2 cm3. Dose values are reported as absorbed dose and also taking into account different dose rates. The equieffective dose in 2 Gy fractions of the linear-quadratic radiobiological model (EQD2) is applied for brachytherapy and is also used for calculating dose from external beam therapy. This formalism allows systematic assessment within one patient and one centre, and comparison between different centres, with analysis of dose-volume relations for GTV, CTV, and OAR. Recommendations for the transition period from traditional to 3D image-based cervix cancer brachytherapy are formulated. Supplementary data (available in the electronic version of this paper) deal with aspects of 3D imaging, radiation physics, radiation biology, dose at reference points and dimensions and volumes for the GTV and CTV (adding to [Haie-Meder C, Pötter R, Van Limbergen E et al. Recommendations from
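
    The EQD2 formalism mentioned above has a compact closed form; the sketch below is a minimal Python illustration of the linear-quadratic conversion (the fraction scheme and the alpha/beta value in the example are arbitrary illustrative numbers, not values from the recommendations).

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equieffective dose in 2-Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta),
    where D is the total physical dose and d the dose per fraction."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# example: four 7-Gy HDR fractions evaluated with alpha/beta = 3 Gy (late effects)
print(eqd2(total_dose=4 * 7.0, dose_per_fraction=7.0, alpha_beta=3.0))  # 56.0 Gy
```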

  4. Dose optimization in gynecological 3D image based interstitial brachytherapy using Martinez Universal Perineal Interstitial Template (MUPIT) - an institutional experience

    Directory of Open Access Journals (Sweden)

    Pramod Kumar Sharma

    2014-01-01

    Full Text Available The aim of this study was to evaluate dose optimization in 3D image-based gynecological interstitial brachytherapy using the Martinez Universal Perineal Interstitial Template (MUPIT). The axial CT image data sets of 20 patients with gynecological cancer who underwent external radiotherapy and high-dose-rate (HDR) interstitial brachytherapy using MUPIT were employed to delineate the clinical target volume (CTV) and organs at risk (OARs). Geometrical and graphical optimization were done for optimum CTV coverage and sparing of OARs. The coverage index (CI), dose homogeneity index (DHI), overdose index (OI), dose non-uniformity ratio (DNR), external volume index (EI), conformity index (COIN) and dose-volume parameters recommended by GEC-ESTRO were evaluated. The mean CTV, bladder and rectum volumes were 137 ± 47 cc, 106 ± 41 cc and 50 ± 25 cc, respectively. Mean CI, DHI and DNR were 0.86 ± 0.03, 0.69 ± 0.11 and 0.31 ± 0.09, while the mean OI, EI, and COIN were 0.08 ± 0.03, 0.07 ± 0.05 and 0.79 ± 0.05, respectively. The estimated mean CTV D90 was 76 ± 11 Gy and D100 was 63 ± 9 Gy. The bladder dosimetric parameters D2cc, D1cc and D0.1cc were 76 ± 11 Gy, 81 ± 14 Gy, and 98 ± 21 Gy, and those of the rectum/recto-sigmoid were 80 ± 17 Gy, 85 ± 13 Gy, and 124 ± 37 Gy, respectively. Dose optimization yields superior coverage with optimal values of the indices. The emerging data on 3D image-based brachytherapy, with reporting and clinical correlation of DVH parameter outcomes, are promising and provide definite assistance in improving the quality of brachytherapy implants. The DVH parameter for the urethra in gynecological implants needs to be defined further.
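
    For context, several of the indices reported above are simple ratios of dose-volume histogram quantities. The sketch below is a hedged NumPy illustration using commonly quoted definitions of CI, DHI, DNR and COIN; exact formulations vary slightly between reports, equal voxel volumes are assumed, and the input arrays are placeholders rather than the record's data.

```python
import numpy as np

def brachy_indices(ctv_doses, all_doses, rx):
    """Commonly used definitions of CI, DHI, DNR and COIN from voxel dose samples.

    ctv_doses : 1D array of voxel doses inside the CTV
    all_doses : 1D array of voxel doses over the whole computation volume
                (including the CTV); equal voxel volumes are assumed
    rx        : prescription dose
    """
    v100_ctv = np.count_nonzero(ctv_doses >= rx)          # CTV voxels >= 100% Rx
    v150_ctv = np.count_nonzero(ctv_doses >= 1.5 * rx)    # CTV voxels >= 150% Rx
    v100_all = np.count_nonzero(all_doses >= rx)          # all voxels >= 100% Rx

    ci = v100_ctv / ctv_doses.size                # coverage index
    dhi = (v100_ctv - v150_ctv) / v100_ctv        # dose homogeneity index
    dnr = v150_ctv / v100_ctv                     # dose non-uniformity ratio
    coin = ci * (v100_ctv / v100_all)             # conformity index (COIN)
    return ci, dhi, dnr, coin
```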

  5. A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Zhu, Xinjun; Chen, Xia; Su, Yonggang; Cai, Yuanxue

    2016-11-01

    The orthogonal fringe projection technique is widely used in practice nowadays. In this paper, we propose a 3D shape retrieval method for orthogonal composite fringe projection based on a combination of variational image decomposition (VID) and variational mode decomposition (VMD). We propose a new image decomposition model to extract the orthogonal fringe. We then introduce the VMD method to separate the horizontal and vertical fringes from the orthogonal fringe. Lastly, the 3D shape information is obtained by the differential 3D shape retrieval method (D3D). We test the proposed method on a simulated pattern and two actual objects with edges or abrupt changes in height, and compare it with the recent, related and advanced D3D method in terms of both quantitative evaluation and visual quality. The experimental results demonstrate the validity of the proposed method.

  6. Improved image of intrusive bodies at Newberry Volcano, Oregon, based on 3D gravity modelling

    Energy Technology Data Exchange (ETDEWEB)

    Bonneville, Alain H.; Cladouhos, Trenton; Rose, Kelly K.; Schultz, Adam; Strickland, Christopher E.; Urquhart, Scott

    2017-02-15

    Beneath Newberry Volcano lies one of the largest geothermal heat reservoirs in the western United States, and it has been extensively studied for the last 40 years. Several magmatic intrusions have been recognized at depths between 2.5 and 8 km, and some of them have been identified as suitable targets for enhanced geothermal energy and tested during two previous EGS campaigns. These subsurface structures have been intersected by three deep wells and imaged by various geophysical methods, including seismic tomography and magnetotellurics. Although three high-quality gravity surveys were completed between 2006 and 2010 as part of various projects, a complete synthesis and interpretation of the gravity data had not yet been performed. Regional gravity data also exist in the vicinity of Newberry Volcano and have been added to these surveys to constitute a dataset with a total of 1418 gravity measurements. When coupled with existing geologic and geophysical data and models, this new gravity dataset provides important constraints on the depth and contours of the magmatic bodies previously identified by other methods, thus greatly facilitating any future drilling and stimulation work. Using the initial structures discovered by seismic tomography, an inversion of the gravity data has been performed. The shapes, density values and depths of the various bodies were allowed to vary, and three main bodies have been identified. Densities of the middle and lower intrusive bodies (~2.6-2.7 g/cm3) are consistent with rhyolite, basalt or granite. The modeled density of the near-surface caldera body matches that of a low-density tephra material, and the density of the shallow ring structures contained in the upper kilometer corresponds to that of welded tuff or low-density rhyolites. The modeled bodies are in reality composites of thin layers; however, the average densities of the modeled gravity bodies are in good agreement with the density log obtained in one well located on the western flank (well 55

  7. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    Science.gov (United States)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding on treatment strategies without under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting patient recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time-interval changes of pulmonary nodules.

  8. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    Science.gov (United States)

    Li, Lei-Gang; Liang, Jin; Guo, Xiang; Guo, Cheng; Hu, Hao; Tang, Zheng-Zong

    2014-06-01

    In this paper, a new non-contact scheme, based on 3D digital image correlation technology, is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, each rigidly connected and calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern is designed to suit the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft's vertical tail all the time, a camera-position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orientate all the cameras' exterior parameters to a unified coordinate system in real time. Moreover, owing to the excessively inclined camera axes and the vertical camera arrangement, the correlation between the high-position and low-position images is weak. In this paper, a new dual-temporal efficient matching method, combining the principle of seed-point spreading, is proposed to achieve the matching of weakly correlated images. A novel system was developed, and a simulation test in the laboratory was carried out to verify the proposed scheme.

  9. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping o..., where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed....

  10. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation - LabVIEW. The main idea is based on the marching cubes algorithm and on image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented which can be used for 3D reconstruction of magnetic resonance images in biomedical applications.
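
    The marching cubes step mentioned above is available in open libraries; the following hedged Python sketch (assuming scikit-image 0.19 or later, and using a synthetic sphere as a stand-in for a stack of MR slices) extracts an iso-surface mesh from a 3D volume.

```python
import numpy as np
from skimage import measure   # scikit-image >= 0.19 provides measure.marching_cubes

# synthetic distance volume standing in for a stack of MR slices
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.sqrt(x ** 2 + y ** 2 + z ** 2).astype(float)

# extract the iso-surface at radius 20 as a triangle mesh
verts, faces, normals, values = measure.marching_cubes(volume, level=20.0)
print(verts.shape, faces.shape)   # vertex coordinates and triangle indices
```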

  11. Dialog-Based 3D-Image Recognition Using a Domain Ontology

    Science.gov (United States)

    Hois, Joana; Wünstel, Michael; Bateman, John A.; Röfer, Thomas

    The combination of vision and speech, together with the resulting necessity for formal representations, builds a central component of an autonomous system. A robot that is supposed to navigate autonomously through space must be able to perceive its environment as automatically as possible. But each recognition system has its own inherent limits. Especially a robot whose task is to navigate through unknown terrain has to deal with unidentified or even unknown objects, thus compounding the recognition problem still further. The system described in this paper takes this into account by trying to identify objects based on their functionality where possible. To handle cases where recognition is insufficient, we examine here two further strategies: on the one hand, the linguistic reference and labeling of the unidentified objects and, on the other hand, ontological deduction. This approach then connects the probabilistic area of object recognition with the logical area of formal reasoning. In order to support formal reasoning, additional relational scene information has to be supplied by the recognition system. Moreover, for a sound ontological basis for these reasoning tasks, it is necessary to define a domain ontology that provides for the representation of real-world objects and their corresponding spatial relations in linguistic and physical respects. Physical spatial relations and objects are measured by the visual system, whereas linguistic spatial relations and objects are required for interactions with a user.

  12. 3D nanoscale imaging of biological samples with laboratory-based soft X-ray sources

    Science.gov (United States)

    Dehlinger, Aurélie; Blechschmidt, Anne; Grötzsch, Daniel; Jung, Robert; Kanngießer, Birgit; Seim, Christian; Stiel, Holger

    2015-09-01

    In microscopy, where the theoretical resolution limit depends on the wavelength of the probing light, radiation in the soft X-ray regime can be used to analyze samples that cannot be resolved with visible-light microscopes. In the case of soft X-ray microscopy in the water window, the energy range of the radiation lies between the absorption edges of carbon (at 284 eV, 4.36 nm) and oxygen (543 eV, 2.34 nm). As a result, carbon-based structures, such as biological samples, possess strong absorption, whereas, e.g., water is more transparent to this radiation. Microscopy in the water window therefore allows the structural investigation of aqueous samples with resolutions of a few tens of nanometers and a penetration depth of up to 10 μm. The development of highly brilliant laser-produced plasma sources has enabled the transfer of X-ray microscopy, formerly bound to synchrotron sources, to the laboratory, which opens access to this method for a broader scientific community. The Laboratory Transmission X-ray Microscope at the Berlin Laboratory for innovative X-ray technologies (BLiX) runs with a laser-produced nitrogen plasma that emits radiation in the soft X-ray regime. The mentioned high penetration depth can be exploited to analyze biological samples in their natural state and from several projection angles. The obtained tomogram is the key to a more precise and global analysis of samples originating from various fields of life science.

  13. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Directory of Open Access Journals (Sweden)

    Stella Macheridis

    2015-12-01

    Full Text Available During the last twenty years, archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods of this digital change have also strongly impacted zooarchaeology. Image-based 3D modeling is becoming more common when documenting archaeological features but is still not implemented as standard in field excavation projects. When it comes to integrating zooarchaeological perspectives into the interpretational process in the field, this type of documentation is a powerful tool, especially regarding visualization related to reconstruction and resolution. Moreover, with the implementation of image-based 3D modeling, the use of digital documentation in the field has been proven to be time- and cost-effective (e.g., De Reu et al. 2014; De Reu et al. 2013; Dellepiane et al. 2013; Verhoeven et al. 2012). Few studies have been published on the digital documentation of faunal remains in archaeological contexts. As a case study, the excavation of the infill of a clay bin from building 102 in the Neolithic settlement of Çatalhöyük is presented. Alongside traditional documentation, the infill was photographed in sequence at every second centimeter of soil removal. The photographs were processed with Agisoft Photoscan. Seven models were made, enabling reconstruction of the excavation of this context. This technique can be a powerful documentation tool, including for recording observations of zooarchaeological significance, such as markers of taphonomic processes. An important methodological advantage in this regard is the potential to measure bones in situ for analysis after excavation.

  14. MARVIN : high speed 3D imaging for seedling classification

    NARCIS (Netherlands)

    Koenderink, N.J.J.P.; Wigham, M.L.I.; Golbach, F.B.T.F.; Otten, G.W.; Gerlich, R.J.H.; Zedde, van de H.J.

    2009-01-01

    The next generation of automated sorting machines for seedlings demands 3D models of the plants to be made at high speed and with high accuracy. In our system the 3D plant model is created based on the information of 24 RGB cameras. Our contribution is an image acquisition technique based on

  15. Feasibility of 3D harmonic contrast imaging

    NARCIS (Netherlands)

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; ten Cate, F.; de Jong, N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it

  16. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Science.gov (United States)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  17. Volumetric label-free imaging and 3D reconstruction of mammalian cochlea based on two-photon excitation fluorescence microscopy

    Science.gov (United States)

    Zhang, Xianzeng; Geng, Yang; Ye, Qing; Zhan, Zhenlin; Xie, Shusen

    2013-11-01

    The visualization of the delicate structure and spatial relationships of intracochlear sensory cells has relied on the laborious procedures of tissue excision, fixation, sectioning and staining for light and electron microscopy. Confocal microscopy is advantageous for its high resolution and deep penetration depth, yet disadvantageous due to the necessity of exogenous labeling. In this study, we present the volumetric imaging of rat cochlea without exogenous dyes, using a near-infrared femtosecond laser as the excitation mechanism and endogenous two-photon excitation fluorescence (TPEF) as the contrast mechanism. We find that TPEF exhibits strong contrast, allowing cellular and even subcellular resolution imaging of the cochlea, differentiating cell types, and visualizing delicate structures and the radial nerve fibers. Our results further demonstrate that 3D reconstruction rendered from z-stacks of optical sections better reveals fine structures and spatial relationships and allows morphometric analysis to be performed easily. The TPEF-based optical biopsy technique offers great potential for new and sensitive diagnostic tools for hearing loss or hearing disorders, especially when combined with fiber-based microendoscopy.

  18. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
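
    As a hedged illustration of the layer-wise porosity idea described above, the sketch below computes the pore fraction of each slice of a segmented (binary) 3D volume along a chosen axis; the function name and the convention that True marks a pore voxel are assumptions, not the record's algorithm.

```python
import numpy as np

def layer_porosity(binary_volume, axis=0):
    """Pore fraction of each layer of a segmented 3D membrane image.

    binary_volume : boolean array, True = pore (void) voxel
    axis          : axis along which the layers are taken, e.g. the axis
                    orthogonal to the membrane surface
    """
    other_axes = tuple(a for a in range(binary_volume.ndim) if a != axis)
    return binary_volume.mean(axis=other_axes)

# toy usage: random binary volume with ~30% porosity
rng = np.random.default_rng(0)
vol = rng.random((50, 128, 128)) < 0.3
profile = layer_porosity(vol, axis=0)   # one porosity value per depth layer
```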

  19. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    CERN Document Server

    Li, Ruijiang; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-01-01

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied on the reference image, lead to new volumetric images. We then can reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. Our algorithm was implemented on graphics processing units...
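
    The PCA step described above can be written in a few lines; the hedged NumPy sketch below builds the principal components of a set of flattened deformation vector fields and re-synthesises a DVF from new coefficients. The array names and the number of modes are illustrative assumptions, and the projection-matching optimization and GPU implementation from the record are not shown.

```python
import numpy as np

def pca_of_dvfs(dvfs, n_modes=3):
    """Principal-component representation of deformation vector fields (DVFs).

    dvfs : array of shape (n_phases, ...) holding one DVF per breathing phase
           (relative to the reference phase); each DVF is flattened internally
    Returns the mean DVF, the first n_modes eigenvectors, and the per-phase
    coefficients in that basis.
    """
    flat = dvfs.reshape(dvfs.shape[0], -1)
    mean = flat.mean(axis=0)
    centred = flat - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = vt[:n_modes]
    coeffs = centred @ modes.T
    return mean, modes, coeffs

def dvf_from_coefficients(mean, modes, c):
    """Re-synthesise a flattened DVF from new PCA coefficients c."""
    return mean + c @ modes

# toy usage: 9 phases, a 16x16x16 grid with 3 displacement components per voxel
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(9, 16, 16, 16, 3))
mean, modes, coeffs = pca_of_dvfs(dvfs)
new_dvf = dvf_from_coefficients(mean, modes, coeffs[0]).reshape(16, 16, 16, 3)
```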

  20. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  1. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
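    A minimal sketch of the first two stages (viewpoint extraction and the 2D-DWT of each viewpoint image) is given below, assuming a unidirectional integral image with a known microlens pitch in pixels. The PyWavelets calls and the choice of wavelet are illustrative; the 3D-DCT, quantization and entropy-coding stages are not shown.

```python
import numpy as np
import pywt

def extract_viewpoints(integral_image, pitch):
    """Form viewpoint images: viewpoint k collects pixel k from every
    microlens (here, every group of `pitch` columns of a unidirectional
    integral image)."""
    h, w = integral_image.shape
    w = (w // pitch) * pitch              # drop any incomplete trailing lens
    img = integral_image[:, :w]
    return [img[:, k::pitch] for k in range(pitch)]

def decompose_viewpoints(viewpoints, wavelet="bior4.4", level=2):
    """2D-DWT of each viewpoint image; returns per-viewpoint coefficient lists.
    The low-frequency subbands would then be grouped across viewpoints and
    passed to a 3D-DCT plus entropy-coding stage (not shown)."""
    return [pywt.wavedec2(v, wavelet=wavelet, level=level) for v in viewpoints]
```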

  2. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

    We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP) across a 2-D seismic array and 5 profiles in the Coachella Valley to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match of the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of known location and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven stations-source distribution. To better overcome this challenge, we also include the first arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. Wei, S., et al., 2013, Complementary slip distributions

  3. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  4. Full-field modal analysis during base motion excitation using high-speed 3D digital image correlation

    Science.gov (United States)

    Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2017-10-01

    In recent years, many efforts have been made to exploit full-field measurement optical techniques for modal identification. Three-dimensional digital image correlation using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves the base motion excitation of a structural element instead of force excitation. In this case, the relationship between response and excitation is typically based on displacements, which are known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge joint. Full-field transmissibility functions were obtained through the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large number of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained by employing traditional accelerometers, analytical models and finite element method analyses. The comparison was performed by using the quantitative indicator modal assurance criterion. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
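    As an illustration of the circle-fit identification step mentioned above, the sketch below performs a least-squares (Kåsa) circle fit to complex FRF samples around a resonance. It is a generic textbook form, not the authors' exact procedure, and the extraction of natural frequency, damping and modal constant from the fitted circle is omitted.

```python
import numpy as np

def kasa_circle_fit(frf_samples):
    """Least-squares (Kasa) circle fit to complex FRF samples near a resonance.
    frf_samples: 1D complex array of FRF values H(w) around the modal peak.
    Returns the circle centre (as a complex number) and its radius."""
    x, y = frf_samples.real, frf_samples.imag
    # Solve x^2 + y^2 = c0*x + c1*y + c2 in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c0, c1, c2 = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c0 / 2.0, c1 / 2.0
    radius = np.sqrt(c2 + cx**2 + cy**2)
    return complex(cx, cy), radius
```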

  5. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    In this paper, we propose an improved version of the descriptor that we proposed before. The descriptor is based on a set of binary images extracted from the 3D model, called level images and noted LI. The set LI is often bulky, which is why we introduced the X-means technique to reduce its size instead of the K-means used in the old version. A 2D binary image descriptor was introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.

  6. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations, based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images and analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  7. High resolution 3-D wavelength diversity imaging

    Science.gov (United States)

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data is fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. Equipment configuration for the implementation of the AMTDR technique is also given together with a measurement configuration for the implementation of wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. Extension of the theory presented to 3-D tomographic imaging of passive noise emitting objects by spectrally selective far field cross-correlation measurements is also given. Finally several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvement in performance and retrieved image quality.

  8. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.
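    A hedged sketch of the supervised mapping from pixel intensities to surface normals is shown below, using scikit-learn's MLPRegressor as a stand-in for the authors' network; the hybrid diffuse/specular reflectance modelling and the subsequent normal-integration step are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: (n_pixels, n_images) intensities observed for each surface point
# Y: (n_pixels, 3) ground-truth unit normal vectors used for supervised training
def train_normal_regressor(X, Y):
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    model.fit(X, Y)
    return model

def predict_normals(model, X):
    n = model.predict(X)
    # Re-normalize the outputs so each predicted normal has unit length
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```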

  9. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ... of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce such transducer arrays, capacitive micromachined ultrasonic transducer (CMUT) technology is chosen for this project. Properties such as high bandwidth and high design flexibility make this an attractive transducer technology, which is under continuous development in the research community. A theoretical ...

  10. Visual grading of 2D and 3D functional MRI compared with image-based descriptive measures

    Energy Technology Data Exchange (ETDEWEB)

    Ragnehed, Mattias [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Department of Medical and Health Sciences, Division of Radiological Sciences/Radiology, Faculty of Health Sciences, Linkoeping (Sweden); Leinhard, Olof Dahlqvist; Pihlsgaard, Johan; Lundberg, Peter [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Division of Radiological Sciences, Radiation Physics, IMH, Linkoeping (Sweden); Wirell, Staffan [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Soekjer, Hannibal; Faegerstam, Patrik [Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Jiang, Bo [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Smedby, Oerjan; Engstroem, Maria [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden)

    2010-03-15

    A prerequisite for successful clinical use of functional magnetic resonance imaging (fMRI) is the selection of an appropriate imaging sequence. The aim of this study was to compare 2D and 3D fMRI sequences using different image quality assessment methods. Descriptive image measures, such as activation volume and temporal signal-to-noise ratio (TSNR), were compared with results from visual grading characteristics (VGC) analysis of the fMRI results. Significant differences in activation volume and TSNR were not directly reflected by differences in VGC scores. The results suggest that better performance on descriptive image measures is not always an indicator of improved diagnostic quality of the fMRI results. In addition to descriptive image measures, it is important to include measures of diagnostic quality when comparing different fMRI data acquisition methods. (orig.)

  11. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    Science.gov (United States)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance time-resolved non-reversible experiments).
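    The two per-pattern scalars described above (average recorded intensity and average filtered intensity) can be sketched as follows; the median-filter background removal and the function names are illustrative assumptions, not the authors' exact filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def pattern_statistics(laue_pattern, bg_size=21):
    """Average recorded intensity and average filtered intensity of one Laue
    detector image (background estimated here with a median filter)."""
    avg = laue_pattern.mean()
    filtered = laue_pattern - median_filter(laue_pattern, size=bg_size)
    return avg, np.clip(filtered, 0, None).mean()

def scan_maps(patterns, scan_shape):
    """Arrange the two statistics of every pattern into 2D maps matching the
    raster-scan grid, so microstructural features show up in real time."""
    stats = np.array([pattern_statistics(p) for p in patterns])
    return stats[:, 0].reshape(scan_shape), stats[:, 1].reshape(scan_shape)
```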

  12. 3D-FIESTA MR images are useful in the evaluation of the endoscopic expanded endonasal approach for midline skull-base lesions.

    Science.gov (United States)

    Xie, Tao; Zhang, Xiao-Biao; Yun, Hong; Hu, Fan; Yu, Yong; Gu, Ye

    2011-01-01

    The endoscopic expanded endonasal approach (EEA) has been reported in literature as a useful tool to treat sellar, parasellar, suprasellar, and clival lesions. The endoscope permits a panoramic view rather than a narrow microscopic view, and this approach can reach the lesion without brain retraction and with minimal neurovascular manipulation. However, because of the narrow corridor, the preoperative evaluation of the lesions should be of high priority. 3D fast-imaging employing steady-state acquisition (3D-FIESTA) or constructive interference in steady state (CISS) MR imaging provides high spatial resolution in the small structures within the cisterns. Therefore, this technique may be useful for better preoperative planning in detecting the optic nerve, oculomotor nerve, chiasma, infundibulum, pituitary stalk, and small vessels in the sellar region. Here we used 3D-FIESTA MR images to evaluate EEA for seven midline skull-base lesions. Our report showed that, when EEA was used to treat midline skull-base lesions, 3D-FIESTA MR images were valuable in the assessment of vital structures in and around the tumor-involved midline skull-base region. 3D-FIESTA MR images can help in making better preoperative plans, locating intraoperative structures, and reducing surgical risks. Moreover, this approach is helpful for craniopharyngioma classification based on EEA.

  13. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging.

    Science.gov (United States)

    Tseng, Hubert; Gage, Jacob A; Shen, Tsaiwei; Haisler, William L; Neeley, Shane K; Shiao, Sue; Chen, Jianbo; Desai, Pujan K; Liao, Angela; Hebel, Chris; Raphael, Robert M; Becker, Jeanne L; Souza, Glauco R

    2015-09-14

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5'-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (-control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z' = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments.
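    Purely as an illustration of the image-analysis endpoint (spheroid size from a mobile-device image), the sketch below thresholds a grayscale frame and measures the projected area of the largest dark object; the Otsu threshold and the scikit-image calls are assumptions, not the assay's actual software.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def spheroid_area(gray_image, pixel_size_mm=1.0):
    """Projected area (mm^2) of the largest dark object, taken as the spheroid.
    Assumes the spheroid appears darker than the well background."""
    mask = gray_image < threshold_otsu(gray_image)
    regions = regionprops(label(mask))
    if not regions:
        return 0.0
    largest = max(regions, key=lambda r: r.area)
    return largest.area * pixel_size_mm**2
```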

  14. Validation Tests of Open-Source Procedures for Digital Camera Calibration and 3d Image-Based Modelling

    Science.gov (United States)

    Toschi, I.; Rivola, R.; Bertacchini, E.; Castagnetti, C.; Dubbini, M.; Capra, A.

    2013-07-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were compared with those achieved with a test-range calibration approach using a pre-surveyed laboratory test-field. Both direct and a-posteriori validation tests were successful, showing the stability and the metric accuracy of the process, even when low textured or reflective surfaces are present in the 3D scene. Afterwards, the possibility of achieving accurate 3D models from the subsequently extracted dense point clouds is also evaluated. Three different types of sculptural elements were chosen as test-objects and "ground-truth" data were acquired with triangulation laser scanners. 3D models derived from point clouds oriented with a simplified relative procedure show a suitable metric accuracy: all comparisons delivered millimeter-level standard deviations. The use of Ground Control Points in the orientation phase did not significantly improve the accuracy of the final 3D model, when a small figure-like corbel was used as test-object.

  15. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Directory of Open Access Journals (Sweden)

    Stankiewicz Agnieszka

    2016-06-01

    This paper presents signal processing aspects of the automatic segmentation of retinal layers of the human eye. The paper draws attention to the problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality. Some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using the so-called B-scans obtained with the OCT Copernicus HR device.

  16. Contrast Enhancement Method Based on Gray and Its Distance Double-Weighting Histogram Equalization for 3D CT Images of PCBs

    Directory of Open Access Journals (Sweden)

    Lei Zeng

    2016-01-01

    Cone beam computed tomography (CBCT) is a new detection method for 3D nondestructive testing of printed circuit boards (PCBs). However, the obtained 3D image of PCBs exhibits low contrast because of several factors, such as the occurrence of metal artifacts and beam hardening, during the process of CBCT imaging. Histogram equalization (HE) algorithms cannot effectively extend the gray difference between a substrate and a metal in 3D CT images of PCBs, and the reinforcing effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and gray-distance double-weighting HE. Considering the characteristics of 3D CT images of PCBs, the proposed algorithm uses the double-weighting strategy to change the form of the original image histogram distribution, suppressing the grayscale of the nonmetallic substrate and expanding the grayscale of wires and other metals. The algorithm thus enhances the gray difference between substrate and metal and highlights metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
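    The abstract does not give the exact weighting, so the sketch below shows only the generic idea of a weighted histogram equalization in which each gray level's histogram count is multiplied by a weight before the cumulative mapping is formed; the example weight combining gray value and distance from an assumed substrate gray level is purely illustrative.

```python
import numpy as np

def weighted_histogram_equalization(image, weight):
    """Histogram equalization in which each gray level g contributes
    hist[g] * weight[g] to the cumulative mapping.
    image: 8-bit grayscale array; weight: length-256 array. For the PCB case
    the weight would up-weight metal gray levels and down-weight substrate
    levels (the paper's exact weighting is not reproduced here)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    whist = hist * weight
    cdf = np.cumsum(whist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[image]

# Example weight: emphasize gray levels far from an assumed substrate level g0
g = np.arange(256)
g0 = 60                                             # assumed substrate gray level
weight = (g / 255.0) * (np.abs(g - g0) / 255.0)     # gray x distance weighting
```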

  17. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  18. Lossless Compression of Medical Images Using 3D Predictors.

    Science.gov (United States)

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio

    2017-06-09

    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), which is one of the state-of-the-art lossless compression technologies, presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on MRP algorithm.
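    To illustrate the general idea of 3D prediction for lossless coding (not the MRP predictor design itself), the sketch below predicts each voxel from its causal in-slice and previous-slice neighbours and returns the residual volume that an entropy coder would store.

```python
import numpy as np

def predict_3d(volume):
    """Simple causal 3D predictor: each voxel is predicted as the mean of its
    left, upper, and previous-slice neighbours; the residuals are what a
    lossless coder would actually encode. A decoder can invert this exactly by
    re-computing the same predictions while decoding voxels in raster order."""
    v = volume.astype(np.int32)
    pred = np.zeros_like(v)
    counts = np.zeros_like(v)
    pred[:, :, 1:] += v[:, :, :-1]; counts[:, :, 1:] += 1   # left neighbour
    pred[:, 1:, :] += v[:, :-1, :]; counts[:, 1:, :] += 1   # upper neighbour
    pred[1:, :, :] += v[:-1, :, :]; counts[1:, :, :] += 1   # previous slice
    pred = pred // np.maximum(counts, 1)
    return v - pred                                          # residual volume
```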

  19. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    In this article, we present a 3D dental plaster treatment system specific to orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that allows the management of contours with complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model: automatic correction of the occlusion between the mandible and the maxilla, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of allowing telediagnosis and treatment.

  20. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  1. A colour image reproduction framework for 3D colour printing

    Science.gov (United States)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full colour 3D printing are introduced. A framework for the colour image reproduction process in 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. Results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour correction, a further improvement in the colour process is achieved for 3D printed objects.

  2. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    Science.gov (United States)

    Nam, Woo Hyun; Kang, Dong-Goo; Lee, Duhgoon; Lee, Jae Young; Ra, Jong Beom

    2012-01-01

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.

  3. Progresses in 3D integral imaging with optical processing

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics. University of Valencia. Calle Doctor Moliner 50, E46 100, Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices like special glasses. First suggested by Lippmann in the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images which store the 3D information of the scene. This paper is devoted to the study, from the ray optics point of view, of the optical effects and interaction with the observer of integral imaging systems.

  4. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
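    One plausible form of the entropy part of the reference-slice criterion is sketched below: compute the entropy of every standardized slice and choose the slice closest to the group median. The paper's iterative entropy/MSE assessment is more involved; this is only an illustrative simplification, and the function names are assumptions.

```python
import numpy as np

def slice_entropy(img, bins=256):
    """Shannon entropy of one histological slice, from its gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def pick_reference_slice(slices):
    """Choose the slice whose entropy is closest to the median entropy of the
    stack, as a stand-in for the iterative entropy/MSE-based selection."""
    entropies = np.array([slice_entropy(s) for s in slices])
    return int(np.argmin(np.abs(entropies - np.median(entropies))))
```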

  5. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance.

    Science.gov (United States)

    Dibildox, Gerardo; Baka, Nora; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro; van Walsum, Theo

    2014-09-01

    The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P>0.1) but did improve robustness with regards to the initialization of the 3D models. The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
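    The core of the rigid GMM registration can be sketched as follows: both centerline point sets are treated as isotropic Gaussian mixtures and the L2 distance between them is minimized, which reduces to maximizing the Gaussian cross term. The orientation terms, bifurcation weighting and kernel width used by the authors are not reproduced; the SciPy-based optimization below is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def gmm_l2_cross_term(A, B, sigma):
    """Sum of Gaussian kernel evaluations between two point sets, i.e. the
    cross term of the L2 distance between their isotropic GMMs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4.0 * sigma**2)).sum()

def register_rigid(moving, fixed, sigma=2.0):
    """Find a rotation (as a rotation vector) and translation that maximize the
    GMM cross term, which is equivalent to minimizing the L2 distance up to
    terms independent of the rigid transform."""
    def cost(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        return -gmm_l2_cross_term(moving @ R.T + t, fixed, sigma)

    res = minimize(cost, x0=np.zeros(6), method="Powell")
    R = Rotation.from_rotvec(res.x[:3]).as_matrix()
    return R, res.x[3:]
```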

  6. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  7. An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components

    Institute of Scientific and Technical Information of China (English)

    Divya Udayan J; HyungSeok KIM; Jee-In KIM

    2015-01-01

    The objective of this research is the rapid reconstruction of ancient buildings of historical importance using a single image. The key idea of our approach is to reduce the infinite solutions that might otherwise arise when recovering a 3D geometry from 2D photographs. The main outcome of our research shows that the proposed methodology can be used to reconstruct ancient monuments for use as proxies for digital effects in applications such as tourism, games, and entertainment, which do not require very accurate modeling. In this article, we consider the reconstruction of ancient Mughal architecture including the Taj Mahal. We propose a modeling pipeline that makes an easy reconstruction possible using a single photograph taken from a single view, without the need to create complex point clouds from multiple images or the use of laser scanners. First, an initial model is automatically reconstructed using locally fitted planar primitives along with their boundary polygons and the adjacency relation among parts of the polygons. This approach is faster and more accurate than creating a model from scratch because the initial reconstruction phase provides a set of structural information together with the adjacency relation, which makes it possible to estimate the approximate depth of the entire structural monument. Next, we use manual extrapolation and editing techniques with modeling software to assemble and adjust different 3D components of the model. Thus, this research opens up the opportunity for the present generation to experience remote sites of architectural and cultural importance through virtual worlds and real-time mobile applications. Variations of a recreated 3D monument to represent an amalgam of various cultures are targeted for future work.

  8. Fabrication of Large-Scale Microlens Arrays Based on Screen Printing for Integral Imaging 3D Display.

    Science.gov (United States)

    Zhou, Xiongtu; Peng, Yuyan; Peng, Rong; Zeng, Xiangyao; Zhang, Yong-Ai; Guo, Tailiang

    2016-09-14

    The low-cost large-scale fabrication of microlens arrays (MLAs) with precise alignment, great uniformity of focusing, and good converging performance are of great importance for integral imaging 3D display. In this work, a simple and effective method for large-scale polymer microlens arrays using screen printing has been successfully presented. The results show that the MLAs possess high-quality surface morphology and excellent optical performances. Furthermore, the microlens' shape and size, i.e., the diameter, the height, and the distance between two adjacent microlenses of the MLAs can be easily controlled by modifying the reflowing time and the size of open apertures of the screen. MLAs with the neighboring microlenses almost tangent can be achieved under suitable size of open apertures of the screen and reflowing time, which can remarkably reduce the color moiré patterns caused by the stray light between the blank areas of the MLAs in the integral imaging 3D display system, exhibiting much better reconstruction performance.

  9. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components for many applications, they still lack the economical and automatic techniques for their generation while taking advantage of the available multi-sensory data and combining processing strategies. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  10. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  11. Building 3D scenes from 2D image sequences

    Science.gov (United States)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  12. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    Science.gov (United States)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and of their possible fusion. With the aim of defining the potentialities and problems deriving from the integration or fusion of metric data acquired with different survey techniques, the selected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application addresses one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two acquisition-sensor locations, terrestrial and aerial. In particular, the work evaluates the opportunities deriving from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was fulfilled by a laser scanning survey. Both techniques allowed different point clouds to be extracted and processed and the corresponding continuous 3D models to be generated; these are characterized by different scales, that is to say different resolutions and diverse contents in terms of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through the fusion of the different sensor point clouds. The descriptive potential and the metric and thematic gains achievable with the final model exceeded those offered by the two detached models.

  13. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-07-21

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging
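    A minimal sketch of the voting scheme described above is given below: a 2D segmentation network (treated here as a black-box `segment_slice` callable) is applied slice by slice along several axes, and the resulting label volumes are combined by voxel-wise majority vote. The FCN itself and the 3D-2D-3D transformations are not reproduced; the function names are assumptions.

```python
import numpy as np

def fcn_label_volume(ct_volume, segment_slice, axis):
    """Run a 2D semantic-segmentation network slice by slice along `axis` and
    stack the label maps back into a 3D label volume.
    segment_slice(image2d) -> integer label map (placeholder for a trained FCN)."""
    vol = np.moveaxis(ct_volume, axis, 0)
    labels = np.stack([segment_slice(s) for s in vol])
    return np.moveaxis(labels, 0, axis)

def majority_vote(label_volumes, n_classes):
    """Voxel-wise majority vote over label volumes produced from different
    section orientations (e.g., axial, coronal, sagittal)."""
    votes = np.zeros(label_volumes[0].shape + (n_classes,), dtype=np.int32)
    for lv in label_volumes:
        for c in range(n_classes):
            votes[..., c] += (lv == c)
    return votes.argmax(axis=-1)
```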

  14. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  15. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices, and can thereafter directly be calibrated using standard calibration algorithms of photogrammetry and computer vision, on that device. Due to still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented bunch of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  16. Ultrasonic Sensor Based 3D Mapping & Localization

    Directory of Open Access Journals (Sweden)

    Shadman Fahim Ahmad

    2016-04-01

    This article provides a basic-level introduction to 3D mapping using sonar sensors and localization. It describes the methods used to construct a low-cost autonomous robot, along with the hardware and software used, as well as an insight into the background of autonomous robotic 3D mapping and localization. We also give an overview of what the future prospects of the robot may hold for 3D-based mapping.

  17. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    Science.gov (United States)

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.

  18. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging: A Spine Cadaveric Feasibility and Accuracy Study.

    Science.gov (United States)

    Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami

    2016-11-01

    A cadaveric laboratory study. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus opens the route to image-guided minimally invasive therapy in the thoracic spine. ARSN encompasses a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans, using the Gertzbein grading. Morphometric measurements of the pedicles axial and sagittal widths and angles, as well as the vertebrae axial and sagittal rotations were performed to identify risk factors for breaches. ARSN was feasible and superior to free-hand technique with respect to overall accuracy (85% vs. 64%, P < 0.05), specifically significant increases of perfectly placed screws (51% vs. 30%, P < 0.05) and reductions in breaches beyond 4 mm (2% vs. 25%, P < 0.05). All morphometric dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when performed with the free-hand method. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than free-hand technique for thoracic pedicle screw placement. N/A.

  19. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, diffusion weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvis dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed an adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  20. Interactive visualization of multiresolution image stacks in 3D.

    Science.gov (United States)

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
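
    The key idea, choosing the pyramid tier whose texels project to roughly constant on-screen size, can be illustrated with a few lines of arithmetic. The sketch below is only a schematic of that criterion, not StackVis code; the function name and parameters are hypothetical.

        import math

        def select_tier(texel_world_size, distance_to_viewer, focal_px,
                        target_screen_px=1.0, num_tiers=8):
            """Pick the quad-tree tier whose texels project to about the target
            on-screen size. Tier 0 is full resolution; each coarser tier doubles
            the texel size in world units."""
            # On-screen size (pixels) of one full-resolution texel.
            projected = focal_px * texel_world_size / max(distance_to_viewer, 1e-9)
            if projected >= target_screen_px:
                return 0  # viewer is close: full resolution is needed
            tier = int(math.ceil(math.log2(target_screen_px / projected)))
            return min(tier, num_tiers - 1)

        # A nearby tile is served at full resolution, a distant one from a coarser tier.
        print(select_tier(0.01, 1.0, 1000.0), select_tier(0.01, 50.0, 1000.0))  # -> 0 and 3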

  1. 3D contrast enhancement-MR angiography for imaging of unruptured cerebral aneurysms: a hospital-based prevalence study.

    Directory of Open Access Journals (Sweden)

    Jing Li

    Full Text Available Contrast-enhanced MRA (CE-MRA) can help to overcome the limitations of other techniques to clearly display the details of cerebral aneurysms at a 1.5-T MR system. We investigated the prevalence of unruptured cerebral aneurysms (UCAs) using three-dimensional (3D) CE-MRA in a tertiary comprehensive hospital in China. The cases were prospectively recorded at our hospital between February 2009 and October 2010. 3D CE-MRA, interpreted by 2 observers blinded to the participants' information, was used to identify the location and size of UCAs and to estimate the overall, age-specific, and sex-specific prevalence. Of the 3993 patients (men:women = 2159:1834), 408 UCAs were found in 350 patients (men:women = 151:199). The prevalence was 8.8% overall (95% CI, 8.0-10.0%), with 7.0% for men (CI, 6.0-8.0%) and 10.9% for women (CI, 9.0-12.0%). The overall prevalence of UCAs was higher in women than in men (P<0.001) and increased with age in both men and women. Prevalence peaked in the 75-80 years age group. Forty-two patients (11.7%) had multiple aneurysms, including 10 (2.9%) male patients and 32 (9.1%) female patients. The most common site of aneurysm was the carotid siphon, and most lesions (71.3%) had a maximum diameter of 3-5 mm. This hospital-based prevalence study suggested a high prevalence (8.8%) of UCAs, and most lesions (71.3%) had a maximum diameter of 3-5 mm observed by 3D CE-MRA. Because rupture of small cerebral aneurysms is not uncommon, an appropriate follow-up care strategy must be formulated.
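
    The reported prevalences are simple binomial proportions; a rough sketch of how such figures and their confidence intervals can be reproduced is shown below (normal-approximation interval only, which will not exactly match the interval method used in the study).

        import math

        def prevalence_ci(cases, n, z=1.96):
            """Point prevalence with a normal-approximation (Wald) 95% CI."""
            p = cases / n
            se = math.sqrt(p * (1 - p) / n)
            return p, (p - z * se, p + z * se)

        # 350 of 3993 patients had at least one UCA -> about 8.8% overall.
        p, (lo, hi) = prevalence_ci(350, 3993)
        print(f"prevalence = {100 * p:.1f}%, approx. 95% CI {100 * lo:.1f}-{100 * hi:.1f}%")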

  2. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    Science.gov (United States)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiency of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will induce attitude error in the retrieved morphology, and the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  3. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this context, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological information (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color-scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and higher fidelity, especially for medical applications in which temperature changes are clinically significant.

  4. DISOCCLUSION OF 3D LIDAR POINT CLOUDS USING RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    P. Biasutti

    2017-05-01

    Full Text Available This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor’s topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.
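
    The first step, turning the point cloud into a 2D range image, amounts to a spherical projection in which rows index elevation and columns index azimuth. The sketch below illustrates that projection under assumed sensor parameters; it is not the authors' implementation, and the field-of-view values are placeholders.

        import numpy as np

        def pointcloud_to_range_image(points, n_rows=64, n_cols=1024,
                                      fov_up_deg=3.0, fov_down_deg=-25.0):
            """Project an (N, 3) LiDAR point cloud to a 2D range image by
            spherical projection; the row/column layout stands in for the
            exact scanner topology used in the paper."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            r = np.linalg.norm(points, axis=1)
            yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
            pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

            fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
            u = 0.5 * (1.0 - yaw / np.pi) * n_cols                   # column index
            v = (fov_up - pitch) / (fov_up - fov_down) * n_rows      # row index
            u = np.clip(np.floor(u), 0, n_cols - 1).astype(int)
            v = np.clip(np.floor(v), 0, n_rows - 1).astype(int)

            image = np.full((n_rows, n_cols), np.inf)
            # Keep the closest return when several points fall into one pixel.
            np.minimum.at(image, (v, u), r)
            return image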

  5. Disocclusion of 3d LIDAR Point Clouds Using Range Images

    Science.gov (United States)

    Biasutti, P.; Aujol, J.-F.; Brédif, M.; Bugeau, A.

    2017-05-01

    This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.

  6. Medical image segmentation using 3D MRI data

    Science.gov (United States)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm to extract the parts of bones from magnetic resonance imaging (MRI) data sets based on a modified active contour method. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
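
    As a rough stand-in for the modified active contour used here, the snippet below applies the classic snake from scikit-image to a single 2D slice, starting from a circular initialization; the function name and parameter values are illustrative only, and a 3D volume would be handled slice by slice.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def segment_slice(slice_2d, center, radius, n_points=200):
            """Classic snake on one 2D slice as a stand-in for the paper's
            modified active contour; `center`/`radius` define the initial contour
            in (row, col) coordinates."""
            s = np.linspace(0, 2 * np.pi, n_points)
            init = np.column_stack([center[0] + radius * np.sin(s),
                                    center[1] + radius * np.cos(s)])
            smoothed = gaussian(slice_2d.astype(float), sigma=2, preserve_range=True)
            return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)

        # A 3D volume could be segmented slice by slice, propagating the result
        # of one slice as the initialization of the next.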

  7. Density-Based 3D Shape Descriptors

    Directory of Open Access Journals (Sweden)

    Schmitt Francis

    2007-01-01

    Full Text Available We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdfs) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.
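
    As an illustration of the general recipe (sample a local surface feature over the mesh, weight by triangle area, and estimate its pdf with a kernel density estimator), the sketch below computes one simple descriptor, the pdf of radial distance; it is not one of the paper's actual features, and all names are hypothetical.

        import numpy as np
        from scipy.stats import gaussian_kde

        def radial_distance_descriptor(vertices, faces, n_bins=64):
            """Density-based descriptor sketch: pdf of the radial distance of
            triangle centroids from the area-weighted center of the mesh."""
            tri = vertices[faces]                                    # (T, 3, 3)
            centroids = tri.mean(axis=1)
            # Triangle areas act as sampling weights over the surface.
            areas = 0.5 * np.linalg.norm(
                np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
            center = (centroids * areas[:, None]).sum(0) / areas.sum()
            feature = np.linalg.norm(centroids - center, axis=1)     # radial distance

            kde = gaussian_kde(feature, weights=areas)
            grid = np.linspace(0.0, feature.max(), n_bins)
            pdf = kde(grid)
            return pdf / pdf.sum()                                   # normalized descriptor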

  8. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or high enough quality diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
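
    The core of the spine-based coordinate system is a polynomial model of the curve through the vertebral column; a minimal sketch of fitting such a curve and deriving the tangents that define the reformation planes is given below. Note that the paper optimizes the polynomial parameters against the image itself, whereas this sketch simply fits pre-computed vertebral centroids; all names are hypothetical.

        import numpy as np

        def fit_spine_curve(centroids, degree=4, n_samples=200):
            """Fit polynomial models x(z), y(z) to vertebral centroids and return
            sampled curve points and unit tangents; CPR cross-sections would be
            resampled perpendicular to these tangents."""
            x, y, z = centroids.T
            px, py = np.polyfit(z, x, degree), np.polyfit(z, y, degree)
            zs = np.linspace(z.min(), z.max(), n_samples)
            curve = np.column_stack([np.polyval(px, zs), np.polyval(py, zs), zs])
            tangents = np.column_stack([np.polyval(np.polyder(px), zs),
                                        np.polyval(np.polyder(py), zs),
                                        np.ones_like(zs)])
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            return curve, tangents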

  9. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

    Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. The results indicate that the proposed method is better in terms of the efficiency analysed. Conclusion. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  10. 2D/3D Image Registration using Regression Learning.

    Science.gov (United States)

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-09-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: shape space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used to estimate the model parameters from the 2D projection intensity residues during registration. The method's application to image-guided radiation therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time 2D projection image or a small set thereof.
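
    The learned linear operators are, in essence, least-squares regression matrices mapping projection intensity residues to parameter updates. A single-scale sketch of that idea is shown below; CLARET itself uses multi-scale regressions and DRR generation, which are omitted here, and the residue callback and all names are hypothetical.

        import numpy as np

        def learn_linear_operator(residues, params):
            """Least-squares fit of a matrix M with params ≈ residues @ M,
            from offline training pairs (one sample per row)."""
            M, *_ = np.linalg.lstsq(residues, params, rcond=None)
            return M

        def register(measured_residue_fn, M, p0, n_iters=10):
            """Iteratively update motion/deformation parameters from the current
            2D intensity residue, in the spirit of a single-scale CLARET loop."""
            p = np.asarray(p0, float)
            for _ in range(n_iters):
                r = measured_residue_fn(p)   # residue between target projection and
                p = p + r @ M                # the DRR of the current 3D estimate
            return p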

  11. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    Science.gov (United States)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAVs). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in an experiment are kept, the process of developing the stock-taking documentation for a historical building moves from the standards of analogue to digital technology with considerably reduced cost.

  13. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed overcoming some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.

  14. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2

  15. An Integrated Service Platform for Remote Sensing Image 3D Interpretation and Draughting based on HTML5

    Science.gov (United States)

    LIU, Yiping; XU, Qing; ZhANG, Heng; LV, Liang; LU, Wanjie; WANG, Dandi

    2016-11-01

    The purpose of this paper is to solve the problems of traditional single-purpose systems for interpretation and draughting, such as inconsistent standards, limited functionality, dependence on plug-ins, closed architecture, and a low level of integration. On the basis of a comprehensive analysis of the composition of target elements, map representation, and the features of similar systems, a 3D interpretation and draughting integrated service platform for multi-source, multi-scale, and multi-resolution geospatial objects is established based on HTML5 and WebGL. The platform not only integrates object recognition, access, retrieval, three-dimensional display, and test evaluation, but also supports the collection, transfer, storage, refreshing, and maintenance of geospatial object data, and shows promising prospects and potential for growth.

  16. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  17. Imaging and 3D morphological analysis of collagen fibrils.

    Science.gov (United States)

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent boom in multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The collaboration of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distribution of orientation and radius of the fibrils over the 3D image. They also bring a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering, because biomimetic 3D organizations and density are required for better integration of implants. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.

  18. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images.

    Science.gov (United States)

    Jonić, S; Thévenaz, P; Zheng, G; Nolte, L-P; Unser, M

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
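
    A heavily simplified sketch of the same idea, minimizing a least-squares intensity dissimilarity between measured images and projections of the volume over the six rigid-body parameters, is given below. It uses a parallel-beam sum projection along a single fixed axis, linear interpolation and a generic optimizer instead of the paper's cone-beam ray integration, cubic B-spline model and analytic gradient; the parametrization and all names are assumptions.

        import numpy as np
        from scipy.ndimage import affine_transform
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def project(volume, params):
            """Toy DRR: apply a rigid transform about the volume center (rotation
            angles in radians plus a translation) and sum along one axis."""
            rx, ry, rz, tx, ty, tz = params
            R = Rotation.from_euler('xyz', [rx, ry, rz]).as_matrix()
            center = (np.array(volume.shape) - 1) / 2.0
            offset = center - R @ center + np.array([tx, ty, tz])
            moved = affine_transform(volume, R, offset=offset, order=1)
            return moved.sum(axis=0)

        def register(volume, carm_images, p0):
            """Least-squares fit of the rigid parameters so that projections of
            the volume match the images (numerical gradients, single view axis)."""
            def residuals(p):
                return np.concatenate([(project(volume, p) - img).ravel()
                                       for img in carm_images])
            return least_squares(residuals, np.asarray(p0, float)).x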

  19. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    Full Text Available This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  20. Photogrammetric 3d Building Reconstruction from Thermal Images

    Science.gov (United States)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  1. ONLY IMAGE BASED FOR THE 3D METRIC SURVEY OF GOTHIC STRUCTURES BY USING FRAME CAMERAS AND PANORAMIC CAMERAS

    OpenAIRE

    Pérez Ramos, A.; G. Robleda Prieto

    2016-01-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization me...

  2. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  3. Image-based analysis of the internal microstructure of bone replacement scaffolds fabricated by 3D printing

    Science.gov (United States)

    Irsen, Stephan H.; Leukers, Barbara; Bruckschen, Björn; Tille, Carsten; Seitz, Hermann; Beckmann, Felix; Müller, Bert

    2006-08-01

    Rapid prototyping, and especially 3D printing, allows the generation of complex porous ceramic scaffolds directly from powders. Furthermore, these technologies allow manufacturing patient-specific implants of centimeter size with an internal pore network to mimic bony structures including vascularization. Besides the biocompatibility properties of the base material, a high degree of open, interconnected porosity is crucial for the success of the synthetic bone graft. Pores with diameters between 100 and 500 μm are the prerequisite for vascularization to supply the cells with nutrients and oxygen, because simple diffusion transport is ineffective. The quantification of porosity on the macro-, micro-, and nanometer scale using well-established techniques such as Hg-porosimetry and electron microscopy is restricted. Alternatively, we have applied synchrotron-radiation-based micro computed tomography (SRμCT) to determine the porosity with high precision and to validate the macroscopic internal structure of the scaffold. We report on the difficulties in intensity-based segmentation for nanoporous materials but we also elucidate the power of SRμCT in the quantitative analysis of the pores at the different length scales.

  4. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    Science.gov (United States)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

    Singosari temple is one of the cultural heritage buildings in East Java, Indonesia; it was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m, and RMSEz = 0.049 m, which represent a 0.071 m displacement in 3D space. In addition, the mean difference of length measurements of the object is 0.057 m. With this accuracy, this method can be used to map the site up to 1:237 scale. Although the accuracy level is still in centimeters, the combined aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
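
    The reported accuracy figures follow from standard per-axis RMSE over check points; the short sketch below shows the computation and confirms that the quoted 3D displacement is the root sum of squares of the per-axis values.

        import numpy as np

        def rmse_per_axis(model_xyz, reference_xyz):
            """Per-axis RMSE of georeferenced model coordinates against check
            points, plus the combined 3D displacement."""
            diff = np.asarray(model_xyz) - np.asarray(reference_xyz)
            rmse = np.sqrt((diff ** 2).mean(axis=0))          # RMSEx, RMSEy, RMSEz
            rmse_3d = np.sqrt((rmse ** 2).sum())              # combined 3D RMSE
            return rmse, rmse_3d

        # With the reported per-axis values of 0.041/0.031/0.049 m the combined
        # displacement is sqrt(0.041**2 + 0.031**2 + 0.049**2) ≈ 0.071 m.
        print(np.sqrt(0.041**2 + 0.031**2 + 0.049**2))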

  5. A system for finding a 3D target without a 3D image

    Science.gov (United States)

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
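
    The triangulation step can be illustrated with a standard linear (DLT) least-squares solution from the known projection matrices; note that this minimizes an algebraic error rather than the 2D reprojection distance described above, so it is only a sketch of the idea.

        import numpy as np

        def triangulate(projection_matrices, image_points):
            """Linear least-squares (DLT) triangulation of one 3D target from its
            2D locations in several tracked, calibrated views.
            projection_matrices: list of 3x4 matrices; image_points: list of (u, v)."""
            rows = []
            for P, (u, v) in zip(projection_matrices, image_points):
                rows.append(u * P[2] - P[0])
                rows.append(v * P[2] - P[1])
            A = np.asarray(rows)
            # Homogeneous solution: right singular vector of the smallest singular value.
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]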

  6. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  7. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Science.gov (United States)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the reconstructed 3D positions using the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X

  8. Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: an anatomofunctional transgenic mouse brain imaging study.

    Science.gov (United States)

    Lebenberg, J; Hérard, A-S; Dubois, A; Dauguet, J; Frouin, V; Dhenain, M; Hantraye, P; Delzescaux, T

    2010-07-01

    Murine models are commonly used in neuroscience to improve our knowledge of disease processes and to test drug effects. To accurately study neuroanatomy and brain function in small animals, histological staining and ex vivo autoradiography remain the gold standards to date. These analyses are classically performed by manually tracing regions of interest, which is time-consuming. For this reason, only a few 2D tissue sections are usually processed, resulting in a loss of information. We therefore proposed to match a 3D digital atlas with previously 3D-reconstructed post mortem data to automatically evaluate morphology and function in mouse brain structures. We used a freely available MRI-based 3D digital atlas derived from C57Bl/6J mouse brain scans (9.4T). The histological and autoradiographic volumes used were obtained from a preliminary study in APP(SL)/PS1(M146L) transgenic mice, models of Alzheimer's disease, and their control littermates (PS1(M146L)). We first deformed the original 3D MR images to match our experimental volumes. We then applied deformation parameters to warp the 3D digital atlas to match the data to be studied. The reliability of our method was qualitatively and quantitatively assessed by comparing atlas-based and manual segmentations in 3D. Our approach yields faster and more robust results than standard methods in the investigation of post mortem mouse data sets at the level of brain structures. It also constitutes an original method for the validation of an MRI-based atlas using histology and autoradiography as anatomical and functional references, respectively.

  9. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...

  10. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    Science.gov (United States)

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  11. 3D Simulation of Lunar Surface Based on Single Image

    Institute of Scientific and Technical Information of China (English)

    徐鹍; 周杨; 滕飞; 李建胜

    2012-01-01

    A fast method for 3D reconstruction and illumination simulation of the lunar surface from a single image is proposed, based on the Shape from Shading (SFS) principle. Using the 3D cue remaining in a single 2D image, its grey-level information, the 3D lunar surface is rapidly reconstructed by SFS and then rendered with an improved Hapke illumination model. Experimental results show that, within the permitted precision, this method can rapidly reconstruct and simulate the 3D lunar surface.

  12. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  13. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    Science.gov (United States)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close-range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been decided to apply it to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  14. ONLY IMAGE BASED FOR THE 3D METRIC SURVEY OF GOTHIC STRUCTURES BY USING FRAME CAMERAS AND PANORAMIC CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Pérez Ramos

    2016-06-01

    Full Text Available The indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time consuming in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been decided to apply it to the apse since it is considered one of the most complex elements of Gothic churches and it could be extended to the whole building.

  15. 3D micro-particle image modeling and its application in measurement resolution investigation for visual sensing based axial localization in an optical microscope

    Science.gov (United States)

    Wang, Yuliang; Li, Xiaolai; Bi, Shusheng; Zhu, Xiaofeng; Liu, Jinhua

    2017-01-01

    Visual sensing based three-dimensional (3D) particle localization in an optical microscope is important for both fundamental studies and practical applications. Compared with lateral (X and Y) localization, it is more challenging to achieve a high-resolution measurement of the axial particle location. In this study, we aim to investigate the effect of different factors on axial measurement resolution through an analytical approach. Analytical models were developed to simulate 3D particle imaging in an optical microscope. A radius vector projection method was applied to convert the simulated particle images into radius vectors. With the obtained radius vectors, a term called the axial changing rate was proposed to evaluate the measurement resolution of axial particle localization. Experiments were also conducted for comparison with the results obtained through simulation. Moreover, with the proposed method, the effects of particle size on measurement resolution were discussed. The results show that the method provides an efficient approach to investigate the resolution of axial particle localization.

  16. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    Science.gov (United States)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture automatically attached. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step is then taken to combine the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back to the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm successfully generates a minimum set of view angles. The set of pictures taken at these view angles guarantees that each model face shows up in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent in correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show the exact same topology and

  17. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  18. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with a field of view of 7 cm for each eye and a focal length of 25 cm, showing images produced with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of an MRI or CT image, showing results of a 3D angioresonance image.

  19. Insulator icing monitoring based on 3D image reconstruction

    Institute of Scientific and Technical Information of China (English)

    杨浩; 吴畏

    2013-01-01

    An online method for monitoring insulator icing based on image-based 3D reconstruction is proposed. The method applies computer binocular vision: two images of an ice-covered insulator are taken by two cameras at different positions, and, after camera calibration, the 3D coordinates of the iced insulator are computed from the parallax between the two images and its 3D model is reconstructed. The thickness and weight of the ice are then calculated from the 3D point cloud model. The proposed method was applied to 3D reconstruction and comparative analysis of several sets of field images; the results show that it can accurately calculate the thickness and weight of insulator icing and can also reflect the distribution of icing thickness along the insulator.
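
    The underlying geometry is standard rectified stereo: once the cameras are calibrated, depth follows from disparity as Z = f·B/d. A minimal sketch of that relation (with illustrative numbers, not values from the paper) is shown below; ice thickness would then follow from comparing the reconstructed iced profile with the bare insulator geometry.

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Depth of a matched point from the disparity between two rectified,
            calibrated views: Z = f * B / d (pinhole stereo model)."""
            return focal_px * baseline_m / disparity_px

        # Example: 1200 px focal length, 0.5 m baseline, 40 px disparity -> 15 m.
        print(depth_from_disparity(40.0, 1200.0, 0.5))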

  20. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Kim, Jae Gon; Lee, Soo Yeol; Hasan, Md Kamrul

    2011-10-07

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
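
    As a rough illustration of the detection idea (building a template of detector responses from their statistics and flagging outliers), the sketch below uses a median-filter residual with a robust threshold; it is a stand-in for the paper's non-causal derivative-based detector, and all names and threshold values are assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def find_defective_pixels(flat_field_frames, threshold=4.0):
            """Flag detector elements whose mean flat-field response deviates
            strongly from their local neighbourhood."""
            template = np.mean(flat_field_frames, axis=0)      # per-pixel mean response
            local = median_filter(template, size=5)
            residual = template - local
            # Robust noise estimate via the median absolute deviation.
            robust_sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
            return np.abs(residual) > threshold * max(robust_sigma, 1e-12)

        # Flagged elements could then be corrected in every projection, e.g. by
        # inpainting from valid neighbours, before 3D reconstruction.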

  1. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Anas, Emran Mohammad Abu; Hasan, Md Kamrul [Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka-1000 (Bangladesh); Kim, Jae Gon; Lee, Soo Yeol, E-mail: khasan@eee.buet.ac.b [Department of Biomedical Engineering, Kyung Hee University, Kyungki 446-701 (Korea, Republic of)

    2011-10-07

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.

  2. Review: Polymeric-Based 3D Printing for Tissue Engineering.

    Science.gov (United States)

    Wu, Geng-Hsi; Hsu, Shan-Hui

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.

  3. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.

  4. Identifying positioning-based attacks against 3D printed objects and the 3D printing process

    Science.gov (United States)

    Straub, Jeremy

    2017-05-01

    Zeltmann, et al. demonstrated that structural integrity and other quality damage to objects can be caused by changing its position on a 3D printer's build plate. On some printers, for example, object surfaces and support members may be stronger when oriented parallel to the X or Y axis. The challenge presented by the need to assure 3D printed object orientation is that this can be altered in numerous places throughout the system. This paper considers attack scenarios and discusses where attacks that change printing orientation can occur in the process. An imaging-based solution to combat this problem is presented.

  5. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D

  6. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout since it implies the way how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image-level. Using latent variables, we implicitly model the sublevel semantics of the image, which enrich the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  7. A new image reconstruction method for 3-D PET based upon pairs of near-missing lines of response

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456-8611 (Japan) and Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474-8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644-0023 (Japan)

    2007-02-01

    We formerly introduced a new image reconstruction method for three-dimensional positron emission tomography, which is based upon pairs of near-missing lines of response. This method uses an elementary geometric property of lines of response, namely that two lines of response which originate from radioactive isotopes located within a sufficiently small voxel, will lie within a few millimeters of each other. The effectiveness of this method was verified by performing a simulation using GATE software and a digital Hoffman phantom.

  8. Metal-based nanorods as molecule-specific contrast agents for reflectance imaging in 3D tissues

    OpenAIRE

    Javier, David J.; Nitin, Nitin; Roblyer, Darren M.; Richards-Kortum, Rebecca

    2008-01-01

    Anisotropic metal-based nanomaterials have been proposed as potential contrast agents due to their strong surface plasmon resonance. We evaluated the contrast properties of gold, silver, and gold-silver hybrid nanorods for molecular imaging applications in three-dimensional biological samples. We used diffuse reflectance spectroscopy to predict the contrast properties of different types of nanorods embedded in biological model systems of increasing complexity. The predicted contrast propertie...

  9. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.

    Science.gov (United States)

    Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

    2008-08-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the range 89%-97% and 0.2%-0.7%. The method requires 1-2 minutes of operator time and 6-7 min of computer time per data set, which makes it significantly more efficient than live wire-the method currently available for the task that can be used routinely.

  10. Image-Based 3d Reconstruction Data as AN Analysis and Documentation Tool for Architects: the Case of Plaka Bridge in Greece

    Science.gov (United States)

    Kouimtzoglou, T.; Stathopoulou, E. K.; Agrafiotis, P.; Georgopoulos, A.

    2017-02-01

    Μodern advances in the field of image-based 3D reconstruction of complex architectures are valuable tools that may offer the researchers great possibilities integrating the use of such procedures in their studies. In the same way that photogrammetry was a well-known useful tool among the cultural heritage community for years, the state of the art reconstruction techniques generate complete and easy to use 3D data, thus enabling engineers, architects and other cultural heritage experts to approach their case studies in an exhaustive and efficient way. The generated data can be a valuable and accurate basis upon which further plans and studies will be drafted. These and other aspects of the use of image-based 3D data for architectural studies are to be presented and analysed in this paper, based on the experience gained from a specific case study, the Plaka Bridge. This historic structure is of particular interest, as it was recently lost due to extreme weather conditions and serves as a strong proof that preventive actions are of utmost importance in order to preserve our common past.

  11. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to give a suitable real time implement for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system of a single detector. The system presented here proposes to employ a chirp radar method with Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the I-F frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  12. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Science.gov (United States)

    Seniutinas, Gediminas; Balčytis, Armandas; Reklaitis, Ignas; Chen, Feng; Davis, Jeffrey; David, Christian; Juodkazis, Saulius

    2017-06-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1-100 nm resolution window is required for future manufacturing of devices. This level of precision is critically in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  13. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM, focused ion beam (FIB milling/imaging, and atomic force microscopy (AFM. Fabrication and in situ imaging of materials undergoing a three-dimensional (3D nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of precision is critically in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  14. Vhrs Stereo Images for 3d Modelling of Buildings

    Science.gov (United States)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents the project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for buildings creation from 3D photogrammetric model based on the Ikonos stereo images. The model was reconstructed with photogrammetric workstation - Summit Evolution combined with ArcGIS 3D platform. Accuracy of 3D model was significantly improved by use for orientation of pair of satellite images the stereo measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS for model reconstructed on base of the RPC coefficients only were 16,6 m, 2,7 m and 47,4 m, for X, Y and Z coordinates, respectively. By addition of 5 control points the RMS were improved to 0,7 m, 0,7 m 1,0 m, where the best results were achieved when RMS were estimated from deviations in 17 check points (with 5 control points)and amounted to 0,4 m, 0,5 m and 0,6 m, for X, Y, and Z respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards they were used for 3D modelling of buildings in Google SketchUp software. The final results were compared with the reference data obtained from other sources. It was found that the shape of buildings (in concern to the number of details) had been reconstructed on level of LoD1, when the accuracy of these models corresponded to the level of LoD2.

  15. VHRS STEREO IMAGES FOR 3D MODELLING OF BUILDINGS

    Directory of Open Access Journals (Sweden)

    A. Bujakiewicz

    2012-07-01

    Full Text Available The paper presents the project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for buildings creation from 3D photogrammetric model based on the Ikonos stereo images. The model was reconstructed with photogrammetric workstation – Summit Evolution combined with ArcGIS 3D platform. Accuracy of 3D model was significantly improved by use for orientation of pair of satellite images the stereo measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS for model reconstructed on base of the RPC coefficients only were 16,6 m, 2,7 m and 47,4 m, for X, Y and Z coordinates, respectively. By addition of 5 control points the RMS were improved to 0,7 m, 0,7 m 1,0 m, where the best results were achieved when RMS were estimated from deviations in 17 check points (with 5 control pointsand amounted to 0,4 m, 0,5 m and 0,6 m, for X, Y, and Z respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards they were used for 3D modelling of buildings in Google SketchUp software. The final results were compared with the reference data obtained from other sources. It was found that the shape of buildings (in concern to the number of details had been reconstructed on level of LoD1, when the accuracy of these models corresponded to the level of LoD2.

  16. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  17. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.

  18. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Directory of Open Access Journals (Sweden)

    Chia-Han Chu

    Full Text Available This work presents a novel detection method for three-dimensional domain swapping (DS, a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D image transformations of secondary structural elements (SSEs, specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its

  19. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Science.gov (United States)

    Chu, Chia-Han; Lo, Wei-Cheng; Wang, Hsin-Wei; Hsu, Yen-Chu; Hwang, Jenn-Kang; Lyu, Ping-Chiang; Pai, Tun-Wen; Tang, Chuan Yi

    2010-10-14

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its hinge loop

  20. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    Science.gov (United States)

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

    In this paper, a new millimeter wave 3D imaging radar is proposed. The user just needs to move the radar along a circular track, and high resolution 3D imaging can be generated. The proposed radar uses the movement of itself to synthesize a large aperture in both the azimuth and elevation directions. It can utilize inverse Radon transform to resolve 3D imaging. To improve the sensing result, the compressed sensing approach is further investigated. The simulation and experimental result further illustrated the design. Because a single transceiver circuit is needed, a light, affordable and high resolution 3D mmWave imaging radar is illustrated in the paper. PMID:28629140

  1. A Legendre orthogonal moment based 3D edge operator

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hui; SHU Huazhong; LUO Limin; J. L. Dillenseger

    2005-01-01

    This paper presents a new 3D edge operator based on Legendre orthogonal moments. This operator can be used to extract the edge of 3D object in any window size,with more accurate surface orientation and more precise surface location. It also has full geometry meaning. Process of calculation is considered in the moment based method.We can greatly speed up the computation by calculating out the masks in advance. We integrate this operator into our rendering of medical image data based on ray casting algorithm. Experimental results show that it is an effective 3D edge operator that is more accurate in position and orientation.

  2. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Sung; Park, Kwang Suk [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea); Seoul National University College of Medicine, Department of Biomedical Engineering, Seoul (Korea); Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul [Seoul National University College of Medicine, Department of Otolaryngology, Head and Neck Surgery, Seoul (Korea); Lee, Dong Soo; Jeong, Jae Min [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea)

    2005-06-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-{sup 14}C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in

  3. From medical imaging data to 3D printed anatomical models.

    Science.gov (United States)

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computer Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.

  4. Morphometrics, 3D Imaging, and Craniofacial Development

    Science.gov (United States)

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  5. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  6. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today"s DirectX/OpenGL optimized graphics cards together with adapting new and creative imaging tools found in software products such as Adobe Photoshop, provide a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the Direct X 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth map altered textured surfaces, perspective stereoscopic cameras with both visible frustums and zero parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  7. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown in this paper, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  8. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro with focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal apoptosis being necessary and sufficient for initiating lumen formation, but cell polarization being the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicates that these acini grew faster than the cells comprising it. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  9. Multi-level spherical moments based 3D model retrieval

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; HE Yuan-jun

    2006-01-01

    In this paper a novel 3D model retrieval method that employs multi-level spherical moment analysis and relies on voxelization and spherical mapping of the 3D models is proposed. For a given polygon-soup 3D model, first a pose normalization step is done to align the model into a canonical coordinate frame so as to define the shape representation with respect to this orientation. Afterward we rasterize its exterior surface into cubical voxel grids, then a series of homocentric spheres with their center superposing the center of the voxel grids cut the voxel grids into several spherical images. Finally moments belonging to each sphere are computed and the moments of all spheres constitute the descriptor of the model. Experiments showed that Euclidean distance based on this kind of feature vector can distinguish different 3D models well and that the 3D model retrieval system based on this arithmetic yields satisfactory performance.

  10. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model(ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image will be segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result will be obtained by fusing segmentation results from all atlas spaces via a multiclassifier fusion technique. Specially, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement the region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 m, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  11. Image quality assessment of LaBr{sub 3}-based whole-body 3D PET scanners: a Monte Carlo evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Surti, S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Karp, J S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Muehllehner, G [Philips Medical Systems, Philadelphia, PA 19104 (United States)

    2004-10-07

    The main thrust for this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr{sub 3} detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr{sub 3} has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr{sub 3} without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr{sub 3} are the lower stopping power and photo-fraction which affect both sensitivity and spatial resolution. However, in 3D PET imaging where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr{sub 3} scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr{sub 3} scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.

  12. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

    Tato práce se zabývá 3D rekonstrukcí z obrazových dat. Jsou popsány možnosti a přístupy k optickému skenování. Ruční optický 3D skener se skládá z kamery a zdroje čárového laseru, který je vzhledem ke kameře upevněn pod určitým úhlem. Je navržena vhodná podložka se značkami a je popsán algoritmus pro jejich real-time detekci. Po detekci značek lze vypočítat pozici a orientaci kamery. Na závěr je popsána detekce laseru a postup při výpočtu bodů na povrchu objektu pomocí triangulace. This pa...

  13. Optimal Point Spread Function Design for 3D Imaging

    Science.gov (United States)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem – finding the pupil-plane phase pattern that would yield a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889

  14. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications become rather popular in these days. With light field optics or light field thesis, real 3D space can be described in 2D plane as 4D data, which we call as light field data. This process can be divided in two procedures. First, real3D scene is optically reduced with imaging lens. Second, this optically reduced 3D image is encoded into light field data. In later procedure we can say that 3D information is encoded onto a plane as 2D data by lens array plate. This transformation is reversible and acquired light field data can be decoded again into 3D image with the arrayed lens plate. "Refocusing" (focusing image on your favorite point after taking a picture), light-field camera's most popular function, is some kind of sectioning process from encoded 3D data (light field data) to 2D image. In this paper at first I show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which real 3D image is reconstructed. In second I explain our data processing method whose arithmetic operation is performed not in Fourier domain but in real domain. Then our 3D display system is characterized by a few features; reconstructed image is of finer resolutions than density of arrayed lenses and it is not necessary to adjust lens array plate to flat display on which light field data is displayed.

  15. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.

  16. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc.Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated.Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data.Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique.Research type – general review.

  17. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful dis- criminator. The two-dimensional data from an imaging systems fails to provide target dis- tance and three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems; an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination de- sign, and software modifications possible to maximize information gathering capability are discussed.

  18. Medically inoperable endometrial cancer in patients with a high body mass index (BMI): Patterns of failure after 3-D image-based high dose rate (HDR) brachytherapy

    DEFF Research Database (Denmark)

    Acharya, Sahaja; Esthappan, Jacqueline; Badiyan, Shahed

    2016-01-01

    BACKGROUND AND PURPOSE: High BMI is a reason for medical inoperability in patients with endometrial cancer in the United States. Definitive radiation is an alternative therapy for these patients; however, data on patterns of failure after definitive radiotherapy are lacking. We describe...... the patterns of failure after definitive treatment with 3-D image-based high dose rate (HDR) brachytherapy for medically inoperable endometrial cancer. MATERIALS AND METHODS: Forty-three consecutive patients with endometrial cancer FIGO stages I-III were treated definitively with HDR brachytherapy...

  19. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

    This paper presents a dense ray tracing reconstruction technique for a single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio and the effect of the velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.

  20. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.

  1. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    Science.gov (United States)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

    In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role for simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point taking into account 3D wavelet-encoded contextual image features, which are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages associated with snakes and from the ones associated with machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
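
    A minimal sketch of the machine-learning image force described above, with scikit-learn standing in for the off-line Random Forest Regression model; the feature extraction, the 3D wavelet encoding and all names here are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def train_distance_regressor(features, signed_distances):
          # features: (n_points x n_features) contextual image features per candidate point
          # signed_distances: known distances to the true vessel wall (training data)
          rf = RandomForestRegressor(n_estimators=100, random_state=0)
          rf.fit(features, signed_distances)
          return rf

      def external_force(rf, candidate_features, surface_normals):
          # Push each candidate surface point along its normal by the predicted distance.
          d = rf.predict(candidate_features)       # estimated distance to the true wall
          return surface_normals * d[:, None]      # per-point force vectors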

  2. Accuracy of 3D cartilage models generated from MR images is dependent on cartilage thickness: laser scanner based validation of in vivo cartilage.

    Science.gov (United States)

    Koo, Seungbum; Giori, Nicholas J; Gold, Garry E; Dyrby, Chris O; Andriacchi, Thomas P

    2009-12-01

    Cartilage morphology change is an important biomarker for the progression of osteoarthritis. The purpose of this study was to assess the accuracy of in vivo cartilage thickness measurements from MR image-based 3D cartilage models using a laser scanning method and to test if the accuracy changes with cartilage thickness. Three-dimensional tibial cartilage models were created from MR images (in-plane resolution of 0.55 mm and thickness of 1.5 mm) of osteoarthritic knees of ten patients prior to total knee replacement surgery using a semi-automated B-spline segmentation algorithm. Following surgery, the resected tibial plateaus were laser scanned and made into 3D models. The MR image and laser-scan based models were registered to each other using a shape matching technique. The thicknesses were compared pointwise over the whole surface. A linear mixed-effects model was used for statistical testing. On average, taking account of individual variations, the thickness measurements in MRI were overestimated in thinner (<2.5 mm) regions. Cartilage thicker than 2.5 mm was accurately predicted in MRI, though the thick cartilage in the central regions was underestimated. The accuracy of thickness measurements in the MRI-derived cartilage models systematically varied according to native cartilage thickness.

  3. A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models

    Science.gov (United States)

    Yu, Kai; Chen, Xinjian; Shi, Fei; Zhu, Weifang; Zhang, Bin; Xiang, Dehui

    2016-03-01

    Positron Emission Tomography (PET) and Computed Tomography (CT) have been widely used in clinical practice for radiation therapy. Most existing methods use only one image modality, either PET or CT, and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this paper, a novel 3D graph cut method is proposed, which integrates Gaussian Mixture Models (GMMs) into the graph cut framework. We also employed the random walk method as an initialization step to provide object seeds for the improvement of the graph cut based segmentation on PET and CT images. The constructed graph consists of two sub-graphs and a special link between the sub-graphs which penalizes differences between the segmentations of the two modalities. Finally, the segmentation problem is solved by the max-flow/min-cut method. The proposed method was tested on 20 patients' PET-CT images, and the experimental results demonstrated the accuracy and efficiency of the proposed algorithm.
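
    A small sketch of how GMM-based data terms for such a graph cut could be formed, assuming intensity samples from user or random-walk seeds; the graph construction, the inter-modality link and the max-flow solver themselves are not shown, and the names are illustrative.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def gmm_unary_costs(voxels, fg_seeds, bg_seeds, n_components=3):
          # Fit one GMM to foreground (tumour) seed intensities and one to background
          # seeds, then use negative log-likelihoods as the data (t-link) costs.
          fg = GaussianMixture(n_components).fit(fg_seeds.reshape(-1, 1))
          bg = GaussianMixture(n_components).fit(bg_seeds.reshape(-1, 1))
          v = voxels.reshape(-1, 1)
          cost_fg = -fg.score_samples(v)           # cost of labelling a voxel as tumour
          cost_bg = -bg.score_samples(v)           # cost of labelling it as background
          return cost_fg, cost_bg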

  4. 3D Beam Reconstruction by Fluorescence Imaging

    CERN Document Server

    Radwell, Neal; Franke-Arnold, Sonja

    2013-01-01

    We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 x 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all 3 dimensions, is fast and can be completely automated. The technique has applications in areas which require complex beam shapes, such as optical tweezers, atom trapping and pattern formation.

  5. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    Full Text Available While 3D technologies provide an accurate and relevant record of parietal art, they also offer particularly interesting applications for its analysis. Through point-cloud processing and simulations, they allow a wide range of manipulations for both the observation and the study of parietal works. In particular, they permit a refined perception of their volumetry and become very useful shape-comparison tools for reconstructing parietal chronologies and for apprehending analogies between different sites. These analytical tools are illustrated here by the original work carried out on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l’Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  6. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    Science.gov (United States)

    Sun, Yongle; Li, Q. M.; Withers, P. J.

    2015-09-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams make it particularly difficult to accurately evaluate the strain-rate sensitivity in experiments, while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) images is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of compressive strength (collapse stress) and tensile strength (0.2% offset yield point) is evaluated when considering different cell-wall material properties. The numerical results show that the rate dependence of the cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with a single fracture process zone occurring in the tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to

  7. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    Science.gov (United States)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method that is being developed by ARESlab of Rome's La Sapienza University for the documentation and the representation of the archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in Campania region along Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from local necropolis that contained more than 100.000 artifacts collected in "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project regards the archaeological area of Insula Volusiana placed in Forum Boarium close to Sant'Omobono sacred area. During the studies photographs were taken by Canon EOS 5D Mark II and Canon EOS 600D cameras. 3D model and meshes were created in Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner is used to dense data collection of archaeological area of Rome and to make a metric comparison between range-based and image-based techniques. In these projects the IBM as a low-cost technique proved to be a high accuracy improvement if planned correctly and it shown also how it helps to obtain a relief of complex strata and architectures compared to traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially for the "destructive" character of an excavation. The presented methodology is suitable for the 3D registration and the accuracy of the methodology improved also the scientific value.

  8. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  9. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions and enables the measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system was comprised of a high resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer synchronized control, a time-resolved sequence of 20 mm thick high resolution volume images of porcine aortic specimens during one simulated cardiac cycle were obtained. Performance evaluation of the scanners illustrated that tomographic images can be obtained with resolution as high as 3.2 mm-1 with only a 9% decrease in the resolution for objects moving at velocities of 1 cm/s in 2D mode and static spatial resolution of 3.55 mm-1 with only a 14% decrease in the resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system for imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal direction produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the Dynamic CT systems were not statistically different (p less than 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  10. Joint calibration of 3D resist image and CDSEM

    Science.gov (United States)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  11. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used for the extraction of 3D data in a scene. Main application areas are architecture, archeology and city planning. Though the raw scanner data has grayscale values, the 3D data can be merged with colour camera image values to get a textured 3D model of the scene. These devices are also able to take a reliable 3D copy of objects with a high level of accuracy. Therefore, the scanned scenes can be use...

  12. 3D Ground Penetrating Imaging Radar

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  13. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
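
    As a toy illustration of the rendering step (not the wobbled-splatting or GPU raycasting algorithms of the paper), a parallel-beam DRR can be mimicked by rotating the CT volume and summing attenuation along the beam direction. All names and the parallel-beam simplification are assumptions for the example.

      import numpy as np
      from scipy.ndimage import rotate

      def simple_drr(ct_volume, angle_deg):
          # Rotate the volume about one axis, then integrate along a second axis to
          # obtain a 2D projection resembling a digitally rendered radiograph.
          rotated = rotate(ct_volume, angle_deg, axes=(1, 2), reshape=False, order=1)
          return rotated.sum(axis=1)               # line integrals along the beam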

  14. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    Science.gov (United States)

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume(GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging(mMRI) or diffusion-weighted imaging(DWI)(b = 0,20,800 s/mm2) . These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software using the true tumour volume as the standard reference. The effects of tumour size and shape on GTVs measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI(P = 0.060–0.671) resulted in better accuracy than that mMRI(P < 0.001) and the formula method(P < 0.001). Furthermore, semiautomatic segmentation for DWI(intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation(ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359
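
    For context, once a segmentation mask has been exported from any of the tools above, the GTV itself reduces to counting voxels. The sketch below assumes a binary mask and known voxel spacing; it is illustrative and not part of the study's pipeline.

      import numpy as np

      def gross_tumour_volume(mask, voxel_spacing_mm):
          # mask: binary 3D segmentation; voxel_spacing_mm: (dx, dy, dz) in millimetres
          voxel_volume = np.prod(voxel_spacing_mm)     # mm^3 per voxel
          return mask.astype(bool).sum() * voxel_volume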

  15. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung exhibits a repeating transformation synchronized to the beating of the heart as an elastic body. There are discontinuities among neighboring CT images due to the beating of the heart, if no special techniques are used in taking CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are taken. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of the 3-D unnatural heart is fit to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best fitting standard heart are located at the same positions of the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an Area oriented interpolation method proposed by us is used for interpolation of transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, the same geometrical transformation method to the original projection images is proposed as a more advanced method.
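
    The sketch below shows only the generic idea of filling the gaps between neighbouring CT slices by linear blending; the area-oriented interpolation of the paper, which also handles the cardiac-motion correction, is more elaborate and is not reproduced here.

      import numpy as np

      def interpolate_between_slices(slice_a, slice_b, n_new):
          # Insert n_new linearly blended slices between two neighbouring CT slices.
          weights = np.linspace(0.0, 1.0, n_new + 2)[1:-1]
          return [(1 - w) * slice_a + w * slice_b for w in weights]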

  16. Surgeon-Based 3D Printing for Microvascular Bone Flaps.

    Science.gov (United States)

    Taylor, Erin M; Iorio, Matthew L

    2017-07-01

    Background Three-dimensional (3D) printing has developed as a revolutionary technology with the capacity to design accurate physical models in preoperative planning. We present our experience in surgeon-based design of 3D models, using home 3D software and printing technology for use as an adjunct in vascularized bone transfer. Methods Home 3D printing techniques were used in the design and execution of vascularized bone flap transfers to the upper extremity. Open source imaging software was used to convert preoperative computed tomography scans and create 3D models. These were printed in the surgeon's office as 3D models for the planned reconstruction. Vascularized bone flaps were designed intraoperatively based on the 3D printed models. Results Three-dimensional models were created for intraoperative use in vascularized bone flaps, including (1) medial femoral trochlea (MFT) flap for scaphoid avascular necrosis and nonunion, (2) MFT flap for lunate avascular necrosis and nonunion, (3) medial femoral condyle (MFC) flap for wrist arthrodesis, and (4) free fibula osteocutaneous flap for distal radius septic nonunion. Templates based on the 3D models allowed for the precise and rapid contouring of well-vascularized bone flaps in situ, prior to ligating the donor pedicle. Conclusions Surgeon-based 3D printing is a feasible, innovative technology that allows for the precise and rapid contouring of models that can be created in various configurations for pre- and intraoperative planning. The technology is easy to use, convenient, and highly economical as compared with traditional send-out manufacturing. Surgeon-based 3D printing is a useful adjunct in vascularized bone transfer. Level of Evidence Level IV.

  17. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D) computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  18. Application of 3D Morphable Models to faces in video images

    NARCIS (Netherlands)

    van Rootseler, R.T.A.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    2011-01-01

    The 3D Morphable Face Model (3DMM) has been used for over a decade for creating 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients,

  19. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition

    Directory of Open Access Journals (Sweden)

    Abdulla Al-Rawabdeh

    2016-01-01

    Full Text Available Landslides often cause economic losses, property damage, and loss of lives. Monitoring landslides using high spatial and temporal resolution imagery and the ability to quickly identify landslide regions are the basis for emergency disaster management. This study presents a comprehensive system that uses unmanned aerial vehicles (UAVs) and Semi-Global dense Matching (SGM) techniques to identify and extract landslide scarp data. The selected study area is located along a major highway in a mountainous region in Jordan, and contains creeping landslides induced by heavy rainfall. Field observations across the slope body and a deformation analysis along the highway and existing gabions indicate that the slope is active and that scarp features across the slope will continue to open and develop new tension crack features, leading to the downward movement of rocks. The identification of landslide scarps in this study was performed via a dense 3D point cloud of topographic information generated from high-resolution images captured using a low-cost UAV and a target-based camera calibration procedure for a low-cost large-field-of-view camera. An automated approach was used to accurately detect and extract the landslide head scarps based on geomorphological factors: the ratio of normalized eigenvalues (i.e., λ1/λ2 ≥ λ3) derived using principal component analysis, topographic surface roughness index values, and local-neighborhood slope measurements from the 3D image-based point cloud. Validation of the results was performed using root mean square error analysis and a confusion (error) matrix between manually digitized landslide scarps and the automated approaches. The experimental results using the fully automated 3D point-based analysis algorithms show that these approaches can effectively distinguish landslide scarps. The proposed algorithms can accurately identify and extract landslide scarps with centimeter-scale accuracy. In addition, the combination
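
    The eigenvalue-based features mentioned above can be sketched as a local PCA over each point neighbourhood; the thresholds and the exact combination of features used in the paper are site-specific and are not reproduced here.

      import numpy as np

      def scarp_features(neighbourhood):
          # neighbourhood: (N x 3) coordinates of points around the query point.
          # Returns normalized covariance eigenvalues (lam1 >= lam2 >= lam3) and a
          # simple roughness proxy (smallest eigenvalue fraction).
          centred = neighbourhood - neighbourhood.mean(axis=0)
          cov = centred.T @ centred / max(len(neighbourhood) - 1, 1)
          lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
          lam_norm = lam / max(lam.sum(), 1e-12)
          roughness = lam_norm[2]
          return lam_norm, roughness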

  20. Image-Based 3D Treatment Planning for Vaginal Cylinder Brachytherapy: Dosimetric Effects of Bladder Filling on Organs at Risk

    Energy Technology Data Exchange (ETDEWEB)

    Hung, Jennifer; Shen Sui; De Los Santos, Jennifer F. [Department of Radiation Oncology, University of Alabama Medical Center, Birmingham, AL (United States); Kim, Robert Y., E-mail: rkim@uabmc.edu [Department of Radiation Oncology, University of Alabama Medical Center, Birmingham, AL (United States)

    2012-07-01

    Purpose: To investigate the dosimetric effects of bladder filling on organs at risk (OARs) using three-dimensional image-based treatment planning for vaginal cylinder brachytherapy. Methods and Materials: Twelve patients with endometrial or cervical cancer underwent postoperative high-dose rate vaginal cylinder brachytherapy. For three-dimensional planning, patients were simulated by computed tomography with an indwelling catheter in place (empty bladder) and with 180 mL of sterile water instilled into the bladder (full bladder). The bladder, rectum, sigmoid, and small bowel (OARs) were contoured, and a prescription dose was generated for 10 to 35 Gy in 2 to 5 fractions at the surface or at 5 mm depth. For each OAR, the volume dose was defined by use of two different criteria: the minimum dose value in a 2.0-cc volume receiving the highest dose (D2cc) and the dose received by 50% of the OAR volume (D50%). International Commission on Radiation Units and Measurements (ICRU) bladder and rectum point doses were calculated for comparison. The cylinder-to-bowel distance was measured using the shortest distance from the cylinder apex to the contoured sigmoid or small bowel. Statistical analyses were performed with paired t tests. Results: Mean bladder and rectum D2cc values were lower than their respective ICRU doses. However, differences between D2cc and ICRU doses were small. Empty vs. full bladder did not significantly affect the mean cylinder-to-bowel distance (0.72 vs. 0.92 cm, p = 0.08). In contrast, bladder distention had appreciable effects on bladder and small bowel volume dosimetry. With a full bladder, the mean small bowel D2cc significantly decreased from 677 to 408 cGy (p = 0.004); the mean bladder D2cc did not increase significantly (1,179 cGy vs. 1,246 cGy, p = 0.11). Bladder distention decreased the mean D50% for both the bladder (441 vs. 279 cGy, p = 0.001) and the small bowel (168 vs. 132 cGy, p = 0.001). Rectum
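
    For reference, the two dose metrics used above can be computed directly from a dose grid and an OAR mask; the sketch below uses illustrative names and assumes a uniform voxel volume.

      import numpy as np

      def d2cc_and_d50(dose, oar_mask, voxel_volume_cc):
          # dose: 3D dose grid (Gy or cGy); oar_mask: binary mask of the organ at risk
          doses = np.sort(dose[oar_mask.astype(bool)])[::-1]      # descending dose
          n_2cc = max(1, int(round(2.0 / voxel_volume_cc)))       # voxels making up 2 cc
          d2cc = doses[min(n_2cc, doses.size) - 1]                # min dose in hottest 2 cc
          d50 = np.percentile(doses, 50)                          # dose to 50% of the OAR
          return d2cc, d50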

  1. Intensity-based registration of freehand 3D ultrasound and CT-scan images of the kidney

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, Antoine; Mozer, Pierre; Payan, Yohan; Troccaz, Jocelyne [TIMC Lab - IN3S, Faculte de Medecine, La Tronche cedex (France)

    2007-06-15

    Objectives This paper presents a method to register a pre-operative computed-tomography (CT) volume to a sparse set of intra-operative ultra-sound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information to an intra-operative coordinate system. Materials and methods The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming the reproducibility of kidney motion during breathing, and no deformation of the organ, the method consists in optimizing a rigid 6 degree of freedom transform by evaluating at each step the similarity between the set of US images and the CT volume. The correlation between CT and US images being naturally rather poor, the images were preprocessed in order to increase their similarity. Among the similarity measures formerly studied in the context of medical image registration, correlation ratio turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, namely Powell-Brent's. The resulting matching transforms are compared to a standard rigid surface registration involving segmentation, regarding both accuracy and repeatability. Results The obtained results are presented and discussed. (orig.)
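
    The Correlation Ratio itself is easy to state in code: it measures how well the fixed-image intensities are explained by binning the moving-image intensities. The sketch below is a generic implementation over overlapping voxel samples, not the authors' optimised version, and the names are ours.

      import numpy as np

      def correlation_ratio(fixed, moving, n_bins=64):
          # eta^2(fixed | moving) = 1 - (within-bin variance of fixed) / (total variance)
          fixed = np.asarray(fixed, float).ravel()
          moving = np.asarray(moving, float).ravel()
          bins = np.digitize(moving, np.linspace(moving.min(), moving.max(), n_bins))
          total_var = fixed.var() * fixed.size
          within = 0.0
          for b in np.unique(bins):
              sel = fixed[bins == b]
              within += sel.var() * sel.size
          return 1.0 - within / total_var if total_var > 0 else 0.0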

  2. Intensity-Based Registration of Freehand 3D Ultrasound and CT-scan Images of the Kidney

    CERN Document Server

    Leroy, Antoine; Payan, Yohan; Troccaz, Jocelyne

    2007-01-01

    This paper presents a method to register a pre-operative Computed-Tomography (CT) volume to a sparse set of intra-operative Ultra-Sound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information to an intra-operative coordinate system. The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming the reproducibility of kidney motion during breathing, and no deformation of the organ, the method consists in optimizing a rigid 6 Degree Of Freedom (DOF) transform by evaluating at each step the similarity between the set of US images and the CT volume. The correlation between CT and US images being naturally rather poor, the images have been preprocessed in order to increase their similarity. Among the similarity measures formerly studied in the context of medical image registration, Correlation Ratio (CR) turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, n...

  3. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
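
    A bare-bones sketch of the recursive dynamic partitioning idea follows; the statistical discriminative-power test of the paper is replaced by a placeholder, and all names are illustrative.

      import numpy as np

      def recursive_partition(volume, box, min_size=4, attrs=None):
          # volume: binary 3D array of ROI voxels; box: ((z0, z1), (y0, y1), (x0, x1)).
          # Keep a box as an attribute if it is discriminative or too small to split;
          # otherwise split it in half along its longest axis and recurse.
          if attrs is None:
              attrs = []
          (z0, z1), (y0, y1), (x0, x1) = box
          count = volume[z0:z1, y0:y1, x0:x1].sum()
          if is_discriminative(count) or min((z1 - z0, y1 - y0, x1 - x0)) <= min_size:
              attrs.append((box, count))
              return attrs
          sizes = (z1 - z0, y1 - y0, x1 - x0)
          axis = int(np.argmax(sizes))
          lo, hi = box[axis]
          mid = (lo + hi) // 2
          for half in ((lo, mid), (mid, hi)):
              child = list(box)
              child[axis] = half
              recursive_partition(volume, tuple(child), min_size, attrs)
          return attrs

      def is_discriminative(count):
          # Placeholder for the paper's statistical test of discriminative power.
          return False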

  4. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional elastodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  5. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  6. Development and evaluation of a LOR-based image reconstruction with 3D system response modeling for a PET insert with dual-layer offset crystal design

    Science.gov (United States)

    Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J.; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D.; Goertzen, Andrew L.

    2013-12-01

    In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer-edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either single-layer (SL) or multi-layer array design with or without offset crystals. For effective data organization, a direct lines of response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometry and detector designs. The full-width-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs. By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal
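
    The block-finding step reduces, in the transaxial plane, to intersecting a ray with circles at the crystal-layer radii. The sketch below solves that 2D ray-circle intersection for a unit direction vector; the full 3D ray tracing, symmetry handling and crystal indexing of the paper are not shown.

      import numpy as np

      def ray_circle_intersections(p0, d, radius):
          # p0: 2D ray origin; d: unit 2D direction; radius: cylinder radius about the axis.
          # Solve |p0 + t*d|^2 = radius^2 for t; returns the two parameters, or [] if none.
          b = 2.0 * np.dot(p0, d)
          c = np.dot(p0, p0) - radius ** 2
          disc = b * b - 4.0 * c
          if disc < 0:
              return []                            # the ray misses this crystal-layer cylinder
          sq = np.sqrt(disc)
          return [(-b - sq) / 2.0, (-b + sq) / 2.0]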

  7. Validation tests of open-source procedures for digital camera calibration and 3D image-based modelling

    OpenAIRE

    I. Toschi; Rivola, R.; Bertacchini, E; Castagnetti, C.; M. Dubbini; Capra, A.

    2013-01-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were...

  8. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. The passive methods use information contained in the images, and the active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces, molecular structures, etc., is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  9. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to tasked-based imaging with a robotic C-arm

    Science.gov (United States)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree of freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibration methods were on the order of 10^-3 mm^-1. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.

  10. SIFT algorithm-based 3D pose estimation of femur.

    Science.gov (United States)

    Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge

    2014-01-01

    To address the lack of 3D space information in the digital radiography of a patient femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize the preoperative 3D visualization of a fractured femur. Compared with pure Digital Radiography or Computed Tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs in the frontal and lateral images of the patient femur and the universal femur are obtained by using the Scale Invariant Feature Transform method. Then, the 3D pose estimation registration parameters of the femur are calculated by using the Iterative Closest Point (ICP) algorithm. Finally, the deviation between the six-degree-of-freedom parameters calculated by the proposed method and the preset posture parameters is used to evaluate registration accuracy. After registration, the rotation error is less than 1.5°, and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning.
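
    The rigid alignment underlying ICP can be illustrated with the standard SVD (Kabsch) solution for matched 3D point sets; this generic sketch is not the authors' 2D-3D pipeline, which also involves SIFT matching and iterative correspondence updates.

      import numpy as np

      def rigid_fit(src, dst):
          # Find rotation R and translation t minimizing ||R @ src_i + t - dst_i|| for
          # matched (N x 3) point sets, i.e. one ICP alignment step.
          src_c, dst_c = src.mean(0), dst.mean(0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                 # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t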

  11. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    David N. Ford (2014). Potential cost savings with 3D printing combined with 3D imaging and CPLM for fleet maintenance and revitalization. Research context and problem: learning curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  12. Imaging articular cartilage defects with 3D fat-suppressed echo planar imaging: comparison with conventional 3D fat-suppressed gradient echo sequence and correlation with histology.

    Science.gov (United States)

    Trattnig, S; Huber, M; Breitenseher, M J; Trnka, H J; Rand, T; Kaider, A; Helbich, T; Imhof, H; Resnick, D

    1998-01-01

    Our goal was to shorten examination time in articular cartilage imaging by use of a recently developed 3D multishot echo planar imaging (EPI) sequence with fat suppression (FS). We performed comparisons with 3D FS GE sequence using histology as the standard of reference. Twenty patients with severe gonarthrosis who were scheduled for total knee replacement underwent MRI prior to surgery. Hyaline cartilage was imaged with a 3D FS EPI and a 3D FS GE sequence. Signal intensities of articular structures were measured, and contrast-to-noise (C/N) ratios were calculated. Each knee was subdivided into 10 cartilage surfaces. From a total of 188 (3D EPI sequence) and 198 (3D GE sequence) cartilage surfaces, 73 and 79 histologic specimens could be obtained and analyzed. MR grading of cartilage lesions on both sequences was based on a five grade classification scheme and compared with histologic grading. The 3D FS EPI sequence provided a high C/N ratio between cartilage and subchondral bone similar to that of the 3D FS GE sequence. The C/N ratio between cartilage and effusion was significantly lower on the 3D EPI sequence due to higher signal intensity of fluid. MR grading of cartilage abnormalities using 3D FS EPI and 3D GE sequence correlated well with histologic grading. 3D FS EPI sequence agreed within one grade in 69 of 73 (94.5%) histologically proven cartilage lesions and 3D FS GE sequence agreed within one grade in 76 of 79 (96.2%) lesions. The gradings were identical in 38 of 73 (52.1%) and in 46 of 79 (58.3%) cases, respectively. The difference between the sensitivities was statistically not significant. The 3D FS EPI sequence is comparable with the 3D FS GE sequence in the noninvasive evaluation of advanced cartilage abnormalities but reduces scan time by a factor of 4.

  13. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To get an idea about the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prosthesis. Therefore we developed a registration method for fitting 3D CAD-models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prostheses models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration of the implants by the user is still necessary for the following step: the user has to perform a rough preconfiguration of both remaining prostheses models, so that the fine matching process gets a reasonable starting point. After that an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational parameters) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).
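
    The iterative fine matching can be caricatured as a per-parameter hill climb over the six pose parameters; the sketch below assumes a user-supplied matching score function and only illustrates the idea, not the gradient-based scheme of the paper.

      import numpy as np

      def fine_match(score_fn, params, step=0.1, n_iter=100):
          # params: 6-vector (3 rotations, 3 translations); score_fn evaluates the fit
          # of the prosthesis model at a given pose and is assumed to be provided.
          params = np.asarray(params, float)
          best = score_fn(params)
          for _ in range(n_iter):
              improved = False
              for i in range(params.size):
                  for delta in (+step, -step):
                      trial = params.copy()
                      trial[i] += delta
                      s = score_fn(trial)
                      if s > best:
                          best, params, improved = s, trial, True
              if not improved:
                  step *= 0.5                      # refine the step once no move helps
          return params, best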

  14. 3D imaging of hematoxylin and eosin stained thick tissues with a sub-femtoliter resolution by using Cr:forsterite-laser-based nonlinear microscopy (Conference Presentation)

    Science.gov (United States)

    Kao, Chien-Ting; Wei, Ming-Liang; Liao, Yi-Hua; Sun, Chi-Kuang

    2017-02-01

    Intraoperative assessment of excised tissues during cancer surgery is clinically important. The assessment used to be guided by the examination for residual tumor with frozen pathology, which is time-consuming to prepare and has low diagnostic accuracy. Recently, reflection confocal microscopy (RCM) and nonlinear microscopy (NLM) were demonstrated to be promising methods for surgical border assessment. Intraoperative RCM imaging may enable detection of residual tumor directly on skin cancer patients during Mohs surgery. The assessment of benign and malignant breast pathologies in fresh surgical specimens was demonstrated by NLM. Without using hematoxylin and eosin (H and E), which are the common dyes for histopathological diagnosis, RCM was proposed to image in vivo by using aluminum chloride for nuclear contrast on surgical wounds directly, while NLM was proposed to detect two-photon fluorescence nuclear contrast from acridine orange staining. In this paper, we propose and demonstrate 3D imaging of H and E stained thick tissues with a sub-femtoliter resolution by using Cr:forsterite-laser-based NLM. With a 1260 nm femtosecond Cr:forsterite laser as the excitation source, the hematoxylin will strongly enhance the third-harmonic generation (THG) signals, while eosin will emit strong fluorescence under three-photon absorption. Compared with previous works, the 1260 nm excitation light provides high penetration and low photodamage to the excised tissues, so that the possibility of performing other follow-up examinations is preserved. The THG and three-photon processes provide high nonlinearity, so that super-resolution in 3D is possible. The staining and the contrast of the imaging are also fully compatible with the current clinical standard of frozen pathology, thus facilitating the rapid intraoperative assessment of excised tissues. This work is sponsored by National Health Research Institutes and supported by National Taiwan University

  15. A 3D algorithm based on the combined inversion of Rayleigh and Love waves for imaging and monitoring of shallow structures

    Science.gov (United States)

    Pilz, Marco; Parolai, Stefano; Woith, Heiko

    2017-01-01

    SUMMARY: In recent years there has been increasing interest in the study of seismic noise interferometry as it can provide a complementary approach to active-source or earthquake-based methods for imaging and continuously monitoring the shallow structure of the Earth. This meaningful information is extracted from wavefields propagating between those receiver positions at which seismic noise was recorded. Until recently, noise-based imaging relied mostly on Rayleigh waves. However, considering similar wavelengths, a combined use of Rayleigh and Love wave tomography can succeed in retrieving velocity heterogeneities at depth due to their different sensitivity kernels. Here we present a novel one-step algorithm for simultaneously inverting Rayleigh and Love wave dispersion data aiming at identifying and describing complex 3D velocity structures. The algorithm may help to accurately and efficiently map the shear-wave velocities and the Poisson ratio of the surficial soil layers. In the high-frequency range, the scattered part of the correlation functions stabilizes sufficiently fast to provide a reliable estimate of the velocity structure, not only for imaging purposes but also allowing changes in the medium properties to be monitored. Such monitoring can be achieved with a high spatial resolution in 3D and with a time resolution as small as a few hours. In this article, we describe a recent array experiment in a volcanic environment in Solfatara (Italy) and we show that this novel approach has identified strong velocity variations at the interface between liquids and gas-dominated reservoirs, allowing the localization of a region which is highly dynamic due to the interaction between the deep convection and its surroundings.

  16. 3D measurement system based on computer-generated gratings

    Science.gov (United States)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to acquire the 3D profile of complex objects. The principle of the measurement system is based on triangulation with digital fringe projection, and the fringes are fully generated by computer. Thus the four computer-generated fringes form the data source of phase-shifting 3D profilometry. The hardware of the system includes the computer, video camera, projector, image grabber, and VGA board with two ports (one port links to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstructing module and a 3D display module. A software-based synchronizing method between grating projection and image capture is proposed. As for the nonlinear error of the captured fringes, a compensating method is introduced based on pixel-to-pixel gray correction. At the same time, a least-squares phase unwrapping is used to solve the problem of phase reconstruction by using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as weight. The system adopts an algorithm from a MATLAB toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time of the system is 3-5 s for one measurement.
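
    Assuming the four computer-generated fringes are phase-shifted by 0, pi/2, pi and 3pi/2 (a common but here assumed choice), the wrapped phase follows from the standard four-step formula; unwrapping and calibration, as described above, are separate steps.

      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          # Wrapped phase from four phase-shifted fringe images (shifts 0, pi/2, pi, 3pi/2):
          # phi = atan2(I4 - I2, I1 - I3)
          return np.arctan2(i4 - i2, i1 - i3)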

  17. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    NARCIS (Netherlands)

    Tran, Thanh N; Nguyen, Thanh T; Willemsz, Tofan A; van Kessel, Gijs; Frijlink, Henderik W; van der Voort Maarschalk, Kees

    2012-01-01

    Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas with its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational issue of

  18. SHINKEI - a novel 3D isotropic MR neurography technique: technical advantages over 3DIRTSE-based imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kasper, Jared M.; Wadhwa, Vibhor; Xi, Yin [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Scott, Kelly M. [University of Texas Southwestern Medical Center, Physical Medicine and Rehabilitation, Dallas, TX (United States); Rozen, Shai [University of Texas Southwestern Medical Center, Plastic Surgery, Dallas, TX (United States); Chhabra, Avneesh [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Johns Hopkins University, Baltimore, MD (United States)

    2015-06-01

    Technical assessment of SHINKEI pulse sequence and conventional 3DIRTSE for LS plexus MR neurography. Twenty-one MR neurography examinations of the LS plexus were performed at 3 T, using 1.5-mm isotropic 3DIRTSE and SHINKEI sequences. Images were evaluated for motion and pulsation artefacts, nerve signal-to-noise ratio, contrast-to-noise ratio, nerve-to-fat ratio, muscle-to-fat ratio, fat suppression homogeneity and depiction of LS plexus branches. Paired Student t test was used to assess differences in nerve conspicuity (p < 0.05 was considered statistically significant). ICC correlation was obtained for intraobserver performance. Four examinations were excluded due to prior spine surgery. Bowel motion artefacts, pulsation artefacts, heterogeneous fat saturation and patient motion were seen in 16/17, 0/17, 17/17, 2/17 on 3DIRTSE and 0/17, 0/17, 0/17, 1/17 on SHINKEI. SHINKEI performed better (p < 0.01) for nerve signal-to-noise, contrast-to-noise, nerve-to-fat and muscle-to-fat ratios. 3DIRTSE and SHINKEI showed all LS plexus nerve roots, sciatic and femoral nerves. Smaller branches including obturator, lateral femoral cutaneous and iliohypogastric nerves were seen in 10/17, 5/17, 1/17 on 3DIRTSE and 17/17, 16/17, 7/17 on SHINKEI. Intraobserver reliability was excellent. SHINKEI MRN demonstrates homogeneous and superior fat suppression with increased nerve signal- and contrast-to-noise ratios resulting in better conspicuity of smaller LS plexus branches. (orig.)

  19. 3D reconstruction, visualization, and measurement of MRI images

    Science.gov (United States)

    Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

    1999-03-01

    This paper primarily focuses on taking 2D medical image data, which often come in as Magnetic Resonance slices, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images alone can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed and neural-network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, more generally, to use all the 3D interactive tools that can help plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning and in time and cost reduction.

  20. Parallel computing helps 3D depth imaging, processing

    Energy Technology Data Exchange (ETDEWEB)

    Nestvold, E. O. [IBM, Houston, TX (United States); Su, C. B. [IBM, Dallas, TX (United States); Black, J. L. [Landmark Graphics, Denver, CO (United States); Jack, I. G. [BP Exploration, London (United Kingdom)

    1996-10-28

    The significance of 3D seismic data in the petroleum industry during the past decade cannot be overstated. Having started as a technology too expensive to be utilized except by major oil companies, 3D technology is now routinely used by independent operators in the US and Canada. As with all emerging technologies, documentation of successes has been limited. There are some successes, however, that have been summarized in the literature in the recent past. Key technological developments contributing to this success have been major advances in RISC workstation technology, 3D depth imaging, and parallel computing. This article presents the basic concepts of parallel seismic computing, showing how it impacts both 3D depth imaging and more-conventional 3D seismic processing.

  1. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year that visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  2. Microscopic Image 3D Reconstruction Based on Laplace Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡致杰; 何晓昀

    2015-01-01

    Observation and analysis of the three-dimensional surface morphology of microscopic objects is increasingly important in fields such as industry, medicine, art and computing. Aiming at fast, inexpensive and non-contact reconstruction of the 3D surface shape of microscopic objects, this paper proposes a method based on an improved Laplace operator for focus evaluation, height measurement and 3D reconstruction. By computing the improved Laplace value at each pixel, the method enables users to quickly reconstruct the 3D morphology of a microscopic object, extending morphological observation and analysis of microscopic objects from 2D to 3D. Extensive experimental results and user feedback further demonstrate the effectiveness and practical value of the method.
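
    A hedged sketch of the focus measure described in this record: a modified Laplacian evaluated per pixel over a focal stack, with the height taken from the best-focused frame (the windowed summation and other refinements of the authors' method are omitted, and all names here are illustrative):

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def depth_from_focus(stack, z_positions):
    """stack: (K, H, W) focal stack; returns per-pixel height as the z of best focus."""
    focus = np.stack([modified_laplacian(frame) for frame in stack])
    return np.asarray(z_positions)[np.argmax(focus, axis=0)]
```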

  3. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicles (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically from over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining local match region in image- or object- space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In case of image-based matching results, we observed some blanks in 3D point clouds. In case of object space-based matching results, we observed more blunders than image-based matching ones and noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
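
    For the optimal-pair selection step described above, one hedged way to build a maximum spanning tree over images weighted by shared tiepoint counts is to negate the weights and reuse SciPy's minimum spanning tree; the weighting scheme here is an assumption, not the authors' exact criterion:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# tiepoints[i, j] = number of tiepoints shared by image i and image j (symmetric)
tiepoints = np.array([[0, 120,  10,   0],
                      [120, 0,  80,  15],
                      [10, 80,   0,  95],
                      [0,  15,  95,   0]], dtype=float)

# A maximum spanning tree is the minimum spanning tree of the negated weights.
mst = minimum_spanning_tree(-tiepoints).toarray()
pairs = [(i, j) for i, j in zip(*np.nonzero(mst))]
print(pairs)   # image pairs selected for dense matching
```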

  4. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    DEFF Research Database (Denmark)

    Harder, Stine

    aid. We verify the directional filters optimized from simulated HRTFs based on a listener-specific head model against two sets of optimal filters. The first set of optimal filters is calculated from HRTFs measured on a 3D printed version of the head model. The second set of optimal filters...... individuals who deviate from an average of the population could benefit from having individualized filters. We developed a pipeline for 3D printing of full-size human heads. The 3D printed head facilitated the second verification step, which revealed a 0.3 dB reduction from optimal to simulated directional...... filters. This indicates that the simulations are more similar to measurements on the 3D printed head than measurements on the human subject. We suggest that the larger difference between simulation and human measurements could arise due to small geometrical errors in the head model or due to differences...

  5. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes

  6. Image-based 3D canopy reconstruction to determine potential productivity in complex multi-species crop systems.

    Science.gov (United States)

    Burgess, Alexandra J; Retkute, Renata; Pound, Michael P; Mayes, Sean; Murchie, Erik H

    2017-03-01

    Intercropping systems contain two or more species simultaneously in close proximity. Due to the contrasting features of the component crops, quantification of the light environment and photosynthetic productivity is extremely difficult, yet it is an essential component of assessing productivity. Here, a low-tech but high-resolution method is presented that can be applied to single- and multi-species cropping systems to facilitate characterization of the light environment. Different row layouts of an intercrop consisting of Bambara groundnut (Vigna subterranea) and proso millet (Panicum miliaceum) have been used as an example, and the new opportunities presented by this approach have been analysed. Three-dimensional plant reconstruction, based on stereo cameras, combined with ray tracing was implemented to explore the light environment within the Bambara groundnut-proso millet intercropping system and the associated monocrops. Gas exchange data were used to predict the total carbon gain of each component crop. The shading influence of the tall proso millet on the shorter Bambara groundnut results in a reduction in total canopy light interception and carbon gain. However, the increased leaf area index (LAI) of proso millet, its higher photosynthetic potential due to the C4 pathway and the sub-optimal photosynthetic acclimation of Bambara groundnut to shade mean that increasing the number of rows of millet will lead to greater light interception and carbon gain per unit ground area, despite Bambara groundnut intercepting more light per unit leaf area. Three-dimensional reconstruction combined with ray tracing provides a novel, accurate method of exploring the light environment within an intercrop that does not require difficult measurements of light interception or data-intensive manual reconstruction, especially for such systems with inherently high spatial variability. It provides new opportunities for calculating potential productivity within multi-species cropping systems

  7. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  8. Autostereoscopic 3D projection display based on two lenticular sheets

    Institute of Scientific and Technical Information of China (English)

    Lin Qi; Qionghua Wang; Jiangyong Luo; Aihong Wang; Dong Liang

    2012-01-01

    We propose an autostereoscopic three-dimensional (3D) projection display. The display consists of four projectors, a projection screen, and two lenticular sheets. The operation principle and calculation equations are described in detail and the parallax images are corrected by means of homography. A 50-inch autostereoscopic 3D projection display prototype is developed. The normalized luminance distributions of viewing zones from the simulation and the measurement are given. Results agree well with the designed values. The proposed prototype presents full-resolution 3D images similar to the conventional prototype based on two parallax barriers. Moreover, the proposed prototype shows considerably higher brightness and efficiency of light utilization.

  9. Monopulse radar 3-D imaging and application in terminal guidance radar

    Science.gov (United States)

    Xu, Hui; Qin, Guodong; Zhang, Lina

    2007-11-01

    Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement and 3-D imaging processing to obtain a 3-D image that reflects the real size of a target; any two of the three measurement parameters, namely the azimuth difference beam, the elevation difference beam and the radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced, the effects of target attitude changes (including yaw, pitch, roll and movement of the target itself) on 3-D imaging and 3-D motion compensation based on the chirp rate μ and the Doppler frequency f_d are analyzed, and the application of monopulse radar 3-D imaging to terminal guidance radars is forecast. The computer simulation results show that monopulse radar 3-D imaging has apparent advantages in distinguishing a target from overside interference and in precise strikes on the vital parts of a target, and is of great importance for terminal guidance radars.

  10. SU-E-T-300: Dosimetric Comparision of 4D Radiation Therapy and 3D Radiation Therapy for the Liver Tumor Based On 4D Medical Image

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C; Yin, Y [Shandong Tumor Hospital, Jinan, Shandong Provice (China)

    2015-06-15

    Purpose: The purpose of this work was to determine the dosimetric benefit to normal tissues of tracking the liver tumor dose in four-dimensional radiation therapy (4DRT) on the ten phases of four-dimensional computed tomography (4DCT) images. Methods: For ten liver cancer patients, target-tracking plans for each phase with the beam aperture were converted to a cumulative plan and compared to the 3D plan with a merged target volume based on the 4DCT images in the radiation treatment planning system (TPS). The change in normal tissue dose was evaluated using the parameters V5, V10, V15, V20, V25, V30, V35 and V40 (volumes receiving 5, 10, 15, 20, 25, 30, 35 and 40 Gy, respectively) in the dose-volume histogram for the liver; the mean dose for the liver, left kidney and right kidney; and the maximum dose for the bowel, duodenum, esophagus, stomach and heart. Results: There was a significant difference between the 4D PTV (average 115.71 cm3) and the ITV (169.86 cm3). When the planning objective is 95% of the PTV volume covered by the prescription dose, the mean doses for the liver, left kidney and right kidney show average decreases of 23.13%, 49.51%, and 54.38%, respectively. The maximum doses for the bowel, duodenum, esophagus, stomach and heart show average decreases of 16.77%, 28.07%, 24.28%, 4.89%, and 4.45%, respectively. Compared to 3D RT, the irradiated liver volumes V5, V10, V15, V20, V25, V30, V35 and V40 in the 4D plans show a significant decrease (P≤0.05). Conclusion: The 4D planning method creates plans that permit better sparing of normal structures than the commonly used ITV method, while delivering the same dosimetric effect to the target.
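
    The V5-V40 figures quoted above are dose-volume-histogram metrics; a hedged sketch of how such a Vx value can be computed from a voxel dose array (illustrative only, not the treatment planning system's computation):

```python
import numpy as np

def v_x(dose_gy, threshold_gy):
    """Percentage of the structure volume receiving at least `threshold_gy` Gy."""
    dose_gy = np.asarray(dose_gy, dtype=float)
    return 100.0 * np.count_nonzero(dose_gy >= threshold_gy) / dose_gy.size

liver_dose = np.random.default_rng(0).uniform(0, 45, size=10000)  # toy dose samples
print({f"V{x}": round(v_x(liver_dose, x), 1) for x in (5, 10, 15, 20, 25, 30, 35, 40)})
```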

  11. A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement

    Science.gov (United States)

    Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.

    2015-03-01

    The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit level, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low-light environments. It has been fabricated in a standard 0.18 μm CMOS process, i.e. without high-voltage or low-noise features. In these circumstances, we are facing a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) a time-gated SPAD front-end with a fast active-quenching/recharge circuit featuring tunable dead time, ii) a reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage-controlled ring oscillator with fast start-up, iv) a global calibration scheme against temperature and process variation. Measurement results for the individual SPAD-TDC ensemble jitter, array uniformity and time resolution programmability are also provided.
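
    For context, a hedged back-of-the-envelope conversion from the quoted 145 ps timing accuracy to the corresponding round-trip range uncertainty in direct time-of-flight imaging:

```python
C = 299_792_458.0           # speed of light, m/s

def tof_range(t_seconds):
    """Round-trip time of flight to range: light travels out and back."""
    return C * t_seconds / 2.0

print(tof_range(145e-12))   # ~0.022 m, i.e. roughly 2 cm of range uncertainty
```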

  12. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    Science.gov (United States)

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision.

  13. Multimodal-3D imaging based on μMRI and μCT techniques bridges the gap with histology in visualization of the bone regeneration process.

    Science.gov (United States)

    Sinibaldi, R; Conti, A; Sinjari, B; Spadone, S; Pecci, R; Palombo, M; Komlev, V S; Ortore, M G; Tromba, G; Capuani, S; De Luca, F; Caputi, S; Traini, T; Della Penna, S

    2017-06-07

    Bone repair/regeneration is usually investigated through x-ray computed microtomography (μCT) supported by histology of extracted samples, to analyze biomaterial structure and new bone formation processes. Magnetic Resonance Imaging (μMRI) shows a richer tissue contrast than μCT, albeit at lower resolution, and could be combined with μCT in the perspective of conducting non-destructive 3D investigations of bone. A pipeline designed to combine μMRI and μCT images of bone samples is here described and applied to samples of human jawbone cores extracted following bone grafting. We optimized the co-registration procedure between μCT and μMRI images to avoid bias due to the different resolutions and contrasts. Furthermore, we used an Adaptive Multivariate Clustering, grouping homologous voxels in the co-registered images, to visualize different tissue types within a fused 3D metastructure. The tissue grouping matched the 2D histology applied to only one slice, thus extending the histology labelling in 3D. Specifically, in all samples we could separate and map two types of regenerated bone, calcified tissue, soft tissues and/or fat and marrow space. Remarkably, μMRI and μCT alone were not able to separate the two types of regenerated bone. Finally, we computed volumes of each tissue in the 3D metastructures, which might be exploited by quantitative simulation. The 3D metastructure obtained through our pipeline represents a first step to bridge the gap between the quality of information obtained from 2D optical microscopy and the 3D mapping of bone tissue heterogeneity, and could allow researchers and clinicians to non-destructively characterize and follow up bone regeneration.

  14. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  15. 3D tomographic breast imaging in-vivo using a handheld optical imager

    Science.gov (United States)

    Erickson, Sarah J.; Martinez, Sergio; Gonzalez, Jean; Roman, Manuela; Nunez, Annie; Godavarty, Anuradha

    2011-02-01

    Hand-held optical imagers are currently being developed for clinical imaging of breast tissue. However, the hand-held optical devices developed to date are not able to coregister the image to the tissue geometry for 3D tomography. We have developed a hand-held optical imager which has demonstrated automated coregistered imaging and 3D tomography in phantoms, and validated coregistered imaging in normal human subjects. Herein, automated coregistered imaging is performed in a normal human subject with a 0.45 cm3 spherical target filled with 1 μM indocyanine green (fluorescent contrast agent) placed superficially underneath the flap of the breast tissue. The coregistered image data are used in an approximate extended Kalman filter (AEKF) based reconstruction algorithm to recover the 3D location of the target within the breast tissue geometry. The results demonstrate the feasibility of performing 3D tomographic imaging and recovering a fluorescent target in the breast tissue of a human subject for the first time using a hand-held optical imager. The significance of this work is toward clinical imaging of breast tissue for cancer diagnostics and therapy monitoring.

  16. Scalable 3D GIS environment managed by 3D-XML-based modeling

    Science.gov (United States)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper outlines that the storage and exchange of geospatial data between databases and different front ends, such as 3D models, GIS or internet browsers, require a standardized format which is capable of representing instances of 3D GIS models, minimizing loss of information during data transfer and reducing interface development efforts. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. 3D data management based on XML is a solution meeting the stated requirements, and can provide an efficient means of opening a new standard way to create arbitrary data structures and share them over the Internet. To manage reality-based 3D models, this paper uses 3DXML produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure and graphical display properties. It can be read, written and enriched by standard tools, and allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.

  17. 3D face recognition by applying filters based on geometry images

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 达飞鹏

    2012-01-01

    Aiming at 3D face recognition under expression variation, a feature extraction method based on filtering geometry images is proposed, together with a design procedure for the optimal convolution filter based on the distribution function of the filtered features. First, the facial mesh is mapped into a square planar domain by mesh parameterization, and a 2D geometry image carrying the 3D shape is obtained by linear interpolation. Then, all geometry images in the training set are segmented into patches, and a differential evolution algorithm is applied to each group of patches to design the optimal convolution filters. Finally, similarity scores between local features are computed by applying these filters to the corresponding patches, and the final decision is made by fusing these scores. Experimental results on the FRGC (face recognition grand challenge) v2 (version 2) database show that both accuracy and robustness are improved by filtering the geometry image.

  18. WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Cai, W; Hurwitz, M; Rottmann, J; Myronakis, M; Cifter, F; Berbeco, R; Lewis, J [Brigham and Women’s Hospital, Boston, MA (United States); Williams, C [Harvard Medical School, Cambridge, MA (United States); Mishra, P [Varian Medical Systems, Palo Alto, CA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States)

    2015-06-15

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment and used to generate time varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase, and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by optimizing the resulting PCA coefficients iteratively through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13 respectively in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78 respectively in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential
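
    A hedged sketch of the PCA step described here, stacking the phase-to-reference displacement vector fields and extracting principal components with an SVD; the deformable registration and the projection-matching optimization of the coefficients are not shown, and all names are illustrative:

```python
import numpy as np

def build_motion_model(dvfs, n_components=3):
    """dvfs: (n_phases, nx, ny, nz, 3) displacement fields to a reference phase."""
    n_phases = dvfs.shape[0]
    X = dvfs.reshape(n_phases, -1)            # one flattened DVF per phase
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]            # principal motion modes
    coeffs = (X - mean) @ components.T        # per-phase PCA coefficients
    return mean, components, coeffs

def synthesize_dvf(mean, components, w):
    """New displacement field for coefficient vector w (optimized at treatment time)."""
    return mean + w @ components

dvfs = np.random.default_rng(0).normal(size=(10, 16, 16, 16, 3))   # toy DVFs
mean, components, coeffs = build_motion_model(dvfs)
print(components.shape, coeffs.shape)   # (3, 12288) (10, 3)
```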

  19. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    Science.gov (United States)

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be assigned to a single hexagonal lens, so generating the entire EIA requires very large computation. The proposed method reduces the processing time for the EIAs for a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the processing speed enough to generate a real-time interactive integral-imaging 3D display for a hexagonal lens array. In our experiments, we implemented the EIAs for the hexagonal lens array in real time and obtained good processing times for large volume data over multiple lens array configurations.

  20. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01

    • ... Circuits and Systems, vol. 1 (2), 2007, pp. 116-127.
    • O. Dandekar, C. Castro-Pareja, and R. Shekhar, "FPGA-based real-time 3D image ... How low can we go?," presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505.
    • C. R. Castro-Pareja, O. Dandekar, and R. ... Venugopal, C. R. Castro-Pareja, and O. Dandekar, "An FPGA-based 3D image processor with median and convolution filters for real-time applications," in ...

  1. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV Digital Images

    Directory of Open Access Journals (Sweden)

    Zarnowski Aleksander

    2015-12-01

    Full Text Available Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV. Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition.

  2. Diffusible iodine-based contrast-enhanced computed tomography (diceCT): an emerging tool for rapid, high-resolution, 3-D imaging of metazoan soft tissues.

    Science.gov (United States)

    Gignac, Paul M; Kley, Nathan J; Clarke, Julia A; Colbert, Matthew W; Morhardt, Ashley C; Cerio, Donald; Cost, Ian N; Cox, Philip G; Daza, Juan D; Early, Catherine M; Echols, M Scott; Henkelman, R Mark; Herdina, A Nele; Holliday, Casey M; Li, Zhiheng; Mahlow, Kristin; Merchant, Samer; Müller, Johannes; Orsbon, Courtney P; Paluh, Daniel J; Thies, Monte L; Tsai, Henry P; Witmer, Lawrence M

    2016-06-01

    Morphologists have historically had to rely on destructive procedures to visualize the three-dimensional (3-D) anatomy of animals. More recently, however, non-destructive techniques have come to the forefront. These include X-ray computed tomography (CT), which has been used most commonly to examine the mineralized, hard-tissue anatomy of living and fossil metazoans. One relatively new and potentially transformative aspect of current CT-based research is the use of chemical agents to render visible, and differentiate between, soft-tissue structures in X-ray images. Specifically, iodine has emerged as one of the most widely used of these contrast agents among animal morphologists due to its ease of handling, cost effectiveness, and differential affinities for major types of soft tissues. The rapid adoption of iodine-based contrast agents has resulted in a proliferation of distinct specimen preparations and scanning parameter choices, as well as an increasing variety of imaging hardware and software preferences. Here we provide a critical review of the recent contributions to iodine-based, contrast-enhanced CT research to enable researchers just beginning to employ contrast enhancement to make sense of this complex new landscape of methodologies. We provide a detailed summary of recent case studies, assess factors that govern success at each step of the specimen storage, preparation, and imaging processes, and make recommendations for standardizing both techniques and reporting practices. Finally, we discuss potential cutting-edge applications of diffusible iodine-based contrast-enhanced computed tomography (diceCT) and the issues that must still be overcome to facilitate the broader adoption of diceCT going forward.

  3. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  4. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. But insight into the 3D geometry of the microstructure of materials, and measurement of its characteristics, are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  5. Visualization and Analysis of 3D Microscopic Images

    Science.gov (United States)

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  6. 3D Image Reconstruction: Determination of Pattern Orientation

    Energy Technology Data Exchange (ETDEWEB)

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  7. Volumetric ultrasound panorama based on 3D SIFT.

    Science.gov (United States)

    Ni, Dong; Qul, Yingge; Yang, Xuan; Chui, Yim Pan; Wong, Tien-Tsin; Ho, Simon S M; Heng, Pheng Ann

    2008-01-01

    The reconstruction of three-dimensional (3D) ultrasound panorama from multiple ultrasound volumes can provide a wide field of view for better clinical diagnosis. Registration of ultrasound volumes has been a key issue for the success of this panoramic process. In this paper, we propose a method to register and stitch ultrasound volumes, which are scanned by dedicated ultrasound probe, based on an improved 3D Scale Invariant Feature Transform (SIFT) algorithm. We propose methods to exclude artifacts from ultrasound images in order to improve the overall performance in 3D feature point extraction and matching. Our method has been validated on both phantom and clinical data sets of human liver. Experimental results show the effectiveness and stability of our approach, and the precision of our method is comparable to that of the position tracker based registration.

  8. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for building three-dimensional city models. Drawing on photogrammetry and computer-vision theory, and using a digital surface model of city buildings obtained in a prior processing step, the collinearity equations are used to compute the geometric projection between object space and image space, yielding the three-dimensional structure and its texture information; an optimization algorithm then selects the best texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Results on real image textures show that the method reconstructs textures for 3D city models with a high degree of automation, vivid visual effect and low cost, and provides an effective means for rapid, wide-area reconstruction of real textures for 3D city models.
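
    A hedged sketch of the collinearity-equation projection the abstract relies on, mapping an object-space point into image coordinates given the exterior orientation; the symbols follow the standard photogrammetric form and are not taken from the paper itself:

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    """Project object point X into image coordinates.

    X  : (3,) object-space point
    X0 : (3,) perspective centre (camera position)
    R  : (3, 3) rotation from object space to camera space
    f  : focal length (same units as the returned image coordinates)
    """
    dx, dy, dz = R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))
    return -f * dx / dz, -f * dy / dz

R = np.eye(3)                                       # nadir-looking camera, for illustration
print(collinearity_project([10, 5, 0], [0, 0, 100], R, f=0.05))
```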

  9. Surface Explorations: 3D Moving Images as Cartographies of Time.

    NARCIS (Netherlands)

    Verhoeff, N.

    2016-01-01

    Moving images of travel and exploration have a long history. In this essay I will examine how the trope of navigation in 3D moving images can work towards an intimate and haptic encounter with other times and other places – elsewhen and elsewhere. The particular navigational construction of space in

  10. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide a means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data so they can be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of that of the control and α1B-knockout aortas, indicating a reduction in α1D-receptors. We have shown, for the first time, that it is possible to graphically display the binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.
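
    A hedged sketch of the kind of 3-D histogram analysis described: pooling voxel intensities from an image volume and comparing the resulting histograms between conditions (the summary statistic used here is an illustrative choice, not the authors'):

```python
import numpy as np

def volume_histogram(volume, bins=256, value_range=(0, 4095)):
    """Histogram of all voxel intensities in a 3-D image volume."""
    counts, edges = np.histogram(np.ravel(volume), bins=bins, range=value_range)
    return counts, edges

rng = np.random.default_rng(1)
control = rng.normal(800, 150, size=(64, 64, 32))    # toy "control" volume
treated = rng.normal(600, 150, size=(64, 64, 32))    # toy "antagonist" volume
shift = np.median(treated) - np.median(control)      # leftward shift = less binding
print(round(float(shift), 1))
```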

  11. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  12. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies have generally been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain phantoms with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
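
    A hedged sketch of a threshold-based MTV delineation of the general kind described, thresholding at a fixed fraction of the background-corrected lesion maximum; the calibrated threshold and the k-means background estimation of the actual method are not reproduced, and all parameter values are placeholders:

```python
import numpy as np

def mtv_mask(pet, seed_box, frac=0.42, background=0.0, voxel_ml=0.01):
    """Binary MTV mask and volume (ml) from a PET uptake volume.

    seed_box : tuple of slices selecting a region around the lesion
    frac     : fraction of the background-corrected maximum used as threshold
    """
    roi = pet[seed_box]
    threshold = background + frac * (roi.max() - background)
    mask = np.zeros_like(pet, dtype=bool)
    mask[seed_box] = roi >= threshold
    return mask, mask.sum() * voxel_ml

pet = np.random.default_rng(2).gamma(2.0, 0.5, size=(80, 80, 40))
pet[30:40, 30:40, 15:25] += 8.0                       # synthetic hot lesion
mask, volume_ml = mtv_mask(pet, (slice(25, 45), slice(25, 45), slice(10, 30)))
print(volume_ml)
```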

  13. Military efforts in nanosensors, 3D printing, and imaging detection

    Science.gov (United States)

    Edwards, Eugene; Booth, Janice C.; Roberts, J. Keith; Brantley, Christina L.; Crutcher, Sihon H.; Whitley, Michael; Kranz, Michael; Seif, Mohamed; Ruffin, Paul

    2017-04-01

    A team of researchers and support organizations, affiliated with the Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), has initiated multidiscipline efforts to develop nano-based structures and components for advanced weaponry, aviation, and autonomous air/ground systems applications. The main objective of this research is to exploit unique phenomena for the development of novel technology to enhance warfighter capabilities and produce precision weaponry. The key technology areas that the authors are exploring include nano-based sensors, analysis of 3D printing constituents, and nano-based components for imaging detection. By integrating nano-based devices, structures, and materials into weaponry, the Army can revolutionize existing (and future) weaponry systems by significantly reducing the size, weight, and cost. The major research thrust areas include the development of carbon nanotube sensors to detect rocket motor off-gassing; the application of current methodologies to assess materials used for 3D printing; and the assessment of components to improve imaging seekers. The status of current activities, associated with these key areas and their implementation into AMRDEC's research, is outlined in this paper. Section #2 outlines output data, graphs, and overall evaluations of carbon nanotube sensors placed on a 16 element chip and exposed to various environmental conditions. Section #3 summarizes the experimental results of testing various materials and resulting components that are supplementary to additive manufacturing/fused deposition modeling (FDM). Section #4 recapitulates a preliminary assessment of the optical and electromechanical components of seekers in an effort to propose components and materials that can work more effectively.

  14. A 3D high resolution ex vivo white matter atlas of the common squirrel monkey (saimiri sciureus) based on diffusion tensor imaging

    Science.gov (United States)

    Gao, Yurui; Parvathaneni, Prasanna; Schilling, Kurt G.; Wang, Feng; Stepniewska, Iwona; Xu, Zhoubing; Choe, Ann S.; Ding, Zhaohua; Gore, John C.; Chen, Li min; Landman, Bennett A.; Anderson, Adam W.

    2016-03-01

    Modern magnetic resonance imaging (MRI) brain atlases are high quality 3-D volumes with specific structures labeled in the volume. Atlases are essential in providing a common space for interpretation of results across studies, for anatomical education, and providing quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, macaque, and several non-primate species (e.g., rat). One notable gap in the literature is the common squirrel monkey - for which the primary published atlases date from the 1960s. The common squirrel monkey has been used extensively as a surrogate for humans in biomedical studies, given its anatomical neuro-system similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels have been previously constructed. This study adds white matter tracts to the atlas. The new atlas includes 49 white matter (WM) tracts, defined using diffusion tensor imaging (DTI) in three animals and combines these data to define the anatomical locations of these tracts in a standardized coordinate system compatible with previous development. An anatomist reviewed the resulting tracts and the inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80 due to differences in local registration quality and the variation of WM tract position across individuals. However, the combined WM labels from the 3 animals represent the general locations of WM parcels, adding basic connectivity information to the atlas.

  15. A 3D high resolution ex vivo white matter atlas of the common squirrel monkey (Saimiri sciureus) based on diffusion tensor imaging.

    Science.gov (United States)

    Gao, Yurui; Parvathaneni, Prasanna; Schilling, Kurt G; Wang, Feng; Stepniewska, Iwona; Xu, Zhoubing; Choe, Ann S; Ding, Zhaohua; Gore, John C; Chen, Li Min; Landman, Bennett A; Anderson, Adam W

    2016-02-27

    Modern magnetic resonance imaging (MRI) brain atlases are high quality 3-D volumes with specific structures labeled in the volume. Atlases are essential in providing a common space for interpretation of results across studies, for anatomical education, and providing quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, macaque, and several non-primate species (e.g., rat). One notable gap in the literature is the common squirrel monkey - for which the primary published atlases date from the 1960s. The common squirrel monkey has been used extensively as a surrogate for humans in biomedical studies, given its anatomical neuro-system similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels have been previously constructed. This study adds white matter tracts to the atlas. The new atlas includes 49 white matter (WM) tracts, defined using diffusion tensor imaging (DTI) in three animals and combines these data to define the anatomical locations of these tracts in a standardized coordinate system compatible with previous development. An anatomist reviewed the resulting tracts and the inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80 due to differences in local registration quality and the variation of WM tract position across individuals. However, the combined WM labels from the 3 animals represent the general locations of WM parcels, adding basic connectivity information to the atlas.

  16. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with suitable software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing the DSM and mosaicking strips. The principle of 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.
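
    The step of calculating the positions of laser sample points can be illustrated with a simplified direct-georeferencing calculation: the ground coordinates of a laser return follow from the GPS position, the attitude angles and the measured range. The rotation convention, angle values and scan geometry below are assumptions made for illustration, not the system's actual formulation.

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Body-to-local-level rotation from roll/pitch/yaw in radians (Z-Y-X convention, assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Illustrative values: platform position from GPS, attitude from the AMU,
# and the laser range with its pointing direction in the sensor/body frame.
p_gps = np.array([500000.0, 4000000.0, 1200.0])                   # platform position (m)
R = rotation_matrix(np.radians(1.0), np.radians(-2.0), np.radians(45.0))
scan_dir_body = np.array([0.0, np.sin(np.radians(10.0)), -np.cos(np.radians(10.0))])
laser_range = 950.0                                               # measured range (m)

# Ground coordinates of the laser sample point.
ground_point = p_gps + R @ (laser_range * scan_dir_body)
print(ground_point)
```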

  17. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    Science.gov (United States)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
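
    The regional covariance descriptor at the heart of this approach can be sketched very compactly: for every region, the per-pixel feature vectors are stacked and their covariance matrix is computed, so the descriptor captures the inter-correlation of the feature dimensions. The feature set below (five generic channels) is an assumption for illustration, not the exact 2D/3D features used in the paper.

```python
import numpy as np

def region_covariance(features: np.ndarray) -> np.ndarray:
    """Covariance descriptor of one region.

    features: (n_pixels, n_features) array, one feature vector per pixel in the region.
    Returns the (n_features, n_features) covariance matrix used as the region descriptor.
    """
    return np.cov(features, rowvar=False)

# Illustrative region of 500 pixels with 5 feature channels per pixel
# (e.g. luminance, two chroma channels, gradient magnitude, depth).
rng = np.random.default_rng(1)
region_features = rng.random((500, 5))
C = region_covariance(region_features)
print(C.shape)  # (5, 5)
```

    Region dissimilarity is then typically measured with a distance between covariance matrices (for example a log-Euclidean metric), from which a per-region saliency value can be derived.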

  18. Projection-slice theorem based 2D-3D registration

    Science.gov (United States)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
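
    The Projection-Slice Theorem itself is easy to verify numerically: the 1-D Fourier transform of a parallel projection of an image equals the central slice of the image's 2-D Fourier transform taken along the same direction. The sketch below checks this for a projection along the y-axis of a random test image; it illustrates the theorem only, not the registration pipeline of the paper.

```python
import numpy as np

# Random test image.
rng = np.random.default_rng(2)
image = rng.random((128, 128))

# Parallel projection along the y-axis (sum over rows).
projection = image.sum(axis=0)

# 1-D FFT of the projection ...
fft_projection = np.fft.fft(projection)

# ... equals the ky = 0 slice of the 2-D FFT of the image.
fft_image = np.fft.fft2(image)
central_slice = fft_image[0, :]

print(np.allclose(fft_projection, central_slice))  # True
```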

  19. Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2017-02-01

    We have proposed an end-to-end learning approach that trained a deep convolutional neural network (CNN) for automatic CT image segmentation, which accomplished a voxel-wise multiple classification to directly map each voxel on 3D CT images to an anatomical label automatically. The novelties of our proposed method were (1) transforming the anatomical structure segmentation on 3D CT images into a majority voting of the results of 2D semantic image segmentation on a number of 2D slices from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT-scan range (e.g. body, chest, abdomen) and produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing the deep CNNs. A database consisting of 240 3D CT scans and human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions, respectively. The preliminary results showed that the accuracy of the segmentation results improved significantly (the Jaccard index increased by 34% for the pancreas and 8% for the kidney compared with our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
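
    The majority-voting fusion of 2D segmentations from different slice orientations can be sketched as follows, assuming three integer label volumes of identical shape (one per orientation) that have already been resampled back onto the 3D grid; the label count and volume size are illustrative.

```python
import numpy as np

def majority_vote(label_volumes):
    """Voxel-wise majority vote over integer label volumes of identical shape."""
    stacked = np.stack(label_volumes, axis=0)              # (n_views, z, y, x)
    n_labels = int(stacked.max()) + 1
    # Count votes per label at every voxel and keep the most frequent one.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Illustrative: three orientation-wise segmentations (axial, coronal, sagittal)
# with four anatomical labels, already resampled back onto the 3D grid.
rng = np.random.default_rng(3)
axial, coronal, sagittal = (rng.integers(0, 4, size=(32, 64, 64)) for _ in range(3))
fused = majority_vote([axial, coronal, sagittal])
print(fused.shape)  # (32, 64, 64)
```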

  20. GPU-based 3D lower tree wavelet video encoder

    Science.gov (United States)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Drummond, Leroy Anthony; Migallón, Hector

    2013-12-01

    The 3D-DWT is a mathematical tool of increasing importance in applications that require efficient processing of huge amounts of volumetric information. Applications like professional video editing, video surveillance, multi-spectral satellite imaging, HQ video delivery, etc., would rather use 3D-DWT encoders to reconstruct a frame as fast as possible. In this article, we introduce a fast GPU-based encoder which uses the 3D-DWT transform and lower trees. Also, we present an exhaustive analysis of the use of GPU memory. Our proposal shows a good trade-off between R/D, coding delay (as fast as MPEG-2 for high definition) and memory requirements (up to 6 times less memory than x264).

  1. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
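
    The idea of fitting a local polynomial model to the displacement field and differentiating it to obtain strain can be illustrated in a simplified first-order (affine) form: a least-squares fit of displacement against material coordinates gives the displacement gradient, from which the Green-Lagrange strain tensor follows. This is a minimal sketch, not the paper's actual polynomial order or fitting procedure.

```python
import numpy as np

def green_lagrange_strain(coords: np.ndarray, displacements: np.ndarray) -> np.ndarray:
    """Fit u(X) = a + G X by least squares and return E = 0.5 (F^T F - I), with F = I + G.

    coords:        (n_points, 3) material coordinates in a local neighbourhood
    displacements: (n_points, 3) measured displacement vectors (e.g. from DENSE)
    """
    n = coords.shape[0]
    A = np.hstack([np.ones((n, 1)), coords])          # design matrix [1, X, Y, Z]
    coef, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    G = coef[1:, :].T                                  # displacement gradient du_i/dX_j
    F = np.eye(3) + G                                  # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))                 # Green-Lagrange strain

# Illustrative check: a synthetic 10% uniaxial stretch along X.
rng = np.random.default_rng(4)
X = rng.random((200, 3))
u = np.column_stack([0.1 * X[:, 0], np.zeros(200), np.zeros(200)])
print(np.round(green_lagrange_strain(X, u), 3))        # E_xx ~ 0.105, other components ~ 0
```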

  2. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    OpenAIRE

    Seniutinas Gediminas; Balčytis Armandas; Reklaitis Ignas; Chen Feng; Davis Jeffrey; David Christian; Juodkazis Saulius

    2017-01-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of ...

  3. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  4. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during airline baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation based on a height map obtained by projecting the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so the edge can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed with different placement modes proves the validity of the method.
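
    A minimal version of this counting pipeline might look like the sketch below: an edge detector applied to the height map, morphological closing to link the edge chains, hole filling to recover the enclosed regions, and connected-component labelling to count the bags. The thresholds, structuring-element size and minimum region area are assumed parameters, not those of the paper.

```python
import numpy as np
from scipy import ndimage

def count_bags(height_map: np.ndarray, edge_thresh: float = 0.05, min_area: int = 200) -> int:
    """Count bags in a height map projected from a scanned 3D point cloud."""
    # Height drops at bag edges show up as a large gradient magnitude.
    gy, gx = np.gradient(height_map)
    edges = np.hypot(gx, gy) > edge_thresh

    # Link broken edge lines into closed chains with morphological closing.
    closed_edges = ndimage.binary_closing(edges, structure=np.ones((5, 5)))

    # Regions enclosed by the edge chains are the candidate bags.
    interior = ndimage.binary_fill_holes(closed_edges) & ~closed_edges
    labels, _ = ndimage.label(interior)

    # Discard tiny regions caused by noise (minimum area is an assumed parameter).
    sizes = np.bincount(labels.ravel())[1:]   # pixel count per labelled region
    return int(np.sum(sizes > min_area))

# Illustrative synthetic height map: two box-like bags on a flat belt.
belt = np.zeros((200, 300))
belt[40:100, 50:120] = 0.3     # bag 1
belt[120:180, 160:260] = 0.4   # bag 2
print(count_bags(belt))        # 2
```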

  5. Image Registration Based on Multi-View 3D Scanning Data

    Institute of Scientific and Technical Information of China (English)

    刘博文; 童立靖

    2016-01-01

    Three-dimensional laser scanning is a new technology developed in the 1990s. Owing to the limited measurement range of the equipment and the complexity of the measured object's shape, a single scan cannot produce a complete and accurate model, so the measured object must be scanned from different viewpoints. Some high-precision scanners can acquire not only 3D point cloud data but also a scattered texture image. Based on the principle of texture mapping and reverse-engineering thinking, this paper extracts image information from the point cloud and texture data obtained by the scanner through a series of steps, then computes the 2D projection image using the principle of affine transformation and extracts the 2D projection image accurately. Finally, the two extracted 2D projection images are registered using a SURF feature point extraction algorithm.

  6. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions (

  7. 3D mapping from high resolution satellite images

    Science.gov (United States)

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have already been common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the necessary procedures to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery with the use of holography and 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available. These data were two stereo pairs of GeoEye-1 imagery and a high resolution DTM of the island. First, the theoretical bases of holography and 3D printing are described, the two methods are analyzed, and their implementation is explained. In practice, an x-axis parallax holographic map of the island of Antiparos is created, and a full-parallax (x-axis and y-axis) holographic map is created and printed using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3D printing (3DP) method. The results are evaluated for their usefulness and efficiency.

  8. Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonograpy.

    Science.gov (United States)

    Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

    2014-01-01

    Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in an excessive electronic colon cleansing (ECC) since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on the maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm has shown consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected for our new ECC scheme over the previous one.

  9. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structure measurement, topographic surveying, architectural and archaeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here an efficient SIFT method is used for image matching across large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points we use a more general and useful approach, namely bundle adjustment. At the end, two real cases have been considered for reconstruction (an excavation and a tower).
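
    A compact two-view version of this pipeline (feature matching, relative pose recovery and triangulation) can be sketched with OpenCV. The library choice, the use of SIFT, and the camera matrix K are assumptions for illustration; the metric upgrade and the multi-view bundle adjustment discussed above are omitted.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Sparse 3D points from two views of a calibrated camera (illustrative sketch)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test feature matching.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Relative camera motion up to scale, then triangulation of the inlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T            # Euclidean 3D points

# Assumed intrinsics for illustration; in practice K comes from calibration
# or is refined together with the structure by bundle adjustment.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
# img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # illustrative file names
# img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
# points_3d = two_view_reconstruction(img1, img2, K)
```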

  10. 3-D model-based tracking for UAV indoor localization.

    Science.gov (United States)

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  11. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of multi-spectral image and two bands of hyperspectral image to produce fused image with the same spatial resolution as source multi-spectral image and the same spectral resolution as source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volume, the new fusion approach using 3-D wavelet based method is proposed. This approach is composed of four major procedures: Spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration and 3-D inverse wavelet transform. Especially, a novel method, Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in spectral domain by utilizing the property of ratio image. And a new fusion rule, Average and Substitution (A&S) rule, is employed as the fusion rule to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using 3-D wavelet transform can utilize both spatial and spectral characteristics of source images more adequately and produce fused image with higher quality and fewer artifacts than fusion approach using 2-D wavelet transform. It is also revealed that RIBSR method is capable of interpolating the missing data more effectively and correctly, and A&S rule can integrate coefficients of source images in 3-D wavelet domain to preserve both spatial and spectral features of source images more properly.
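
    The wavelet-domain fusion stage can be sketched with PyWavelets: both co-registered data cubes are decomposed with a 3-D DWT, corresponding coefficients are combined, and the inverse transform yields the fused cube. A plain averaging rule is used below as a stand-in for the A&S rule, and the RIBSR spectral resampling is assumed to have been applied beforehand.

```python
import numpy as np
import pywt

def fuse_cubes(cube_a: np.ndarray, cube_b: np.ndarray, wavelet="db2", level=2) -> np.ndarray:
    """Fuse two co-registered 3-D data cubes in the 3-D wavelet domain (average rule)."""
    coeffs_a = pywt.wavedecn(cube_a, wavelet, level=level)
    coeffs_b = pywt.wavedecn(cube_b, wavelet, level=level)

    fused = [0.5 * (coeffs_a[0] + coeffs_b[0])]          # approximation coefficients
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):       # detail coefficient dicts
        fused.append({k: 0.5 * (da[k] + db[k]) for k in da})

    rec = pywt.waverecn(fused, wavelet)
    # waverecn can return a slightly larger array than the input; crop to the original shape.
    return rec[tuple(slice(0, s) for s in cube_a.shape)]

# Illustrative cubes: (bands, rows, cols) after spatial and spectral resampling.
rng = np.random.default_rng(5)
fused = fuse_cubes(rng.random((16, 64, 64)), rng.random((16, 64, 64)))
print(fused.shape)
```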

  12. Multicore-based 3D-DWT video encoder

    Science.gov (United States)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc., where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation, using only the memory needed to store a group of pictures. After applying the multicore optimization strategies to the 3D-DWT, the proposed encoder is able to compress a full high-definition video sequence in real time.

  13. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on the direct light scanning method, the so-called Holo-Table, has been introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  14. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on the direct light scanning method, the so-called Holo-Table, has been introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  15. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    Full Text Available This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) have shown that approximately 60% of the time is spent on solving the computational problem, while the remaining time (40%) is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. A study of how increasing the size of the GPU grid affects computation speed showed the importance of correctly defining the structure of the parallel computing network and the general parallelization mechanism.

  16. Direct 3D Painting with a Metaball-Based Paintbrush

    Institute of Scientific and Technical Information of China (English)

    WAN Huagen; JIN Xiaogang; BAO Hujun

    2000-01-01

    This paper presents a direct 3D painting algorithm for polygonal models in 3D object-space with a metaball-based paintbrush in a virtual environment. The user is allowed to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface through direct 3D manipulation with a 3D flying mouse.

  17. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  18. PCaAnalyser: a 2D-image analysis based module for effective determination of prostate cancer progression in 3D culture.

    Directory of Open Access Journals (Sweden)

    Md Tamjidul Hoque

    Full Text Available Three-dimensional (3D) in vitro cell based assays for Prostate Cancer (PCa) research are rapidly becoming the preferred alternative to conventional 2D monolayer cultures. 3D assays more precisely mimic the microenvironment found in vivo, and thus are ideally suited to evaluate compounds and their suitability for progression in the drug discovery pipeline. To achieve the desired high throughput needed for most screening programs, automated quantification of 3D cultures is required. Towards this end, this paper reports on the development of a prototype analysis module for an automated high-content-analysis (HCA) system, which allows for accurate and fast investigation of in vitro 3D cell culture models for PCa. The Java based program, which we have named PCaAnalyser, uses novel algorithms that allow accurate and rapid quantitation of protein expression in 3D cell culture. As currently configured, the PCaAnalyser can quantify a range of biological parameters including: nuclei-count, nuclei-spheroid membership prediction, various function based classification of peripheral and non-peripheral areas to measure expression of biomarkers and protein constituents known to be associated with PCa progression, as well as defining segregate cellular-objects effectively for a range of signal-to-noise ratios. In addition, the PCaAnalyser architecture is highly flexible, operating as a single independent analysis as well as in batch mode, which is essential for High-Throughput-Screening (HTS). Utilising the PCaAnalyser, accurate and rapid analysis in an automated high throughput manner is provided, and reproducible analysis of the distribution and intensity of well-established markers associated with PCa progression in a range of metastatic PCa cell-lines (DU145 and PC3) in a 3D model is demonstrated.

  19. Integration of real-time 3D image acquisition and multiview 3D display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  20. The Design and Implementation of 3D Medical Image Reconstruction System Based on VTK and ITK

    Institute of Scientific and Technical Information of China (English)

    刘鹰; 韩利凯

    2011-01-01

    3D image reconstruction is currently an attractive field in digital image processing, especially in its application to medical imaging. VascuView3D is a 3D medical image reconstruction system based on VTK and ITK, which can be used to build 3D images from 2D image slice files produced by CT and MRI devices; its design and implementation are introduced. The system implements 3D views by volume rendering (VR), surface rendering (SR) and multi-planar rendering (MPR), as well as CLUT-based coloring of 3D grayscale images.
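
    A minimal VTK (Python) sketch of the volume-rendering path with CLUT-style coloring of a grayscale volume is shown below. It is an illustration only, not VascuView3D's actual implementation: the DICOM directory path and the transfer-function breakpoints are assumed values, and the surface-rendering, MPR and ITK-based processing parts of the system are not shown.

```python
import vtk

# Read a CT/MRI slice series (illustrative path).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("/path/to/dicom/series")

# CLUT-style coloring: map scalar values to colors and opacities (assumed breakpoints).
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)   # air -> black
color.AddRGBPoint(300, 1.0, 0.8, 0.6)     # soft tissue -> skin tone
color.AddRGBPoint(1000, 1.0, 1.0, 1.0)    # bone -> white
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)
opacity.AddPoint(300, 0.2)
opacity.AddPoint(1000, 0.9)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()

# GPU ray-cast volume rendering of the slice stack.
mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())
volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```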

  1. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Science.gov (United States)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  2. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    Science.gov (United States)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, its applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct the submarine pipeline and its surrounding submarine terrain in the computer using the Horde3D graphics rendering engine, on the foundation of the "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from the monitoring of the pipeline.

  3. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Science.gov (United States)

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  4. Air-touch interaction system for integral imaging 3D display

    Science.gov (United States)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for the tabletop type integral imaging 3D display. This system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, we used multi-layer B-spline surface approximation to detect the fingertip and gestures at heights of less than 10 cm above the screen from the input hand image. The proposed system can be used as an effective human-computer interaction method for the tabletop type 3D display.

  5. Neural Network Based 3D Surface Reconstruction

    CERN Document Server

    Joseph, Vincy

    2009-01-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  6. 3D plant phenotyping in sunflower using architecture-based organ segmentation from 3D point clouds

    OpenAIRE

    Gélard, William; Burger, Philippe; Casadebaig, Pierre; Langlade, Nicolas; Debaeke, Philippe; Devy, Michel; Herbulot, Ariane

    2016-01-01

    International audience; This paper presents a 3D phenotyping method applied to sunflower, allowing to compute the leaf area of an isolated plant. This is a preliminary step towards the automated monitoring of leaf area and plant growth through the plant life cycle. First, a model-based segmentation method is applied to 3D data derived from RGB images acquired on sunflower plants grown in pots. The RGB image acquisitions are made all around the isolated plant with a single hand-held standard c...

  7. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Science.gov (United States)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. This results in a frame rate of 18 Hz for both techniques. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM performance, Explososcan and synthetic aperture were found to perform similarly. At 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels by a factor of four and still, generally, improve the imaging quality.

  8. Refraction Correction in 3D Transcranial Ultrasound Imaging

    Science.gov (United States)

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
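
    The core operation of the correction, applying Snell's law in 3D at a planar interface, reduces to a short vector calculation. In the sketch below the sound speeds are assumed illustrative values for soft tissue and skull, and the two-layer planar model is reduced to a single refraction event.

```python
import numpy as np

def refract(direction: np.ndarray, normal: np.ndarray, c1: float, c2: float):
    """Refract a propagation direction at a planar interface using Snell's law in 3D.

    direction: incident unit vector; normal: unit interface normal pointing back into
    the incident medium; c1, c2: sound speeds of the incident and transmitting media.
    Returns the refracted unit vector, or None for total internal reflection.
    """
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    mu = c2 / c1                         # sin(theta_t) / sin(theta_i) for acoustic waves
    cos_i = -np.dot(n, d)                # cosine of the incidence angle
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # beyond the critical angle
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * d + (mu * cos_i - cos_t) * n

# Illustrative: a ray steered about 17 degrees off-axis entering skull bone from soft
# tissue (assumed sound speeds of 1540 m/s and 2800 m/s).
d_in = np.array([0.3, 0.0, -1.0])
d_in /= np.linalg.norm(d_in)
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(d_in, surface_normal, c1=1540.0, c2=2800.0))
```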

  9. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.

  10. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very

  11. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  12. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  13. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    Science.gov (United States)

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation, were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  14. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nasehi Tehrani, J; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Guo, X [University of Texas at Dallas, Richardson, TX (United States); Yang, Y [The University of New Mexico, New Mexico, NM (United States)

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift for real-time 3D markerless registration of lung motion during radiotherapy. Method: The 4DCT image datasets from Dir-lab (www.dir-lab.com) were used for creating a 3D boundary element model of the lungs. In the first step, the 3D surface of the lungs in respiration phases T0 and T50 was segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the mesh of the lungs should also have the features and degrees of freedom of the lung structure. This means that vertices close to each other tend to move coherently. In the next step, we implemented a probabilistic non-rigid registration method called coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method has been applied to images of 10 patients in the Dir-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This method is a reliable method for calculating the displacement vector and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the distributed set of vertices of the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating the displacement vector and analyzing possible physiological and anatomical changes during treatment.
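
    For readers who want to experiment with coherent point drift on surface vertices, a hedged sketch using the third-party pycpd package is shown below. The DeformableRegistration interface and its return values are assumptions about that library, and the random vertex arrays merely stand in for the segmented lung surfaces at phases T0 and T50.

```python
import numpy as np
from pycpd import DeformableRegistration  # assumed third-party CPD implementation

# Illustrative vertex sets: lung surface mesh vertices at expiration phases T0 and T50.
rng = np.random.default_rng(6)
vertices_t0 = rng.random((500, 3))
vertices_t50 = vertices_t0 + 0.02 * rng.standard_normal((500, 3))  # small synthetic deformation

# Register T0 vertices (moving) onto T50 vertices (fixed); motion coherence is imposed
# by the Gaussian-kernel regularization inside the deformable CPD model.
reg = DeformableRegistration(X=vertices_t50, Y=vertices_t0)
warped_t0, params = reg.register()

# Per-vertex displacement vectors between the two phases.
displacement = warped_t0 - vertices_t0
print(np.abs(displacement).max())
```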

  15. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  16. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  17. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  18. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  19. Virtual reality 3D headset based on DMD light modulators

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  20. 3D image registration using a fast noniterative algorithm.

    Science.gov (United States)

    Zhilkin, P; Alexander, M E

    2000-11-01

    This note describes the implementation of a three-dimensional (3D) registration algorithm, generalizing a previous 2D version [Alexander, Int J Imaging Systems and Technology 1999;10:242-57]. The algorithm solves an integrated form of the linearized image matching equation over a set of 3D rectangular sub-volumes ('patches') in the image domain. This integrated form avoids numerical instabilities due to differentiation of a noisy image over a lattice, and in addition renders the algorithm robust to noise. Registration is implemented by first convolving the unregistered images with a set of computationally fast [O(N)] filters, providing four bandpass images for each input image, and integrating the image matching equation over the given patch. Each filter and each patch together provide an independent set of constraints on the displacement field, derived by solving a set of linear regression equations. Furthermore, the filters are implemented at a variety of spatial scales, enabling registration parameters at one scale to be used as an input approximation for deriving refined values of those parameters at a finer scale of resolution. This hierarchical procedure is necessary to avoid false matches. Both downsampled and oversampled (undecimated) filtering is implemented. Although the former is computationally fast, it lacks the translation invariance of the latter. Oversampling is required for accurate interpolation, which is used in intermediate stages of the algorithm to reconstruct the partially registered image from the unregistered image. However, downsampling is useful, and computationally efficient, for preliminary stages of registration when large mismatches are present. The 3D registration algorithm was implemented using a 12-parameter affine model for the displacement: u(x) = Ax + b. Linear interpolation was used throughout. Accuracy and timing results for registering various multislice images, obtained by scanning a melon and human volunteers in various