WorldWideScience

Sample records for 3d image based

  1. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
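
The following is a minimal sketch, not the LLNL system, of the stereo-vision principle mentioned above: once a rectified CCD camera pair yields a disparity map, depth follows from triangulation. The focal length, baseline and principal point used here are illustrative assumptions.

```python
# Back-project a dense disparity map (pixels) into a 3D surface, assuming a
# rectified stereo pair with known focal length f (pixels) and baseline B (m).
import numpy as np

def disparity_to_surface(disparity, f=1200.0, baseline=0.1, cx=320.0, cy=240.0):
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                        # zero disparity = no match
    Z = np.full_like(disparity, np.nan, dtype=float)
    Z[valid] = f * baseline / disparity[valid]   # depth from triangulation
    X = (u - cx) * Z / f                         # lateral coordinates
    Y = (v - cy) * Z / f
    return X, Y, Z

# Example with a synthetic, constant disparity map
d = np.full((480, 640), 20.0)
X, Y, Z = disparity_to_surface(d)
print(Z[0, 0])   # ~6 m for these illustrative parameters
```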

  2. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

Full Text Available 3D models have been widely used owing to the spread of freely available software. At the same time, enormous numbers of images can be acquired easily, and such images are increasingly used to create 3D models. However, creating 3D models from a huge number of images takes considerable time and effort, so efficiency in 3D measurement is required, without sacrificing accuracy. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, so that image selection can be regarded as a combinatorial optimization problem and the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality images and near-duplicate images are detected and removed. The experiments confirm the significance of the proposed method and indicate its potential for efficient and accurate 3D measurement.

  3. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, PhotoModeler and Agisoft PhotoScan are the main software packages representing these approaches, respectively, each with different methods suited to image-based 3D city modeling. A literature study shows that, to date, no such comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the Civil Engineering Department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also discusses the governing parameters and factors, work experiences, and the strengths and weaknesses of the four image-based techniques, together with comments on what can and cannot be done with each software package. It concludes that every package has some advantages and limitations, and that the choice of software depends on the user's requirements for the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, PhotoModeler gives good results. For large city

  4. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Accurately capturing the three-dimensional (3D) motion of an object is an essential task in computer vision, and also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular image sequences are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence, and solving for the motion parameters. Finally, experimental results are presented in which the motion parameters of objects moving in a straight line with uniform velocity and with uniform acceleration are recovered from real binocular image sequences by the described method.
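
As a hedged illustration of the final step named above (solving for motion parameters from 3D feature correspondences), the sketch below recovers a rigid rotation and translation between two time instants with the SVD-based least-squares (Kabsch) solution; the paper's calibration and matching steps are not reproduced, and the point data are synthetic.

```python
# Recover rigid 3D motion (R, t) from matched 3D feature points at two instants.
import numpy as np

def rigid_motion(P, Q):
    """P, Q: (N, 3) corresponding 3D points; returns R, t such that Q ~ P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Uniform straight-line motion example: pure translation of 3 mm per frame
P = np.random.rand(50, 3)
Q = P + np.array([0.003, 0.0, 0.0])
R, t = rigid_motion(P, Q)
print(np.round(R, 3), np.round(t, 4))   # identity rotation, recovered translation
```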

  5. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

This paper presents a method that uses multiple types of expert knowledge together for 3D medical image segmentation based on rough set theory. The focus of the paper is how to approximate a region of interest (ROI) when multiple types of expert knowledge are available. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, the ROI is refined as the intersection of all the shapes expected from each single type of knowledge. Finally, results of a rough-set-based 3D image segmentation and visualization system are shown.
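
An illustrative sketch of the rough-set idea described above, not the authors' code: given binary ROI masks proposed from several types of expert knowledge, voxels accepted by all form the positive region, voxels rejected by all form the negative region, and the remainder is the boundary region. Mask shapes and sizes below are arbitrary examples.

```python
import numpy as np

def rough_regions(masks):
    """masks: list of boolean 3D arrays of identical shape."""
    stack = np.stack(masks)
    positive = stack.all(axis=0)          # lower approximation: inside the ROI for sure
    negative = ~stack.any(axis=0)         # definitely outside the ROI
    boundary = ~(positive | negative)     # uncertain voxels to be refined further
    return positive, negative, boundary

m1 = np.zeros((4, 4, 4), bool); m1[1:3, 1:3, 1:3] = True
m2 = np.zeros((4, 4, 4), bool); m2[1:4, 1:3, 1:3] = True
pos, neg, bnd = rough_regions([m1, m2])
print(pos.sum(), neg.sum(), bnd.sum())    # 8 positive, 52 negative, 4 boundary voxels
```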

  6. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) A one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image. (2) The output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data, generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, with perceptual quality equivalent to JPEG2000.
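
A minimal sketch of the transform stage described above, using SciPy's 1-D DCT and DST; the paper's arithmetic/minimization coding and binary-search recovery stages are not reproduced, and the quantization factor is an illustrative assumption.

```python
import numpy as np
from scipy.fft import dct, dst, idct, idst

def forward_transform(img, q=16.0):
    rows = dct(img.astype(float), type=2, axis=1, norm='ortho')   # 1-D DCT on each row
    coeffs = dst(rows, type=2, axis=0, norm='ortho')              # 1-D DST on each column
    return np.round(coeffs / q)                                   # uniform quantization (illustrative)

def inverse_transform(qcoeffs, q=16.0):
    rows = idst(qcoeffs * q, type=2, axis=0, norm='ortho')
    return idct(rows, type=2, axis=1, norm='ortho')

img = np.random.rand(64, 64) * 255
rec = inverse_transform(forward_transform(img))
print(np.sqrt(np.mean((img - rec) ** 2)))   # RMSE introduced by quantization
```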

  7. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

For display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical image interpolation under different conditions.
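
A sketch of 1-D parametric cubic convolution interpolation along the slice axis, where a is the sharpness control parameter (a = -0.5 is the classical choice). This only illustrates the kernel; the six variants compared in the paper are not reproduced here.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * (s[m2] ** 3 - 5 * s[m2] ** 2 + 8 * s[m2] - 4)
    return out

def interpolate_slice(volume, z, a=-0.5):
    """Estimate the slice at non-integer position z from the 4 nearest slices."""
    z0 = int(np.floor(z))
    idx = np.clip(np.arange(z0 - 1, z0 + 3), 0, volume.shape[0] - 1)
    w = cubic_kernel(np.arange(z0 - 1, z0 + 3) - z, a)   # weights sum to 1
    return np.tensordot(w, volume[idx], axes=1)

vol = np.random.rand(10, 64, 64)
print(interpolate_slice(vol, 4.5).shape)   # (64, 64) slice halfway between slices 4 and 5
```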

  8. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, taking the laser point cloud data as the basis and a digital orthophoto map as an auxiliary source, and using 3ds Max as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  9. Physically based analysis of deformations in 3D images

    Science.gov (United States)

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

We present a physically based deformable model which can be used to track and analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural lengths of the springs are set equal to zero and are replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the appealing property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides reduced algorithmic complexity, and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.
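
The sketch below illustrates the key property noted above: with zero natural spring lengths and constant equilibrium forces, the dynamics M x'' + C x' + K x = f are linear and decoupled per coordinate, so each of x, y and z can be integrated independently. The stiffness, damping and time-step values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simulate_axis(K, M, C, x0, f_ext, f_eq, dt=1e-3, steps=1000):
    """Semi-implicit Euler integration of one coordinate of all mass nodes."""
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        acc = (f_ext + f_eq - K @ x - C @ v) / M   # linear in x for any deformation amplitude
        v += dt * acc
        x += dt * v
    return x

n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # chain of unit springs
M = np.ones(n)
C = 0.2 * np.eye(n)
f_eq = K @ np.linspace(0.0, 1.0, n)    # equilibrium forces encoding the rest shape
x = simulate_axis(K, M, C, np.zeros(n), np.zeros(n), f_eq)
print(np.round(x, 3))                  # nodes relax toward the encoded rest shape
```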

  10. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
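
As an illustrative sketch (not the authors' code) of the SVD analysis mentioned above, one can decompose a linear imaging operator A (measurements = A @ image) and count the singular vectors that rise above a noise threshold, which bounds the complexity of objects a sparse detector array can reconstruct. The matrix dimensions and noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((15 * 64, 32 * 32))      # hypothetical 15-element system matrix
sigma = np.linalg.svd(A, compute_uv=False)       # singular value spectrum
noise_level = 1e-2 * sigma[0]                    # assumed relative noise floor
print("measurable singular vectors:", int(np.sum(sigma > noise_level)))
```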

  11. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  12. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
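
A minimal sketch of the volume-based and size-based measures quoted above, computed between an automated segmentation and a hand-segmented gold standard; the toy masks are illustrative only.

```python
import numpy as np

def overlap_metrics(auto, gold):
    a, g = auto.astype(bool), gold.astype(bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dice = 2.0 * inter / (a.sum() + g.sum())      # Dice similarity coefficient
    jaccard = inter / union                       # Jaccard coefficient
    tpvf = inter / g.sum()                        # true positive volume fraction
    rvd = (a.sum() - g.sum()) / g.sum()           # relative volume difference
    return dice, jaccard, tpvf, rvd

auto = np.zeros((50, 50, 50), bool); auto[10:40, 10:40, 10:40] = True
gold = np.zeros((50, 50, 50), bool); gold[12:40, 10:40, 10:40] = True
print([round(m, 3) for m in overlap_metrics(auto, gold)])
```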

  13. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  14. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Science.gov (United States)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is the accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets, acquired using a freehand 3D US system. Deformation is provided by using a 3D thin-plate spline (TPS). Our method is fundamentally different from the previous ones in that the acquired scattered US 2D slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over the traditional registration and spatial compounding methods: (i) we only perform one 3D US reconstruction, for the first acquired data set, and therefore save the computation time required to reconstruct subsequent acquired scans, (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction, and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations allowing additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available
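
A hedged sketch of the deformable component described above: a 3-D thin-plate-spline warp driven by control-point displacements, built here with SciPy's RBFInterpolator (kernel 'thin_plate_spline') as a stand-in. The paper's similarity metric, optimizer and adaptive bilinear approximation are not reproduced, and the control points are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
ctrl_src = rng.uniform(0, 100, size=(20, 3))            # control points in the fixed volume
ctrl_dst = ctrl_src + rng.normal(0, 1.0, size=(20, 3))  # their positions after deformation

tps = RBFInterpolator(ctrl_src, ctrl_dst, kernel='thin_plate_spline')

# Map the pixel centres of one acquired 2-D US slice directly into the 3-D volume
slice_pts = np.stack(np.meshgrid(np.arange(0, 100, 10),
                                 np.arange(0, 100, 10),
                                 [50.0], indexing='ij'), axis=-1).reshape(-1, 3)
warped = tps(slice_pts)
print(warped.shape)   # (100, 3) warped coordinates ready for compounding
```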

  15. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Considerable effort has been put into 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field-programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras have the same spatial resolution, which lets us use the depth maps taken by the TOF camera to provide an initial disparity. With the depth map as a constraint on the stereo pairs during matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we configure a multi-core image matching system and perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
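
The sketch below illustrates the idea described above of converting a TOF depth map into an initial disparity and a narrow per-pixel search window for binocular matching; focal length, baseline and tolerance values are illustrative assumptions, and the FPGA implementation is not modelled.

```python
import numpy as np

def disparity_window(depth_m, f_px=800.0, baseline_m=0.06, tol_px=3):
    """Return (d_min, d_max) integer disparity bounds per pixel."""
    d0 = f_px * baseline_m / np.clip(depth_m, 0.1, None)   # expected disparity from TOF depth
    d_min = np.maximum(np.floor(d0) - tol_px, 0).astype(int)
    d_max = (np.ceil(d0) + tol_px).astype(int)
    return d_min, d_max

depth = np.full((240, 320), 1.5)    # TOF depth map in metres
lo, hi = disparity_window(depth)
print(lo[0, 0], hi[0, 0])           # search only ~7 candidates instead of the full range
```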

  16. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses 1D of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments are carried out acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
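
A minimal sketch of wrapped-phase recovery from N equally phase-shifted sinusoidal fringe images, the core step of fringe projection profilometry. The optimum three-fringe-number unwrapping and the system calibration used in the paper are not reproduced; the synthetic data are illustrative.

```python
import numpy as np

def wrapped_phase(images):
    """images: (N, H, W) fringe patterns with phase shifts 2*pi*n/N."""
    n = np.arange(len(images)).reshape(-1, 1, 1)
    shifts = 2 * np.pi * n / len(images)
    num = np.sum(images * np.sin(shifts), axis=0)
    den = np.sum(images * np.cos(shifts), axis=0)
    return -np.arctan2(num, den)          # phase wrapped to (-pi, pi]

# Synthetic 4-step example over a tilted surface
H, W, N = 64, 64, 4
true_phase = np.linspace(0, 6 * np.pi, W)[None, :].repeat(H, axis=0)
imgs = np.stack([128 + 100 * np.cos(true_phase + 2 * np.pi * k / N) for k in range(N)])
phi = wrapped_phase(imgs)
print(np.allclose(np.angle(np.exp(1j * (phi - true_phase))), 0, atol=1e-6))   # True
```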

  17. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

Full Text Available For humans and visually oriented animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields, which enables stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using an optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.
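
A hedged sketch of the optical-flow variant mentioned above: during a sideways camera translation, nearer objects produce larger image motion, so relative depth can be taken as inversely proportional to the horizontal flow magnitude. OpenCV's Farneback dense flow is used here as a generic stand-in for the paper's algorithm, and all parameters are illustrative.

```python
import cv2
import numpy as np

def relative_depth(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    parallax = np.abs(flow[..., 0]) + 1e-6   # horizontal motion parallax per pixel
    return 1.0 / parallax                    # larger flow -> nearer (smaller relative depth)

a = np.zeros((120, 160), np.uint8); a[40:80, 40:80] = 255
b = np.roll(a, 5, axis=1)                    # simulated parallax shift of a near object
print(relative_depth(a, b).shape)
```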

  18. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

Interaction with mobile consumer devices leads to a higher acceptance and affinity of persons to natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, such as the operating room. Here, the many medical disciplines involved lead to a great variety of procedures, and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-based remote interfaces always pose a potential risk and a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system with a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct.

  19. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Three main image-based approaches are generally used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling and close-range-photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution available for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second section, a 3D model of the area is created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model is exported for adding and merging the other pieces of the larger area; scaling and alignment of the 3D model are performed, and after texturing and rendering, a final photo-realistic textured 3D model is created, which can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee; this campus acts as a prototype for a city. Aerial photography is restricted in many countries

  20. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    Science.gov (United States)

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  1. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Directory of Open Access Journals (Sweden)

    Yingzhi Kan

    2016-09-01

Full Text Available In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  2. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  3. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
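
A hedged sketch of the dense-matching step named above, using OpenCV's semi-global block matching (StereoSGBM) on a rectified pair as a stand-in for the paper's SGM stage on UAV imagery. The parameters are illustrative, and the pose estimation, meshing and texturing steps are not shown.

```python
import cv2
import numpy as np

def dense_disparity(left_gray, right_gray, max_disp=128, block=5):
    sgbm = cv2.StereoSGBM_create(minDisparity=0,
                                 numDisparities=max_disp,      # must be divisible by 16
                                 blockSize=block,
                                 P1=8 * block * block,
                                 P2=32 * block * block,
                                 uniquenessRatio=10,
                                 speckleWindowSize=100,
                                 speckleRange=2)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0   # fixed-point to pixels
    disp[disp <= 0] = np.nan                                               # invalid matches
    return disp

left = np.random.randint(0, 255, (480, 640), np.uint8)
right = np.roll(left, -8, axis=1)                # synthetic rectified pair, true disparity ~8 px
print(np.nanmean(dense_disparity(left, right)))
```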

  4. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  5. 3D Modeling of Transformer Substation Based on Mapping and 2D Images

    Directory of Open Access Journals (Sweden)

    Lei Sun

    2016-01-01

Full Text Available A new method for building 3D models of transformer substations based on mapping and 2D images is proposed in this paper. The method segments equipment objects in 2D images using a k-means algorithm that determines the cluster centers dynamically in order to segment different shapes, extracts feature parameters from the segmented objects using the FFT, retrieves similar objects from 3D databases, and then builds the 3D models by computing the mapping data. The proposed method avoids the complex data collection and heavy workload of 3D laser scanning. The example analysis shows that the method can efficiently build coarse 3D models that meet the requirements for hazardous area classification and construction representations of transformer substations.

  6. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Science.gov (United States)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  7. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Science.gov (United States)

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs can be achieved with the proposed registration approach as an alternative to the labour-intensive manual registration process.

  8. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs can be achieved with the proposed registration approach as an alternative to the labour-intensive manual registration process.

  9. Image Sequence Fusion and Denoising Based on 3D Shearlet Transform

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2014-01-01

Full Text Available We propose a novel algorithm for simultaneous image sequence fusion and denoising in the 3D shearlet transform domain. In general, most existing image fusion methods only consider combining the important information of the source images and do not deal with artifacts; if the source images contain noise, the noise may be transferred into the fused image together with useful pixels. In the 3D shearlet transform domain, we propose that a recursive filter is first applied to the high-pass subbands to obtain denoised high-pass coefficients. The high-pass subbands are then combined using a maximum-selection fusion rule based on a 3D pulse coupled neural network (PCNN), and the low-pass subband is fused using a weighted-sum rule. Experimental results demonstrate that the proposed algorithm yields encouraging results.

  10. Midsagittal plane extraction from brain images based on 3D SIFT.

    Science.gov (United States)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-21

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
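
The sketch below illustrates, without reproducing the authors' implementation, the final robust plane regression step: each pair of mirrored features votes with its midpoint, and a least-median-of-squares search over minimal samples selects the symmetry plane most consistent with the majority of pairs. The synthetic midpoints and outlier fraction are assumptions.

```python
import numpy as np

def lmeds_plane(midpoints, trials=500, seed=0):
    """Fit a plane n.x + d = 0 minimizing the median of squared residuals."""
    rng = np.random.default_rng(seed)
    best_cost, best = np.inf, None
    for _ in range(trials):
        p0, p1, p2 = midpoints[rng.choice(len(midpoints), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        cost = np.median((midpoints @ n + d) ** 2)     # median of squared residuals
        if cost < best_cost:
            best_cost, best = cost, (n, d)
    return best

# Midpoints of mirrored feature pairs cluster on the plane x = 50, with outliers
rng = np.random.default_rng(1)
pts = np.column_stack([np.full(200, 50.0) + rng.normal(0, 0.3, 200),
                       rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)])
pts[:20] += rng.normal(0, 20, (20, 3))                 # mismatched pairs
n, d = lmeds_plane(pts)
print(np.round(n, 2), round(float(d), 1))              # normal ~ (+/-1, 0, 0), d ~ -/+50
```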

  11. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D ... studies and in vivo. Phantom measurements are compared with their corresponding reference value, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes that a high precision ..., if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges ...

  12. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    Science.gov (United States)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more 3D points analyzed, the point-cloud-based approach is an order of magnitude more accurate for the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotation through exploitation of texture information.

  13. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame-rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
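
A minimal sketch of single-pixel imaging with Hadamard patterns: the scene is sampled by projecting each pattern and recording a single detector value, and the image is recovered with the inverse Hadamard transform. The time-of-flight depth gating used in the paper for the 3D dimension is not modelled here; the scene is synthetic.

```python
import numpy as np
from scipy.linalg import hadamard

N = 32                                   # reconstruct a 32x32 scene
H = hadamard(N * N)                      # one +/-1 sampling pattern per row
scene = np.zeros((N, N)); scene[8:24, 8:24] = 1.0
measurements = H @ scene.ravel()         # one photodiode value per projected pattern
recovered = (H.T @ measurements) / (N * N)   # inverse transform (H is orthogonal up to N*N)
print(np.allclose(recovered.reshape(N, N), scene))   # True
```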

  14. A web-based 3D medical image collaborative processing system with videoconference

    Science.gov (United States)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. Meanwhile, a web-based videoconference function is provided to enhance the performance of the whole system. All functions of the system are conveniently available through common web browsers, without any extra client installation. Finally, this paper evaluates the prototype system using 3D medical data sets, which demonstrates the good performance of our system.

  15. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    Institute of Scientific and Technical Information of China (English)

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasound image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which extracts a connected skeleton to express the figure feature. Feature points of the connected skeleton are extracted automatically by repeatedly computing local curvature extrema. Initial registration is performed according to the barycenter of the skeleton. Afterwards, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. The example results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural difference in shape between different parts of an organ, while simultaneously eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.

  16. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

A mutual information based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR body abdomen images. The Parzen Window Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MRI abdomen images. By maximizing the MI between the CT and MR volume images, their overlap is maximized, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdomen CT and MRI data are used as the experimental data sets. The experimental results indicate that this approach to non-rigid 3D registration of CT/MR abdominal images can be carried out effectively and automatically, without any prior processing such as segmentation and feature extraction, but has a main drawback of very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
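
A sketch of the mutual-information similarity used above, estimated here from a simple joint histogram for brevity rather than the paper's Parzen window density estimation; maximizing this value over a deformation is what drives the registration. The synthetic volumes are illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                                   # joint intensity distribution
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

ct = np.random.rand(64, 64, 8)
mr = 0.7 * ct + 0.3 * np.random.rand(64, 64, 8)                 # partially dependent volumes
print(mutual_information(ct, mr), mutual_information(ct, np.random.rand(64, 64, 8)))
```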

  17. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  18. Reproducibility study of 3D SSFP phase-based brain conductivity imaging

    NARCIS (Netherlands)

    Stehning, C.; Katscher, U.; Keupp, J.

    2012-01-01

Noninvasive MR-based Electric Properties Tomography (EPT) forms a framework for an accurate determination of local SAR, and may provide a diagnostic parameter in oncology. 3D SSFP sequences were found to be a promising candidate for fast volumetric conductivity imaging. In this work, an in vivo study

  19. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered; the most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit- (GPU-) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, is used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
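
A hedged sketch using scikit-image's CPU non-local means on a 3-D volume to illustrate the block-wise NLM principle; the paper's Gamma noise model and CUDA/GPU implementation are not reproduced, and the toy volume and filter parameters are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

vol = np.clip(np.random.normal(0.5, 0.1, (32, 64, 64)), 0, 1)   # toy log-compressed US volume
sigma = float(np.mean(estimate_sigma(vol)))                     # rough noise level estimate
den = denoise_nl_means(vol, patch_size=3, patch_distance=5,
                       h=0.8 * sigma, fast_mode=True, sigma=sigma)
print(vol.std(), den.std())   # speckle-like variation is reduced after filtering
```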

  20. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    Science.gov (United States)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method can obtain results with image quality closest to the reference standard dose CT (SDCT) images.

  1. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

Methods are proposed for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment built in the computer onto a real image taken by a CCD camera; computer simulation results are presented.

  2. [Rapid 2D-3D medical image registration based on CUDA].

    Science.gov (United States)

    Li, Lingzhi; Zou, Beiji

    2014-08-01

The medical image registration between preoperative three-dimensional (3D) scan data and intraoperative two-dimensional (2D) images is a key technology in surgical navigation. Most previous methods need to generate 2D digitally reconstructed radiograph (DRR) images from the 3D scan volume data and then use a conventional image similarity function for comparison. This procedure involves a large amount of computation, and real-time processing is difficult to achieve. In this paper, using mixed geometric-feature and image-intensity characteristics, we propose a new similarity measure function for fast 2D-3D registration of preoperative CT and intraoperative X-ray images. The algorithm is easy to implement and the calculation process is very short, while the resulting registration accuracy can meet clinical requirements. In addition, the entire calculation process is well suited to highly parallel numerical computation; using CUDA hardware acceleration, the algorithm satisfies the requirement of real-time application in surgery.

  3. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction and adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM selected facial features. Finally, geometry reconstruction is achieved by linear weighted combination of adaptively selected samples. Radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) Only a single frontal face image is needed for highly automatic face reconstruction; (2) Compared with former works, our reconstruction approach provides higher accuracy; (3) Constraint based RBF texture mapping provides natural appearance for reconstructed face.

  4. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    Science.gov (United States)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of PB angular orientation with respect to a display panel. This was critical for both image color balancing and minimizing image resolution mismatch between horizontal and vertical directions. For evaluating uniformity of image brightness, we applied optical ray tracing simulations. The simulations took effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity at around sweet spots in viewing zones. However, this was contradicted by real experimental results. We offer quantitative treatment of illuminance uniformity of view images to estimate misalignment of PB orientation, which could account for brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation due to practical restrictions of adjustment accuracy can induce substantial non-uniformity of view images' brightness. We find that image brightness non-uniformity critically depends on misalignment of PB angular orientation, for example, as slight as ≤ 0.01° in our system. This reveals that reducing misalignment of PB angular orientation from the order of 10^-2 to 10^-3 degrees can greatly improve the brightness uniformity.

  5. 3-D ultrasonic strain imaging based on a linear scanning system.

    Science.gov (United States)

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe in a freehand manner. When moving the probe along the sliding track, the corresponding positional measures for the probe are transmitted via a wireless communication module based on Bluetooth in real time. In a single examination, the probe is scanned in two sweeps in which the height of the probe is adjusted by the holder to collect the pre- and postcompression radio-frequency echoes, respectively. To generate a 3-D strain image, a volume cubic in which the voxels denote relative strains for tissues is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined and the pre-compression frames are re-sampled to precisely correspond to the post-compression frames. Thereby, a strain estimation method based on minimizing a cost function using dynamic programming is used to obtain the 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep, respectively. A software system is developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied for real clinical applications (e.g., musculoskeletal assessments).

  6. Fusing Multiscale Charts into 3D ENC Systems Based on Underwater Topography and Remote Sensing Image

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2015-01-01

    Full Text Available The purpose of this study is to propose an approach to fuse multiscale charts into three-dimensional (3D) electronic navigational chart (ENC) systems based on underwater topography and remote sensing images. This is the first time that the fusion of multiscale standard ENCs in a 3D ENC system has been studied. First, a view-dependent visualization technology is presented for determining the display condition of a chart. Second, a map sheet processing method is described for dealing with the map sheet splicing problem. A processing order called the "3D order" is designed to adapt to the characteristics of the chart, a map sheet clipping process is described to deal with the overlap between adjacent map sheets, and our strategy for map sheet splicing is proposed. Third, the rendering method for ENC objects in the 3D ENC system is introduced. Fourth, our picking method for ENC objects is proposed. Finally, we implement the above methods in our system, the automotive intelligent chart (AIC) 3D electronic chart display and information system (ECDIS), and show that our method handles the fusion problem well.

  7. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new vision coordinate measuring system, a single-camera 3-D coordinate measuring system based on optical probe imaging, is presented, and a new idea in vision coordinate measurement is proposed. A linear model is deduced that can recover the six degrees of freedom of the optical probe to realize coordinate measurement of the object surface. The effects of several factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is valid.

  8. IMAGE-BASED AIRBORNE LiDAR POINT CLOUD ENCODING FOR 3D BUILDING MODEL RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Y.-C. Chen

    2016-06-01

    Full Text Available With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show
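
    A minimal sketch of the top-view depth-image encoding idea (a simplification of the paper's pipeline; grid size, normalization, and the block histogram descriptor below are assumptions, not the authors' parameters): the roof points of a building are rasterized into a 2D grid keeping the maximum height per cell, and a spatial histogram of that depth image serves as a compact descriptor that can be computed identically for LiDAR queries and for sampled 3D models.

```python
import numpy as np

def topview_depth_image(points, grid=64):
    """Rasterize an (N, 3) point cloud into a top-view depth image by
    keeping the maximum height (z) falling into each (x, y) cell."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    idx = np.floor((xy - mins) / (maxs - mins + 1e-9) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    depth = np.full((grid, grid), np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(depth[i, j]) or z > depth[i, j]:
            depth[i, j] = z
    return np.nan_to_num(depth, nan=points[:, 2].min())

def spatial_histogram(depth, blocks=4, bins=8):
    """Concatenate per-block height histograms into one descriptor."""
    lo, hi = depth.min(), depth.max() + 1e-9
    feats = []
    for bi in np.array_split(np.arange(depth.shape[0]), blocks):
        for bj in np.array_split(np.arange(depth.shape[1]), blocks):
            hist, _ = np.histogram(depth[np.ix_(bi, bj)], bins=bins,
                                   range=(lo, hi), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Hypothetical gabled-roof point cloud: ridge line at x = 5.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(20000, 3))
pts[:, 2] = 5.0 - np.abs(pts[:, 0] - 5.0) * 0.5
desc = spatial_histogram(topview_depth_image(pts))
print(desc.shape)    # (4 * 4 * 8,) = (128,)
```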

  9. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    Science.gov (United States)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority

  10. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... Field II simulations and measurements with the ultrasound research scanner SARUS and a 3.5MHz 1024 element 2-D transducer array. In all investigations, 3-D synthetic aperture imaging achieved a smaller main-lobe, lower sidelobes, higher contrast, and better signal to noise ratio than parallel...

  11. Internet2-based 3D PET image reconstruction using a PC cluster.

    Science.gov (United States)

    Shattuck, D W; Rapela, J; Asma, E; Chatzioannou, A; Qi, J; Leahy, R M

    2002-08-07

    We describe an approach to fast iterative reconstruction from fully three-dimensional (3D) PET data using a network of PentiumIII PCs configured as a Beowulf cluster. To facilitate the use of this system, we have developed a browser-based interface using Java. The system compresses PET data on the user's machine, sends these data over a network, and instructs the PC cluster to reconstruct the image. The cluster implements a parallelized version of our preconditioned conjugate gradient method for fully 3D MAP image reconstruction. We report on the speed-up factors using the Beowulf approach and the impacts of communication latencies in the local cluster network and the network connection between the user's machine and our PC cluster.
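
    The reconstruction engine is a preconditioned conjugate gradient (PCG) optimizer. As a generic illustration of PCG on a small quadratic stand-in problem (not the cluster's parallelized Poisson-likelihood MAP objective, and without any of the Beowulf communication logic), the following sketch uses a Jacobi (diagonal) preconditioner; all matrices are synthetic.

```python
import numpy as np

def pcg(A, b, precond, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for A x = b with SPD A.
    `precond` applies the preconditioner to a residual vector."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system standing in for the (much larger) MAP normal equations.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, precond=lambda r: r / np.diag(A))   # Jacobi preconditioner
print(np.linalg.norm(A @ x - b))                  # residual near zero
```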

  12. Pico-projector-based optical sectioning microscopy for 3D chlorophyll fluorescence imaging of mesophyll cells

    Science.gov (United States)

    Chen, Szu-Yu; Hsu, Yu John; Yeh, Chia-Hua; Chen, S.-Wei; Chung, Chien-Han

    2015-03-01

    A pico-projector-based optical sectioning microscope (POSM) was constructed using a pico-projector to generate structured illumination patterns. A net rate of 5.8 × 10^6 pixel/s and sub-micron spatial resolution in three dimensions (3D) were achieved. Exploiting the pico-projector's flexibility in pattern generation, the characteristics of POSM with different modulation periods and at different imaging depths were measured and discussed. With the application of different modulation periods, 3D chlorophyll fluorescence imaging of mesophyll cells was carried out in freshly plucked leaves of four species without sectioning or staining. For each leaf, an average penetration depth of 120 μm was achieved. By increasing the modulation period along with the imaging depth, optical sectioning images can be obtained with a compromise between axial resolution and signal-to-noise ratio. After ∼30 min of imaging of the same area, photodamage was hardly observed. Taking advantage of the high speed and low photodamage of POSM, the dynamic fluorescence responses to temperature changes were investigated under three different treatment temperatures. The projector's three embedded blue, green and red light-emitting diode light sources were applied to observe the responses of the leaves under excitation at different wavelengths.
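
    Structured-illumination optical sectioning of this kind is commonly computed from three phase-shifted illumination patterns; a minimal sketch of that classic reconstruction (not necessarily the exact processing used with the pico-projector, and with a purely synthetic scene) is shown below. The differences between the phase-shifted frames cancel the out-of-focus background, leaving the modulated in-focus signal.

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Classic three-phase structured-illumination sectioning: the in-focus
    (modulated) component survives, out-of-focus light cancels."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic demo: an in-focus scene modulated by a grid at phases
# 0, 2*pi/3, 4*pi/3, riding on a constant out-of-focus background.
h, w, period = 256, 256, 16
x = np.arange(w)[None, :].repeat(h, axis=0)
infocus = np.random.default_rng(0).random((h, w))
background = 0.8
frames = [background + infocus * (1 + np.cos(2 * np.pi * x / period + ph)) / 2
          for ph in (0, 2 * np.pi / 3, 4 * np.pi / 3)]

section = optical_section(*frames)
print(section.mean())   # proportional to the in-focus signal, background removed
```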

  13. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Science.gov (United States)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
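
    The motion model amounts to a principal component analysis over the pre-beam 4D-MRI deformation fields, whose coefficients are later fitted to the fast 2D cine data. A highly simplified sketch of that two-step idea is given below; the flattened DVFs, the plain least-squares fit to the observed slice components, and all array names and sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-beam phase: DVFs from the sorted 4D-MRI, flattened to vectors.
# Synthetic shape: (n_respiratory_phases, n_voxels * 3).
dvfs = rng.standard_normal((10, 3000))
mean_dvf = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
components = Vt[:2]                       # keep the first two motion modes

# During treatment: only part of the DVF is constrained by the 2D cine slice.
observed_idx = rng.choice(3000, size=300, replace=False)
true_w = np.array([1.5, -0.7])
observed = mean_dvf[observed_idx] + true_w @ components[:, observed_idx]

# Fit the mode weights to the observed components by least squares.
A = components[:, observed_idx].T                        # (300, 2)
w, *_ = np.linalg.lstsq(A, observed - mean_dvf[observed_idx], rcond=None)

full_dvf = mean_dvf + w @ components                     # full 3D DVF estimate
print(np.round(w, 3))                                    # ~ [1.5, -0.7]
```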

  14. Image-based Virtual Exhibit and Its Extension to 3D

    Institute of Scientific and Technical Information of China (English)

    Ming-Min Zhang; Zhi-Geng Pan; Li-Feng Ren; Peng Wang

    2007-01-01

    In this paper we introduce an image-based virtual exhibition system, designed especially for clothing products. It provides a powerful material substitution function, which is very useful for customized clothing. A novel color substitution algorithm and two texture morphing methods are designed to ensure realistic substitution results. To extend the system to 3D, we need to reconstruct models from photos, so we present an improved method for modeling the human body. It deforms a generic model with shape details extracted from pictures to generate a new model. Our method begins with model image generation, followed by silhouette extraction and segmentation. It then builds a mapping between the pixels inside every pair of silhouette segments in the model image and in the picture. Our mapping algorithm is based on a slice-space representation that conforms to the natural features of the human body.

  15. A flexible new method for 3D measurement based on multi-view image sequences

    Science.gov (United States)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a fundamental part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the calculation of the essential matrix is designed, and the essential matrix is computed using an improved a contrario RANSAC filtering method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors introduced by newly added views, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; the experimental results show that the proposed method is fast, accurate, and flexible for 3D tooth measurement.
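
    The descriptor comparison replaces the Euclidean distance with a Hellinger kernel, which treats SIFT descriptors as histograms (essentially the RootSIFT idea). A minimal sketch of that substitution with brute-force nearest-neighbour matching follows; the random descriptors are placeholders for real SIFT output.

```python
import numpy as np

def hellinger_distance(d1, d2):
    """Hellinger distance between two descriptors treated as histograms."""
    d1 = d1 / (np.abs(d1).sum() + 1e-12)     # L1 normalization
    d2 = d2 / (np.abs(d2).sum() + 1e-12)
    return np.sqrt(0.5 * ((np.sqrt(d1) - np.sqrt(d2)) ** 2).sum())

def match(desc_a, desc_b):
    """Nearest-neighbour matching under the Hellinger distance
    (brute force, for illustration only)."""
    dists = np.array([[hellinger_distance(a, b) for b in desc_b] for a in desc_a])
    return dists.argmin(axis=1), dists.min(axis=1)

# Placeholder 128-D SIFT-like descriptors.
rng = np.random.default_rng(0)
desc_a = rng.random((20, 128))
desc_b = rng.random((30, 128))
idx, dist = match(desc_a, desc_b)
print(idx[:5], np.round(dist[:5], 3))
```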

  16. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    continue to perform 3D correlation imaging for the residual gravity data. After several iterations, we can obtain satisfactory results. Newly developed general-purpose computing technology for Nvidia GPUs (Graphics Processing Units) has been put into practice and received widespread attention in many areas. Based on the GPU programming model and two parallel levels, the five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are defined as the dimensions of blocks and the three outer loops are defined as the dimensions of threads, thus realizing the double-loop block calculation. Tests with theoretical and real gravity data show that the results are reliable and the computing time is greatly reduced. Acknowledgments We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).

  17. Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging

    Science.gov (United States)

    Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.

    2016-11-01

    Blood vessels are the only system that provides nutrients and oxygen to every part of the body. Many diseases have significant effects on blood vessel formation, so the vascular network can be a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by expensive costs, contrast agents, ionizing radiation, or a combination of the above. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth than purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired in rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four out of the six descriptors were statistically different (VD, NB, DM, ICM, p < 0.05) when comparing two anatomic locations on the same day and when considering the same anatomic location at two separate times (i.e. before and after burn surgery). The study demonstrates an
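
    The architectural analysis rests on a vesselness filter followed by medial-axis extraction. A small 2D sketch of that pipeline with scikit-image is given below, applied to a synthetic image rather than a photoacoustic maximum-intensity projection; the filter scales and Otsu threshold are arbitrary assumptions. From the skeleton, a simple descriptor such as vascular density can be read off directly.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

# Synthetic "vessel" image: a few bright curved lines on a dark background.
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
img = np.zeros((h, w))
for k in (60, 120, 180):
    img += np.exp(-((yy - (k + 20 * np.sin(xx / 25.0))) ** 2) / (2 * 2.0 ** 2))

vesselness = frangi(img, sigmas=range(1, 4), black_ridges=False)  # tube enhancement
mask = vesselness > threshold_otsu(vesselness)                    # binarize
skeleton = skeletonize(mask)                                      # 1-pixel medial axis

vascular_density = skeleton.sum() / mask.size     # one simple descriptor
print(f"vascular density: {vascular_density:.4f}")
```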

  18. 3D nanostructure reconstruction based on the SEM imaging principle, and applications.

    Science.gov (United States)

    Zhu, Fu-Yun; Wang, Qi-Qi; Zhang, Xiao-Sheng; Hu, Wei; Zhao, Xin; Zhang, Hai-Xia

    2014-05-09

    This paper addresses a novel 3D reconstruction method for nanostructures based on the scanning electron microscopy (SEM) imaging principle. In this method, the shape from shading (SFS) technique is employed, to analyze the gray-scale information of a single top-view SEM image which contains all the visible surface information, and finally to reconstruct the 3D surface morphology. It offers not only unobstructed observation from various angles but also the exact physical dimensions of nanostructures. A convenient and commercially available tool (NanoViewer) is developed based on this method for nanostructure analysis and characterization of properties. The reconstruction result coincides well with the SEM nanostructure image and is verified in different ways. With the extracted structure information, subsequent research of the nanostructure can be carried out, such as roughness analysis, optimizing properties by structure improvement and performance simulation with a reconstruction model. Efficient, practical and non-destructive, the method will become a powerful tool for nanostructure surface observation and characterization.

  19. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    Science.gov (United States)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both the RGB and TIR images were taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and its inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering the RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from the RGB images and from the TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from the RGB and TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back-projection of homologous points in the corrected RGB and TIR images.

  20. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    Science.gov (United States)

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.
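
    A toy illustration of the 3D-2D objective: generate a digitally reconstructed radiograph (DRR) from a (deformed) CT volume by ray summation along one axis and score it against the measured projections with a sum of squared differences. The real method additionally parameterizes the deformation on the feature-based nonuniform mesh and adds a regularity term, neither of which is shown here; the volumes and projections below are synthetic placeholders.

```python
import numpy as np

def drr(volume, axis=0):
    """Very simple digitally reconstructed radiograph: parallel-ray
    line integrals approximated by summation along one axis."""
    return volume.sum(axis=axis)

def objective(deformed_ct, projections):
    """Sum of squared differences between DRRs of the deformed CT and
    the measured 2D projections (regularity term omitted)."""
    return sum(((drr(deformed_ct, axis=ax) - proj) ** 2).sum()
               for ax, proj in projections.items())

# Synthetic 3D "CT" volume and two noisy reference projections.
rng = np.random.default_rng(0)
ct = rng.random((32, 64, 64))
projections = {0: drr(ct, 0) + 0.01 * rng.standard_normal((64, 64)),
               1: drr(ct, 1) + 0.01 * rng.standard_normal((32, 64))}

print(objective(ct, projections))                       # small: volume matches
print(objective(np.roll(ct, 5, axis=2), projections))   # larger: misaligned
```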

  1. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    Directory of Open Access Journals (Sweden)

    Zichun Zhong

    2016-01-01

    Full Text Available By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  2. Superresolution of 3-D computational integral imaging based on moving least square method.

    Science.gov (United States)

    Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho

    2014-11-17

    In this paper, we propose an edge directive moving least square (ED-MLS) based superresolution method for computational integral imaging reconstruction (CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, we propose the edge directive moving least square (ED-MLS) based superresolution method which utilizes the properties of the moving least square. The proposed ED-MLS based superresolution takes the direction of the edge into account in the moving least square reconstruction to deal with the abrupt brightness changes in the edge regions, and is less sensitive to the registration error. Furthermore, we propose a framework which shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced, and that a high resolution reconstructed 3-D image can be obtained with the proposed method.

  3. A hyperspectral images compression algorithm based on 3D bit plane transform

    Science.gov (United States)

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue

    2010-10-01

    Based on an analysis of hyperspectral images, a new compression algorithm using a 3-D bit plane transform is proposed. The spectral correlation is higher than the spatial correlation. The algorithm is proposed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce the correlation when neighboring pixels have similar values. The algorithm calculates the horizontal, vertical, and spectral bit plane transforms sequentially. Like the spectral bit plane transform, the algorithm can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized with a parallel computing model, which improves the calculation efficiency and greatly reduces the processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyperspectral image compression system by efficiently reducing the cost of computation and memory usage.

  4. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    Science.gov (United States)

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.

  5. Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2016-10-01

    Full Text Available A method for 3D image representation that reduces the number of frames based on characteristics of the human eye is proposed, together with a representation of 3D depth achieved by changing pixel transparency. Through experiments, it is found that the proposed method allows a reduction of the number of frames by a factor of 1/6. It can also represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.

  6. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    Full Text Available In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, mean, maximum, and minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimensionality of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' (singular value) matrix are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed: one with the mean and maximum curvature pair, and another with the Gaussian and mean curvature pair. These are used to compare the results for a better recognition rate. The automated 3D face recognition system is evaluated in different scenarios: frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varied 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
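
    The four curvature maps are derived from the first and second derivatives of the range image. A compact sketch of computing Gaussian, mean, and principal (maximum/minimum) curvature maps from a depth image with smoothed finite differences is shown below; the smoothing scale and the synthetic spherical surface are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def curvature_maps(z, sigma=2.0):
    """Gaussian (K), mean (H), maximum and minimum curvature maps of a
    range image z(x, y), from smoothed finite-difference derivatives."""
    z = gaussian_filter(z, sigma)
    zy, zx = np.gradient(z)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / denom ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy + (1 + zy ** 2) * zxx) \
        / (2 * denom ** 1.5)
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    return K, H, H + disc, H - disc          # K, H, k_max, k_min

# Synthetic range image: a sphere cap of radius 50 (K ~ 1/50^2, |H| ~ 1/50).
yy, xx = np.mgrid[-40:41, -40:41].astype(float)
z = np.sqrt(np.maximum(50.0 ** 2 - xx ** 2 - yy ** 2, 0.0))
K, H, kmax, kmin = curvature_maps(z)
print(round(float(K[40, 40]), 5), round(float(H[40, 40]), 4))
```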

  7. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing.

    Science.gov (United States)

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-06-22

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding the contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  8. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding the contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  9. Statistical Inverse Ray Tracing for Image-Based 3D Modeling.

    Science.gov (United States)

    Liu, Shubao; Cooper, David B

    2014-10-01

    This paper proposes a new formulation and solution to image-based 3D modeling (aka "multi-view stereo") based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately through optimizing a physically sound image generation model based on volumetric ray tracing. Together with geometric priors, they are put together into a Bayesian formulation known as Markov random field (MRF) model. This MRF model is different from typical MRFs used in image analysis in the sense that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with large clique size, an algorithm with linear computational complexity is developed by exploiting, using dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.

  10. EDGE BASED 3D INDOOR CORRIDOR MODELING USING A SINGLE IMAGE

    Directory of Open Access Journals (Sweden)

    A. Baligh Jahromi

    2015-08-01

    Full Text Available Reconstruction of the spatial layout of indoor scenes from a single image is an inherently ambiguous problem. However, indoor scenes are usually composed of orthogonal planes. The regularity of the planar configuration (scene layout) is often recognizable, which provides valuable information for understanding indoor scenes. Most current methods define the scene layout as a single cubic primitive. This domain-specific knowledge is often not valid in the many indoor environments where multiple corridors are linked to each other. In this paper, we aim to address this problem by hypothesizing and verifying multiple cubic primitives representing the indoor scene layout. The method utilizes middle-level perceptual organization and relies on finding the ground-wall and ceiling-wall boundaries using detected line segments and the orthogonal vanishing points. A comprehensive interpretation of these edge relations is often hindered by shadows and occlusions. To handle this problem, the proposed method introduces virtual rays, which aid in the creation of a physically valid cubic structure by using the orthogonal vanishing points. The straight line segments are extracted from the single image and the orthogonal vanishing points are estimated using the RANSAC approach. Many scene layout hypotheses are created by intersecting random line segments and virtual rays of the vanishing points. The created hypotheses are evaluated by a geometric-reasoning-based objective function to find the hypothesis that best fits the image. The best model hypothesis, offered with the highest score, is then converted to a 3D model. The proposed method is fully automatic, and no human intervention is necessary to obtain an approximate 3D reconstruction.

  11. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect, and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of the perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, the concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion between the original and distorted abstract difference maps through a luminance function, error sensitivity, and a contrast function. Finally, the OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity are more than 0.92 under five types of distortion: Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  12. 3D IMAGE BASED GEOMETRIC DOCUMENTATION OF THE TOWER OF WINDS

    Directory of Open Access Journals (Sweden)

    M. S. Tryfona

    2016-06-01

    Full Text Available This paper describes and investigates the implementation of almost entirely image-based contemporary techniques for the three-dimensional geometric documentation of the Tower of the Winds in Athens, a unique and very special monument of the Roman era. These techniques and the related algorithms were implemented using a well-known piece of commercial software, with extreme caution in the selection of the various parameters. Problems related to data acquisition and processing, but also to the algorithms and to the software implementation, are identified and discussed. The resulting point cloud has been georeferenced, i.e. referenced to a local Cartesian coordinate system through minimal geodetic measurements; subsequently the surface, i.e. the mesh, was created, and finally the three-dimensional textured model was produced. In this way, the geometric documentation drawings, i.e. the horizontal section plans, the vertical section plans and the elevations, which include orthophotos of the monument, can be produced at will from that 3D model for complete geometric documentation. Finally, a 3D tour of the Tower of the Winds has also been created for a more integrated view of the monument. The results are presented and evaluated for their completeness, efficiency, accuracy and ease of production.

  13. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    Science.gov (United States)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomographic (CT) images a great aid to research in biomedical engineering. To keep up with Internet-based technology, where 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be generally applicable to medical visualization. Furthermore, this project is a future part of the PACS system our lab is working on, so the system uses the 3D file format VRML 2.0, which is used through the Web interface for manipulating 3D models. In this program, we generate and modify triangular isosurface meshes with the marching cubes algorithm, and we use OpenGL and MFC techniques to render the isosurface and manipulate the voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or data preprocessing.
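
    The isosurface stage is standard marching cubes. A short sketch of generating a triangular isosurface mesh from a CT-like volume with scikit-image is given below; the synthetic sphere volume and iso level are placeholders, and the OpenGL rendering and VRML export of the original work are not shown.

```python
import numpy as np
from skimage.measure import marching_cubes

# Synthetic "CT" volume: a filled sphere of radius 20 in a 64^3 grid.
zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32].astype(float)
volume = 20.0 - np.sqrt(xx ** 2 + yy ** 2 + zz ** 2)   # positive inside

# Extract the triangular isosurface mesh at the zero level set.
verts, faces, normals, values = marching_cubes(volume, level=0.0)

print(verts.shape, faces.shape)   # (N, 3) vertices and (M, 3) triangles
```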

  14. Model-based segmentation and quantification of subcellular structures in 2D and 3D fluorescent microscopy images

    Science.gov (United States)

    Wörz, Stefan; Heinzer, Stephan; Weiss, Matthias; Rohr, Karl

    2008-03-01

    We introduce a model-based approach for segmenting and quantifying GFP-tagged subcellular structures of the Golgi apparatus in 2D and 3D microscopy images. The approach is based on 2D and 3D intensity models, which are directly fitted to an image within 2D circular or 3D spherical regions-of-interest (ROIs). We also propose automatic approaches for the detection of candidates, for the initialization of the model parameters, and for adapting the size of the ROI used for model fitting. Based on the fitting results, we determine statistical information about the spatial distribution and the total amount of intensity (fluorescence) of the subcellular structures. We demonstrate the applicability of our new approach based on 2D and 3D microscopy images.
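
    As an illustration of the model-fitting idea, the sketch below fits a generic isotropic 2D Gaussian intensity model inside a circular ROI with scipy and reads off the integrated intensity; the actual intensity models, ROI-size adaptation, and candidate detection of the paper are not reproduced, and the spot parameters are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, a, x0, y0, s, b):
    """Isotropic 2D Gaussian intensity model plus constant background."""
    x, y = coords
    return a * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + b

# Synthetic fluorescent spot inside a small 2D ROI.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:21, 0:21].astype(float)
truth = (5.0, 10.3, 9.7, 1.8, 1.0)                        # a, x0, y0, sigma, b
img = gauss2d((xx, yy), *truth) + 0.1 * rng.standard_normal(xx.shape)

# Restrict the fit to a circular ROI around the detected candidate.
mask = (xx - 10) ** 2 + (yy - 10) ** 2 <= 8 ** 2
p0 = (img.max() - img.min(), 10, 10, 2, img.min())        # initial parameters
popt, _ = curve_fit(gauss2d, (xx[mask], yy[mask]), img[mask], p0=p0)

total_intensity = 2 * np.pi * popt[0] * popt[3] ** 2      # integrated fluorescence
print(np.round(popt, 2), round(float(total_intensity), 2))
```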

  15. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment on the 3D reconstruction of NMR images scanned by a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method built on the program Vision Assistant, which is a part of LabVIEW, was chosen.

  16. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide measurements of sub-pixel precision for corresponding image points. With the help of an automated point search algorithm, identical points in the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo models can be computed automatically. With the help of 3D reference points or distances on the object, or a defined length of the camera base, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud whose resolution depends on the image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud; in this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be done automatically using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The

  17. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  18. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  19. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    Science.gov (United States)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  20. REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2015-08-01

    Full Text Available Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  1. Terahertz imaging system based on bessel beams via 3D printed axicons at 100GHz

    Science.gov (United States)

    Liu, Changming; Wei, Xuli; Zhang, Zhongqi; Wang, Kejia; Yang, Zhenggang; Liu, Jinsong

    2014-11-01

    Terahertz (THz) imaging technology shows great advantages in nondestructive detection (NDT), since many optically opaque materials are transparent to THz waves. In this paper, we design and fabricate dielectric axicons by 3D printing to generate zeroth-order Bessel beams. We further present an all-electric THz imaging system using the generated Bessel beams at 100 GHz. Resolution targets made of printed circuit board are imaged, and the results clearly show the extended depth of focus of the Bessel beam, indicating the promise of Bessel beams for THz NDT.
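
    For reference, the transverse intensity of an ideal zeroth-order Bessel beam behind an axicon follows I(r) ∝ J0(k r sinθ)², with θ the deflection angle set by the axicon. A short sketch evaluating that profile at 100 GHz is given below; the axicon base angle and refractive index are illustrative assumptions, not those of the printed lenses.

```python
import numpy as np
from scipy.special import j0

c = 3e8                          # speed of light, m/s
f = 100e9                        # 100 GHz
k = 2 * np.pi * f / c            # free-space wavenumber

# Axicon deflection angle: theta ≈ (n - 1) * alpha for a small base angle.
n, alpha = 1.6, np.deg2rad(10)   # assumed refractive index and base angle
theta = (n - 1) * alpha

r = np.linspace(0, 0.02, 500)                # radial coordinate, 0-20 mm
intensity = j0(k * r * np.sin(theta)) ** 2   # zeroth-order Bessel profile

# Central-lobe radius ~ first zero of J0 (≈ 2.405) / (k sin(theta)).
central_lobe = 2.405 / (k * np.sin(theta))
print(f"central lobe radius ≈ {central_lobe * 1e3:.2f} mm")
```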

  2. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2009-01-01

    Full Text Available Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra-large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional octree partition method, a new partition scheme is proposed in this paper. This method divides both the projection data and the reconstructed image volume into subsets according to the geometric symmetries of the circular cone-beam projection layout, and a fast reconstruction for large data volumes can be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk, and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulated data and real micro-CT data. Our results indicate that the GPU implementation can maintain the original precision and speed up the reconstruction process by 110–120 times for a circular cone-beam scan, as compared to a traditional CPU implementation.

  3. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    Science.gov (United States)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high-quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels acquired at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
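
    For scale, 512x256x128 is about 16.8 million voxels, so at 10 MVoxels/s a full volume arrives roughly every 1.7 s, which bounds the per-volume filtering budget. The paper does not give the filter itself; a simple local-statistics (Lee-type) 3D adaptive filter such as the sketch below is one common form of adaptive noise reduction:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_filter_3d(vol, size=5, noise_var=0.01):
            """Lee-type adaptive smoothing: strong smoothing in flat regions,
            little smoothing where the local variance (features, edges) is high."""
            vol = vol.astype(np.float64)
            local_mean = uniform_filter(vol, size)
            local_sq = uniform_filter(vol * vol, size)
            local_var = np.maximum(local_sq - local_mean ** 2, 0.0)
            gain = local_var / (local_var + noise_var)   # ~0 in flat areas, ~1 on features
            return local_mean + gain * (vol - local_mean)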

  4. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    Science.gov (United States)

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for stereoscopic image quality assessment (IQA) aim to estimate the quality of 3D images in a manner that agrees with human judgments. Modern stereoscopic IQA algorithms often apply 2D IQA algorithms to the stereoscopic views, disparity maps, and/or cyclopean images to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) for evaluating 3D image quality. 3D-MAD operates in two main stages, which estimate the perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied to the two monocular views, and the combined binocular quality is then estimated via a weighted sum of the two estimates, where the weights are determined by a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. The cyclopean view quality is then estimated by measuring statistical-difference-based features obtained from the reference and the distorted stereopairs. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms.

  5. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine.

    Science.gov (United States)

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were acquired using a 64-channel multidetector scanner. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group), and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire after using the simulator. The group trained using the AR simulator was more proficient at the IV injection technique in real dogs than the control group (P ≤ 0.01). The students agreed that they had learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education.

  6. Laser Based 3D Volumetric Display System

    Science.gov (United States)

    1993-03-01

    Only fragments of the report front matter and reference list survive: authors P. Soltan, J. Trias, W. Robinson, W. Dahlke; cited references include "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas... The report describes laser-generated 3D volumetric images drawn on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye.

  7. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation makes high dynamic range 3D imaging effective. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces.
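
    The ray-based calibration assigns each ray of the light field its own phase-to-depth mapping. Assuming, for illustration, that this mapping is well approximated by a low-order polynomial per ray, the coefficients can be fitted from reference planes placed at known depths:

        import numpy as np

        def calibrate_rays(phases, depths, order=2):
            """phases: (n_planes, n_rays) unwrapped phase measured on reference
            planes at known depths (n_planes,). Returns per-ray polynomial
            coefficients mapping phase -> depth."""
            n_planes, n_rays = phases.shape
            coeffs = np.empty((order + 1, n_rays))
            for r in range(n_rays):
                # independent least-squares fit depth = p(phase) for every ray
                coeffs[:, r] = np.polyfit(phases[:, r], depths, order)
            return coeffs

        def phase_to_depth(phase, coeffs):
            """Evaluate the per-ray polynomial for a measured phase map (n_rays,)."""
            return np.array([np.polyval(coeffs[:, r], p) for r, p in enumerate(phase)])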

  8. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    Science.gov (United States)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is the consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate some relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such material related to the fibers orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distribution as well as local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulations on 3D realistic microstructure is a very interesting alternative to experimental measurements.

  9. 3D Myocardial Contraction Imaging Based on Dynamic Grid Interpolation: Theory and Simulation Analysis

    Science.gov (United States)

    Bu, Shuhui; Shiina, Tsuyoshi; Yamakawa, Makoto; Takizawa, Hotaka

    Accurate assessment of local myocardial contraction is important for diagnosis of ischemic heart disease, because decreases of myocardial motion often appear in the early stages of the disease. Three-dimensional (3-D) assessment of the stiffness distribution is required for accurate diagnosis of ischemic heart disease. Since myocardium motion occurs radially within the left ventricle wall and the ultrasound beam propagates axially, conventional approaches, such as tissue Doppler imaging and strain-rate imaging techniques, cannot provide us with enough quantitative information about local myocardial contraction. In order to resolve this problem, we propose a novel myocardial contraction imaging system which utilizes the weighted phase gradient method, the extended combined autocorrelation method, and the dynamic grid interpolation (DGI) method. From the simulation results, we conclude that the strain image's accuracy and contrast have been improved by the proposed method.

  10. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Thanh N., E-mail: thanh.tran@merck.com [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Nguyen, Thanh T.; Willemsz, Tofan A. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Pharmaceutical Sciences and Clinical Supplies, Merck MSD, PO Box 20, 5340 BH Oss (Netherlands); Kessel, Gijs van [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Frijlink, Henderik W. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Voort Maarschalk, Kees van der [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Competence Center Process Technology, Purac Biochem, Gorinchem (Netherlands)

    2012-05-06

    Highlights: ► We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classification of multivariate images. ► The algorithm takes into account the coordinate system of the image data to improve the computational performance. ► The algorithm solves the instability problem in boundary detection of the original DBSCAN. ► The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) because it uses the coordinate system of the image data. Secondly, the revised algorithm solves the instability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can be an ordinary 3D image or, in general, a classification result of other types of image data, e.g. a multivariate image.
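
    The central idea — finding neighbours through the voxel grid itself instead of a pairwise distance table — can be illustrated with a simplified density-based clustering of the foreground voxels of a 3D binary image (a sketch inspired by the paper, not the authors' code; eps and min_pts are arbitrary):

        import numpy as np
        from collections import deque

        def image_dbscan_3d(mask, eps=1.5, min_pts=5):
            """Simplified DBSCAN on the foreground voxels of a 3D binary image.
            Neighbours are found by offset lookups on the voxel grid, so no
            N x N distance table is ever built. Returns a label volume with
            0 = background, -1 = noise, 1..k = clusters."""
            coords = np.argwhere(mask)
            index = {tuple(c): i for i, c in enumerate(coords)}
            r = int(np.floor(eps))
            offs = [(dz, dy, dx)
                    for dz in range(-r, r + 1)
                    for dy in range(-r, r + 1)
                    for dx in range(-r, r + 1)
                    if 0 < dz*dz + dy*dy + dx*dx <= eps*eps]

            def neighbours(i):
                z, y, x = coords[i]
                return [index[(z+dz, y+dy, x+dx)] for dz, dy, dx in offs
                        if (z+dz, y+dy, x+dx) in index]

            labels = np.full(len(coords), -1, dtype=int)   # -1 = unvisited / noise
            cluster = 0
            for seed in range(len(coords)):
                if labels[seed] != -1:
                    continue
                nbrs = neighbours(seed)
                if len(nbrs) < min_pts:
                    continue                                # not a core point
                cluster += 1
                labels[seed] = cluster
                queue = deque(nbrs)
                while queue:
                    j = queue.popleft()
                    if labels[j] > 0:
                        continue
                    labels[j] = cluster
                    jn = neighbours(j)
                    if len(jn) >= min_pts:                  # core point: keep expanding
                        queue.extend(jn)
            out = np.zeros(mask.shape, dtype=int)
            out[tuple(coords.T)] = labels
            return out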

  11. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-Scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT-Scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM based classifier so as to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-Scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than Sensor Pattern Noise (SPN) based strategy proposed for general public camera devices.

  12. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used a laser scanner as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived using photogrammetric processing software, simply from images of the community, without visiting the site.

  13. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore, it is important to consider it during image reconstruction in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Furthermore, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show a significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
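
    In practice the correction amounts to convolving the current image estimate with a material-dependent positron range kernel inside the forward (and optionally backward) projector. A schematic sketch, using isotropic Gaussians as stand-ins for the Monte Carlo range profiles and a hypothetical project() operator:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def range_blur(image, material_map, sigma_by_material):
            """Blur the activity image with a different kernel per material
            (e.g. lung vs. soft tissue vs. bone), then recombine the regions."""
            blurred = np.zeros_like(image, dtype=float)
            for label, sigma in sigma_by_material.items():
                mask = material_map == label
                blurred[mask] = gaussian_filter(image * mask, sigma)[mask]
            return blurred

        def forward_project_with_range(image, material_map, sigma_by_material, project):
            """'project' is a hypothetical system forward projector; positron
            range is modelled by blurring the image before projecting it."""
            return project(range_blur(image, material_map, sigma_by_material))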

  14. Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests

    OpenAIRE

    Golparvar-Fard, Mani; Balali, Vahid; de la Garza, Jesus M.

    2013-01-01

    This dataset was collected as part of research work on the segmentation and recognition of highway assets in images and video. The research is described in detail in the Journal of Computing in Civil Engineering - ASCE paper "Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests". The dataset includes 12 highway asset categories and 3 different datasets divided into three groups: (a) Ground Truth images with #_#_s_GT.jpg filename...

  15. Measurement of 3-D motion parameters of jacket launch based on stereo image sequences

    Institute of Scientific and Technical Information of China (English)

    HU Zhi-ping; OU Zong-ying; LIN Yan; LI Yun-feng

    2008-01-01

    To make sure that the jacket launch process occurs in a semi-controlled manner, this paper deals with the measurement of the kinematic parameters of jacket launch using stereo vision and motion analysis. The system captured stereo image sequences with two separate CCD cameras and then reconstructed the 3D coordinates of the feature points to analyze the jacket launch motion. The possibility of combining stereo vision and motion analysis for this measurement was examined. Experimental results using a scale model of a jacket confirm the theoretical data.

  16. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure at the boundary of the field-of-view (FOV) of the PET tomograph a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
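
    The iterative techniques referred to above are, at their core, the multiplicative MLEM update x_j <- x_j / (sum_i a_ij) * sum_i a_ij y_i / (A x)_i. A dense-matrix toy version is sketched below; real scanners use on-the-fly projectors over millions of LORs rather than an explicit system matrix:

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Minimal MLEM sketch: A is the (n_lors x n_voxels) system matrix,
            y the measured counts per line of response."""
            x = np.ones(A.shape[1])              # uniform initial image
            sens = A.sum(axis=0)                 # sensitivity image, sum_i a_ij
            sens[sens == 0] = 1.0
            for _ in range(n_iter):
                proj = A @ x                     # forward projection
                proj[proj == 0] = 1e-12
                x *= (A.T @ (y / proj)) / sens   # multiplicative MLEM update
            return x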

  17. A LabVIEW based user-friendly nano-CT image alignment and 3D reconstruction platform

    CERN Document Server

    Wang, Shenghao; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray nanometer computed tomography (nano-CT) offers applications and opportunities in many scientific research and industrial areas. Here we present a user-friendly and fast LabVIEW-based package that runs, after acquisition of the raw projection images, a procedure to obtain the inner structure of the sample under analysis. First, a reliable image alignment procedure fixes possible misalignments among the image series due to mechanical errors, thermal expansion and other external contributions; then a novel fast parallel-beam 3D reconstruction performs the tomographic reconstruction. The remarkably improved reconstruction after the image calibration confirms the fundamental role of the image alignment procedure. It minimizes blurring and additional streaking artifacts present in a reconstructed slice that cause loss of information and fake structures in the observed material. The nano-CT image alignment and 3D reconstruction LabVIEW package significantly reduces the data processing, making faster and easier th...

  18. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    Science.gov (United States)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from a US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of the 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm, depending on the size of the deformation.

  19. Web-based interactive visualization of 3D video mosaics using X3D standard

    Institute of Scientific and Technical Information of China (English)

    CHON Jaechoon; LEE Yang-Won; SHIBASAKI Ryosuke

    2006-01-01

    We present a method of 3D image mosaicing for real 3D representation of roadside buildings, and implement a Web-based interactive visualization environment for the 3D video mosaics created by 3D image mosaicing. The 3D image mosaicing technique developed in our previous work is a very powerful method for creating textured 3D-GIS data without excessive data processing like the laser or stereo system. For the Web-based open access to the 3D video mosaics, we build an interactive visualization environment using X3D, the emerging standard of Web 3D. We conduct the data preprocessing for 3D video mosaics and the X3D modeling for textured 3D data. The data preprocessing includes the conversion of each frame of 3D video mosaics into concatenated image files that can be hyperlinked on the Web. The X3D modeling handles the representation of concatenated images using necessary X3D nodes. By employing X3D as the data format for 3D image mosaics, the real 3D representation of roadside buildings is extended to the Web and mobile service systems.

  20. Heat Equation to 3D Image Segmentation

    Directory of Open Access Journals (Sweden)

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and objects' surface reconstruction. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and multiple 3D object reconstruction. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. Then the surface is evolved by the geometric heat differential equation in the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
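
    The evolution step is an explicit forward-difference discretization of the heat equation; in 3D the scheme is stable for dt <= h^2/6. A minimal numpy version of one such step is given below (the paper's Mathematica implementation additionally includes the centripetal force and penalty terms, omitted here; periodic boundaries are used for brevity):

        import numpy as np

        def heat_step(u, dt=1.0/12, h=1.0):
            """One explicit forward-difference step of the 3D heat equation
            u_t = laplacian(u). Stability requires dt <= h**2 / 6."""
            lap = (-6.0 * u
                   + np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                   + np.roll(u, 1, 2) + np.roll(u, -1, 2)) / h**2
            return u + dt * lap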

  1. 3-D imaging and quantification of graupel porosity by synchrotron-based micro-tomography

    Directory of Open Access Journals (Sweden)

    F. Enzmann

    2011-10-01

    Full Text Available The air bubble structure is an important parameter for determining the radiation properties of graupel and hailstones. For 3-D imaging of this structure at micron resolution, a cryo-stage was developed. This stage was used at the tomography beamline of the Swiss Light Source (SLS) synchrotron facility. The cryo-stage setup provides, for the first time, 3-D data on the individual pore morphology of ice particles down to infrared wavelength resolution. In the present study, both sub-mm size natural ice particles and artificial ice particles rimed in a wind tunnel were investigated. In the natural rimed ice particles, Y-shaped air-filled closed pores were found. When kept for half an hour at −8 °C, this morphology transformed into smaller and more rounded voids well known from the literature. Therefore, these round structures seem to represent an artificial rather than an in situ pore structure, in contrast to the Y-shaped structures found in the natural ice particles. Hence, for morphological studies on natural ice samples, special care must be taken to minimize any thermal cycling between sampling and measurement, with the least artifact production at liquid nitrogen temperatures.

  2. 3D Curvelet-Based Segmentation and Quantification of Drusen in Optical Coherence Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Esmaeili

    2017-01-01

    Full Text Available Spectral-Domain Optical Coherence Tomography (SD-OCT) is a widely used interferometric diagnostic technique in ophthalmology that provides novel in vivo information on depth-resolved inner and outer retinal structures. This imaging modality can assist clinicians in monitoring the progression of Age-related Macular Degeneration (AMD) by providing high-resolution visualization of drusen. Quantitative tools for assessing drusen volume that are indicative of AMD progression may lead to appropriate metrics for selecting treatment protocols. To address this need, a fully automated algorithm was developed to segment drusen area and volume from SD-OCT images. The proposed algorithm consists of three parts: (1) preprocessing, which includes creating a binary mask and removing the possible highly reflective posterior hyaloid, and which is used for accurate detection of the inner segment/outer segment (IS/OS) junction layer and the Bruch's membrane (BM) retinal layer; (2) coarse segmentation, in which the 3D curvelet transform and graph theory are employed to obtain the candidate drusenoid regions; (3) fine segmentation, in which morphological operators are used to remove falsely extracted elongated structures and obtain the refined segmentation results. The proposed method was evaluated on 20 publicly available volumetric scans acquired using the Bioptigen spectral-domain ophthalmic imaging system. The average true positive and false positive volume fractions (TPVF and FPVF) for the segmentation of drusenoid regions were found to be 89.15% ± 3.76% and 0.17% ± 0.18%, respectively.
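
    The reported evaluation metrics, the true and false positive volume fractions, are easily computed from binary masks; in the sketch below the whole scan volume outside the ground truth is taken as the reference region for the FPVF, which is one common convention:

        import numpy as np

        def tpvf_fpvf(seg, gt):
            """True/false positive volume fractions (in %) of a binary
            segmentation against a non-empty binary ground-truth mask."""
            seg, gt = seg.astype(bool), gt.astype(bool)
            tpvf = 100.0 * np.logical_and(seg, gt).sum() / gt.sum()
            fpvf = 100.0 * np.logical_and(seg, ~gt).sum() / (~gt).sum()
            return tpvf, fpvf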

  3. Cloud-Based Geospatial 3D Image Spaces—A Powerful Urban Model for the Smart City

    Directory of Open Access Journals (Sweden)

    Stephan Nebiker

    2015-10-01

    Full Text Available In this paper, we introduce the concept and an implementation of geospatial 3D image spaces as a new type of native urban model. 3D image spaces are based on collections of georeferenced RGB-D imagery. This imagery is typically acquired using multi-view stereo mobile mapping systems capturing dense sequences of street-level imagery. Ideally, image depth information is derived using dense image matching. This delivers a very dense depth representation and ensures the spatial and temporal coherence of radiometric and depth data. The result is a high-definition WYSIWYG (“what you see is what you get”) urban model, which is intuitive to interpret and easy to interact with, and which provides powerful augmentation and 3D measuring capabilities. Furthermore, we present a scalable cloud-based framework for generating 3D image spaces of entire cities or states and a client architecture for their web-based exploitation. The model and the framework strongly support the smart city notion of efficiently connecting the urban environment and its processes with experts and citizens alike. In the paper we particularly investigate quality aspects of the urban model, namely the obtainable georeferencing accuracy and the quality of the depth map extraction. We show that our image-based georeferencing approach is capable of improving the original direct georeferencing accuracy by an order of magnitude and that the presented new multi-image matching approach provides high accuracies along with a significantly improved completeness of the depth maps.

  4. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masami [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan)], E-mail: msm-ando@rs.noda.tus.ac.jp; Bando, Hiroko [Tsukuba University (Japan); Tokiko, Endo; Ichihara, Shu [Nagoya Medical Center (Japan); Hashimoto, Eiko [GUAS (Japan); Hyodo, Kazuyuki [KEK (Japan); Kunisada, Toshiyuki [Okayama University (Japan); Li Gang [BSRF (China); Maksimenko, Anton [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan); KEK (Japan); Mori, Kensaku [Nagoya University (Japan); Shimao, Daisuke [IPU (Japan); Sugiyama, Hiroshi [KEK (Japan); Yuasa, Tetsuya [Yamagata University (Japan); Ueno, Ei [Tsukuba University (Japan)

    2008-12-15

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis based on refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcifications of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed using an algorithm newly developed by our group. It shows the distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of the clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and skin radiation dose.

  5. DiAna, an ImageJ tool for object-based 3D co-localization and distance analysis

    OpenAIRE

    2016-01-01

    International audience; We present a new plugin for ImageJ called DiAna, for Distance Analysis, which comes with a user-friendly interface. DiAna proposes robust and accurate 3D segmentation for object extraction. The plugin performs automated object-based co-localization and distance analysis. DiAna offers an in-depth analysis of co-localization between objects and retrieves 3D measurements including co-localizing volumes and surfaces of contact. It also computes the distribution of distance...

  6. Correlation based 3-D segmentation of the left ventricle in pediatric echocardiographic images using radio-frequency data.

    Science.gov (United States)

    Nillesen, Maartje M; Lopata, Richard G P; Huisman, H J; Thijssen, Johan M; Kapusta, Livia; de Korte, Chris L

    2011-09-01

    Clinical diagnosis of heart disease might be substantially supported by automated segmentation of the endocardial surface in three-dimensional (3-D) echographic images. Because of the poor echogenicity contrast between blood and myocardial tissue in some regions and the inherent speckle noise, automated analysis of these images is challenging. A priori knowledge on the shape of the heart cannot always be relied on, e.g., in children with congenital heart disease, segmentation should be based on the echo features solely. The objective of this study was to investigate the merit of using temporal cross-correlation of radio-frequency (RF) data for automated segmentation of 3-D echocardiographic images. Maximum temporal cross-correlation (MCC) values were determined locally from the RF-data using an iterative 3-D technique. MCC values as well as a combination of MCC values and adaptive filtered, demodulated RF-data were used as an additional, external force in a deformable model approach to segment the endocardial surface and were tested against manually segmented surfaces. Results on 3-D full volume images (Philips, iE33) of 10 healthy children demonstrate that MCC values derived from the RF signal yield a useful parameter to distinguish between blood and myocardium in regions with low echogenicity contrast and incorporation of MCC improves the segmentation results significantly. Further investigation of the MCC over the whole cardiac cycle is required to exploit the full benefit of it for automated segmentation.

  7. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data.

    Science.gov (United States)

    Savva, Antonis D; Economopoulos, Theodore L; Matsopoulos, George K

    2016-02-01

    Spatial alignment of Computed Tomography (CT) data sets is often required in numerous medical applications and it is usually achieved by applying conventional exhaustive registration techniques, which are mainly based on the intensity of the subject data sets. Those techniques consider the full range of data points composing the data, thus negatively affecting the required processing time. Alternatively, alignment can be performed using the correspondence of extracted data points from both sets. Moreover, various geometrical characteristics of those data points can be used, instead of their chromatic properties, for uniquely characterizing each point, by forming a specific geometrical descriptor. This paper presents a comparative study reviewing variations of geometry-based, descriptor-oriented registration techniques, as well as conventional, exhaustive, intensity-based methods for aligning three-dimensional (3D) CT data pairs. In this context, three general image registration frameworks were examined: a geometry-based methodology featuring three distinct geometrical descriptors, an intensity-based methodology using three different similarity metrics, as well as the commonly used Iterative Closest Point algorithm. All techniques were applied on a total of thirty 3D CT data pairs with both known and unknown initial spatial differences. After an extensive qualitative and quantitative assessment, it was concluded that the proposed geometry-based registration framework performed similarly to the examined exhaustive registration techniques. In addition, geometry-based methods dramatically improved processing time over conventional exhaustive registration.

  8. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  9. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  10. Dixon imaging-based partial volume correction improves quantification of choline detected by breast 3D-MRSI

    Energy Technology Data Exchange (ETDEWEB)

    Minarikova, Lenka; Gruber, Stephan; Bogner, Wolfgang; Trattnig, Siegfried; Chmelik, Marek [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, MR Center of Excellence, Vienna (Austria); Pinker-Domenig, Katja; Baltzer, Pascal A.T.; Helbich, Thomas H. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria)

    2014-09-14

    Our aim was to develop a partial volume (PV) correction method for choline (Cho) signals detected by breast 3D magnetic resonance spectroscopic imaging (3D-MRSI), using information from water/fat Dixon MRI. Following institutional review board approval, five breast cancer patients were measured at 3 T. 3D-MRSI (1 cm³ resolution, duration ≈11 min) and Dixon MRI (1 mm³, ≈2 min) were measured in vivo and in phantoms. Glandular/lesion tissue was segmented from the water/fat Dixon MRI and transformed to match the resolution of the 3D-MRSI. The resulting PV values were used to correct the Cho signals. Our method was validated on a two-compartment phantom (choline/water and oil). PV values were correlated with the spectroscopic water signal. Cho signal variability, caused by partial water/fat content, was tested in 3D-MRSI voxels located in/near malignant lesions. Phantom measurements showed good correlation (r = 0.99) with the quantified 3D-MRSI water signals, and better homogeneity after correction. The dependence of the quantified Cho signal on the water/fat voxel composition was significantly (p < 0.05) reduced using Dixon MRI-based PV correction, compared to the original uncorrected data (1.60-fold to 3.12-fold) in patients. The proposed method allows quantification of the Cho signal in glandular/lesion tissue independent of the water/fat composition in breast 3D-MRSI. This can improve the reproducibility of breast 3D-MRSI, which is particularly important for therapy monitoring. (orig.)
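
    The correction itself is a per-voxel rescaling of the quantified Cho map by the glandular (water) volume fraction obtained from the segmented Dixon images after resampling to the MRSI grid. A schematic sketch (the masking threshold is an assumption, not a value from the paper):

        import numpy as np

        def pv_correct_cho(cho, water_fraction, min_fraction=0.1):
            """Divide the quantified Cho map by the glandular (water) volume
            fraction at MRSI resolution; voxels containing almost no glandular
            tissue are masked out to avoid amplifying noise."""
            wf = np.clip(water_fraction, 0.0, 1.0)
            return np.where(wf >= min_fraction,
                            cho / np.maximum(wf, 1e-6),
                            np.nan)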

  11. Registration of 2D to 3D joint images using phase-based mutual information

    Science.gov (United States)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.

  12. [Hyperspectral image classification based on 3-D gabor filter and support vector machines].

    Science.gov (United States)

    Feng, Xiao; Xiao, Peng-feng; Li, Qi; Liu, Xiao-xi; Wu, Xiao-cui

    2014-08-01

    A three-dimensional Gabor filter was developed for the classification of hyperspectral remote sensing images. The method is based on the characteristics of hyperspectral images and the principle of texture extraction with 2-D Gabor filters. A three-dimensional Gabor filter is able to filter all the bands of a hyperspectral image simultaneously, capturing specific responses at different scales, orientations, and spectral-dependent properties from the enormous image information, which greatly reduces the time consumed by hyperspectral image texture extraction and solves the overlay difficulties of the filtered spectra. Using the designed three-dimensional Gabor filters at different scales and orientations, a Hyperion image covering a typical area of Qi Lian Mountain was processed with all bands to obtain 26 Gabor texture features, and the spatial differences of the Gabor texture features corresponding to each land type were analyzed. On the basis of automatic subspace separation, the dimensionality of the hyperspectral image was reduced by the band index (BI) method, which provides different band combinations for classification, in order to search for the optimal magnitude of dimension reduction. Adding the three-dimensional Gabor texture features successively according to their discrimination of the given land types, supervised classification was carried out with a support vector machine (SVM) classifier. It is shown that the method using three-dimensional Gabor texture features and BI band selection based on automatic subspace separation for hyperspectral image classification can not only reduce dimensionality but also improve the classification accuracy and efficiency of hyperspectral images.
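
    A 3D Gabor kernel is a Gaussian envelope modulated by a plane wave whose direction spans the two spatial axes and the spectral axis of the hyperspectral cube. A minimal generator is sketched below (a single isotropic sigma is used for simplicity; the paper's filter bank varies scale and orientation). Texture features are then obtained by convolving the full cube with each kernel, e.g. with scipy.ndimage.convolve, and feeding the responses to the SVM:

        import numpy as np

        def gabor_3d(size=15, sigma=3.0, freq=0.2, theta=0.0, phi=0.0):
            """Real part of a 3D Gabor kernel: a Gaussian envelope times a plane
            wave whose direction is set by azimuth theta and elevation phi."""
            r = np.arange(size) - size // 2
            z, y, x = np.meshgrid(r, r, r, indexing="ij")
            # unit vector of the modulation direction (z = spectral axis)
            u = np.array([np.sin(phi),
                          np.cos(phi) * np.sin(theta),
                          np.cos(phi) * np.cos(theta)])
            envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
            carrier = np.cos(2.0 * np.pi * freq * (u[0]*z + u[1]*y + u[2]*x))
            return envelope * carrier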

  13. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Hurwitz, M; Lewis, J [Brigham and Women' s Hospital, Dana-Farber Cancer Center, Harvard Medical School, Boston, MA (United States); Mishra, P [Varian Medical Systems, Palo Alto, CA (United States)

    2014-06-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
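
    The motion model itself is compact: the displacement vector fields from registering each 4DCBCT phase to a reference phase are stacked, PCA keeps a few principal modes, and the mode coefficients are later optimized against the measured kV projections. A sketch of the model-building and synthesis steps (array shapes are assumptions for illustration):

        import numpy as np

        def build_pca_motion_model(dvfs, n_modes=3):
            """dvfs: (n_phases, n_voxels*3) displacement vector fields from the
            deformable registration of each phase to a reference phase."""
            mean = dvfs.mean(axis=0)
            _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
            return mean, vt[:n_modes]            # mean field + principal modes

        def synthesize_dvf(mean, modes, coeffs):
            """A candidate 3D displacement field is the mean plus a weighted sum
            of the modes; the weights are the PCA coefficients optimized against
            the measured cone-beam projections."""
            return mean + coeffs @ modes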

  14. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Full Text Available Three-dimensional (3D) imaging technology based on antenna arrays is one of the most important 3D synthetic aperture radar (SAR) high-resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR, based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in the cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles at the same range resolution cell, which differs from conventional squint SAR. However, after theoretical analysis and formula derivation, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and the algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transforms and multiplications, and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  15. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  16. Proposed NRC portable target case for short-range triangulation-based 3D imaging systems characterization

    Science.gov (United States)

    Carrier, Benjamin; MacKinnon, David; Cournoyer, Luc; Beraldin, J.-Angelo

    2011-03-01

    The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and provide a clearly defined set of procedures for generating characteristic values using these artifacts. In its current version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3 meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based either on procedures already in use or on knowledge acquired from various tests carried out by the NRC. Geometric dimensioning and tolerancing (GD&T), a terminology well known in the industrial field, was used to define the set of tests. The following parameters of a system are characterized: dimensional properties, form properties, orientation properties, localization properties, profile properties, repeatability, intermediate precision, and reproducibility. A number of tests were performed in a special dimensional metrology laboratory to validate the capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison evaluation to validate its use in different laboratories.

  17. 3D mapping of buried underworld infrastructure using dynamic Bayesian network based multi-sensory image data fusion

    Science.gov (United States)

    Dutta, Ritaban; Cohn, Anthony G.; Muggleton, Jen M.

    2013-05-01

    The successful operation of buried infrastructure within urban environments is fundamental to the conservation of modern living standards. In this paper, a novel multi-sensor image fusion framework based on a dynamic Bayesian network is proposed and investigated for the automatic detection of buried underworld infrastructure. Experimental multi-sensor images were acquired for a known buried plastic water pipe using vibro-acoustic sensor-based location methods and a ground penetrating radar imaging system. Computationally intelligent conventional image processing techniques were used to process the three types of sensory images. Independently extracted depth and location information from the different images regarding the target pipe were fused together using the dynamic Bayesian network to predict the most probable location and depth of the pipe. The outcome of this study was very encouraging, as the approach was able to detect the target pipe with high accuracy compared with the currently existing pipe survey map. The approach was also applied successfully to produce a best probable 3D buried asset map.

  18. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    NARCIS (Netherlands)

    Mata Pavia, J.; Charbon, E.; Wolf, M.

    2011-01-01

    An imager for optical tomography was designed based on a detector with 128x128 single-photon pixels that included a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time resolved measurements, a new contact-less setup has been conceived in w

  19. Model-based 3D segmentation of the bones of joints in medical images

    Science.gov (United States)

    Liu, Jiamin; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey; Hirsch, Bruce E.; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A.

    2005-04-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of acquired images of the joint under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. A model-based strategy is proposed in this paper wherein a rigid model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. In other images of the joint, this model is used to search for the same bone by minimizing an energy functional that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations yielding true positive and false positive volume fractions in the range 89-97% and 0.2-0.7%. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time, which makes it significantly more efficient than live wire - the only method currently available for the task.

  20. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Science.gov (United States)

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of the raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process to address misalignment problems among image series, caused by mechanical manufacturing errors, thermal expansion, and other external factors, has been considered, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. Remarkably improved reconstruction results were obtained at the Beijing Synchrotron Radiation Facility after image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are always present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the activity of the users faster and easier during experimental runs.

  1. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    Science.gov (United States)

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) based on total variation regularization and the piecewise-constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artifacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data in 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into the computed tomography problem and transformed, by introducing auxiliary variables, into an optimization over three independent variables. The new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. Both the computational efficiency and the achievable resolution of the algorithm should be investigated further in application-oriented research.
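
    The abstract notes that auxiliary-variable splitting makes an FFT-based analytical solution of a subproblem feasible. The sketch below illustrates that generic idea only, not the authors' TGV algorithm: it solves a quadratic subproblem of the form (I + rho*D^T D)x = b in the Fourier domain, assuming periodic boundary conditions so that the finite-difference operator is diagonalized by the DFT.

        import numpy as np

        # Minimal sketch (not the authors' full TGV solver): in ADMM-type splitting
        # schemes, the quadratic image-update step often reduces to solving
        # (I + rho * D^T D) x = b, where D is a finite-difference operator.  Under
        # periodic boundary conditions D^T D is diagonalised by the FFT, so the
        # solve becomes a pointwise division in the frequency domain.
        def fft_quadratic_solve(b, rho):
            n0, n1 = b.shape
            # Eigenvalues of the 2D discrete Laplacian (D^T D) under periodic BCs.
            f0 = np.fft.fftfreq(n0)
            f1 = np.fft.fftfreq(n1)
            lam = (2 - 2 * np.cos(2 * np.pi * f0))[:, None] + \
                  (2 - 2 * np.cos(2 * np.pi * f1))[None, :]
            return np.real(np.fft.ifft2(np.fft.fft2(b) / (1.0 + rho * lam)))

        rhs = np.random.rand(128, 128)          # stand-in right-hand side
        x = fft_quadratic_solve(rhs, rho=5.0)   # analytical solution of the subproblem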

  2. A new navigation approach of terrain contour matching based on 3-D terrain reconstruction from onboard image sequence

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This article presents a passive navigation method of terrain contour matching by reconstructing the 3-D terrain from the image sequence acquired by the onboard camera. To achieve automation and simultaneity of the image sequence processing for navigation, a correspondence registration method based on control-point tracking is proposed, which tracks sparse control points through the whole image sequence and uses them as correspondences in the relative geometry solution. In addition, a key-frame selection method based on the image overlapping ratios and intersection angles is explored, and the resulting requirement for the camera system configuration is provided. The proposed method also includes an optimal local homography estimation algorithm based on the control points, which helps correctly predict the points to be matched and speeds up their correspondence. Consequently, the real-time 3-D terrain of the trajectory thus reconstructed is matched with the reference terrain map, and the matching result provides navigation information. A digital simulation experiment and a real-image-based experiment have verified the proposed method.

  3. A Chaos-based Image Encryption Scheme Using 3D Skew Tent Map and Coupled Map Lattice

    Directory of Open Access Journals (Sweden)

    Ruisong Ye

    2012-02-01

    This paper proposes a chaos-based image encryption scheme in which a 3D skew tent map with three control parameters is used to generate chaotic orbits that scramble the pixel positions, while a coupled map lattice is employed to yield random gray-value sequences that change the gray values, thereby enhancing security. Experiments with detailed analysis have been carried out to demonstrate that the proposed image encryption scheme possesses a large key space to resist brute-force attack and good statistical properties to frustrate statistical analysis attacks. Experiments are also performed to illustrate the robustness against malicious attacks such as cropping, noising, and JPEG compression.
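
    As a hedged illustration of the permutation stage only, the sketch below iterates a 1D skew tent map (the paper uses a 3D skew tent map with three control parameters) and uses the sort order of the chaotic orbit to scramble pixel positions; the coupled-map-lattice diffusion stage is not reproduced, and the parameter values are arbitrary.

        import numpy as np

        # Illustrative sketch only: a 1D skew tent map is iterated to produce a
        # chaotic orbit whose sort order defines a key-dependent pixel permutation.
        def skew_tent_orbit(x0, p, n):
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = x / p if x < p else (1.0 - x) / (1.0 - p)
                xs[i] = x
            return xs

        def scramble(image, x0=0.37, p=0.61):
            flat = image.ravel()
            order = np.argsort(skew_tent_orbit(x0, p, flat.size))  # permutation from the orbit
            return flat[order].reshape(image.shape), order

        def unscramble(scrambled, order):
            flat = np.empty(scrambled.size, dtype=scrambled.dtype)
            flat[order] = scrambled.ravel()                         # invert the permutation
            return flat.reshape(scrambled.shape)

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)
        enc, perm = scramble(img)
        assert np.array_equal(unscramble(enc, perm), img)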

  4. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    Energy Technology Data Exchange (ETDEWEB)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out I-131 dosimetry for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
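
    The core of a DFT-based dosimetry calculation is the convolution of a cumulated-activity map with a dose point kernel. The sketch below shows that step only, with an arbitrary placeholder kernel rather than a measured I-131 kernel, and is not the authors' validated implementation.

        import numpy as np
        from scipy.signal import fftconvolve

        # Sketch of the kernel-convolution idea behind DFT-based dosimetry: the 3D
        # cumulated-activity map is convolved with a dose point kernel.  The kernel
        # below is an arbitrary isotropic placeholder, not a measured I-131 kernel.
        def make_placeholder_kernel(size=15, voxel_mm=4.0):
            ax = (np.arange(size) - size // 2) * voxel_mm
            x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
            r = np.sqrt(x**2 + y**2 + z**2) + voxel_mm / 2.0   # avoid the singularity at r = 0
            return 1.0 / r**2                                   # crude 1/r^2 fall-off

        activity = np.zeros((64, 64, 64))
        activity[28:36, 28:36, 28:36] = 1.0                     # uniform source region

        dose = fftconvolve(activity, make_placeholder_kernel(), mode="same")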

  5. 3D Image Synthesis for B-Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm based on the AFD-matrix concept is presented for converting an object in 3D space to a 3D image in 3D discrete space.

  6. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises a radiation source, rotationally movable about the object, for irradiating it, and a detector for detecting backscattered radiation from the object, which can be disposed on substantially the same side of the object as the source and which can also be rotationally movable about the object. The detector can be separated into multiple detector segments, with each segment having a single line-of-sight projection through the object so that it detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three-dimensional reconstruction of the object. Other embodiments are described.

  7. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  8. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    Science.gov (United States)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem, related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements on the segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments, confirming the practical value of the algorithm, are documented.

  9. 3D Assessment of Mandibular Growth Based on Image Registration: A Feasibility Study in a Rabbit Model

    Directory of Open Access Journals (Sweden)

    I. Kim

    2014-01-01

    Background. Our knowledge of mandibular growth mostly derives from cephalometric radiography, which has inherent limitations due to the two-dimensional (2D) nature of the measurement. Objective. To assess 3D morphological changes occurring during growth in a rabbit mandible. Methods. Serial cone-beam computerised tomographic (CBCT) images were made of two New Zealand white rabbits, at baseline and eight weeks after surgical implantation of 1 mm diameter metallic spheres as fiducial markers. A third animal acted as an unoperated (no implant) control. CBCT images were segmented and registered in 3D (implant superimposition and Procrustes method), and the remodelling pattern was described using color maps. Registration accuracy was quantified by the maximum of the mean minimum distances and by the Hausdorff distance. Results. The mean error for image registration was 0.37 mm and never exceeded 1 mm. The implant-based superimposition showed that most remodelling occurred at the mandibular ramus, with bone apposition posteriorly and vertical growth at the condyle. Conclusion. We propose a method to quantitatively describe bone remodelling in three dimensions, based on the use of bone implants as fiducial markers and CBCT as the imaging modality. The method is feasible and represents a promising approach for experimental studies, by comparing baseline growth patterns and testing the effects of growth-modification treatments.

  10. Novel methodology for 3D reconstruction of carotid arteries and plaque characterization based upon magnetic resonance imaging carotid angiography data.

    Science.gov (United States)

    Sakellarios, Antonis I; Stefanou, Kostas; Siogkas, Panagiotis; Tsakanikas, Vasilis D; Bourantas, Christos V; Athanasiou, Lambros; Exarchos, Themis P; Fotiou, Evangelos; Naka, Katerina K; Papafaklis, Michail I; Patterson, Andrew J; Young, Victoria E L; Gillard, Jonathan H; Michalis, Lampros K; Fotiadis, Dimitrios I

    2012-10-01

    In this study, we present a novel methodology that allows reliable segmentation of the magnetic resonance images (MRIs) for accurate fully automated three-dimensional (3D) reconstruction of the carotid arteries and semiautomated characterization of plaque type. Our approach uses active contours to detect the luminal borders in the time-of-flight images and the outer vessel wall borders in the T(1)-weighted images. The methodology incorporates the connecting components theory for the automated identification of the bifurcation region and a knowledge-based algorithm for the accurate characterization of the plaque components. The proposed segmentation method was validated in randomly selected MRI frames analyzed offline by two expert observers. The interobserver variability of the method for the lumen and outer vessel wall was -1.60%±6.70% and 0.56%±6.28%, respectively, while the Williams Index for all metrics was close to unity. The methodology implemented to identify the composition of the plaque was also validated in 591 images acquired from 24 patients. The obtained Cohen's k was 0.68 (0.60-0.76) for lipid plaques, while the time needed to process an MRI sequence for 3D reconstruction was only 30 s. The obtained results indicate that the proposed methodology allows reliable and automated detection of the luminal and vessel wall borders and fast and accurate characterization of plaque type in carotid MRI sequences. These features render the currently presented methodology a useful tool in the clinical and research arena.

  11. 3D imaging in forensic odontology.

    Science.gov (United States)

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring, and subsequently forensically analysing, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  12. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...

  13. Graph-cut Based Interactive Segmentation of 3D Materials-Science Images

    Science.gov (United States)

    2014-04-26


  14. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    Science.gov (United States)

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-03-08

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library varied within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell-time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR

  15. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed the annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  16. Micro-computed tomography image-based evaluation of 3D anisotropy degree of polymer scaffolds.

    Science.gov (United States)

    Pérez-Ramírez, Ursula; López-Orive, Jesús Javier; Arana, Estanislao; Salmerón-Sánchez, Manuel; Moratal, David

    2015-01-01

    Anisotropy is one of the most meaningful determinants of biomechanical behaviour. This study employs micro-computed tomography (μCT) and image analysis techniques to assess the anisotropy of regenerative-medicine polymer scaffolds. For this purpose, three three-dimensional anisotropy evaluation image methods were used: ellipsoid of inertia (EI), mean intercept length (MIL) and tensor scale (t-scale). These were applied to three patterns (a sphere, a cube and a right prism) and to two polymer scaffold topologies (cylindrical orthogonal pore mesh and spherical pores). For the patterns, the three methods provided good results. Regarding the scaffolds, EI confused the two topologies (0.0158, [-0.5683; 0.6001]; mean difference and 95% confidence interval), and MIL showed no significant differences (0.3509, [0.0656; 0.6362]). T-scale is the preferable method because it gave the best capability (0.3441, [0.1779; 0.5102]) to differentiate the two topologies. This methodology supports the development of non-destructive tools to engineer biomimetic scaffolds, incorporating anisotropy as a fundamental property to be mimicked from the original tissue and permitting its assessment by means of μCT image analysis.
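
    As a rough sketch of the ellipsoid-of-inertia (EI) idea only, the code below computes the second-moment tensor of the solid voxels of a binary volume and reports 1 - λmin/λmax as an anisotropy degree; the exact EI, MIL and t-scale definitions used in the study are not reproduced.

        import numpy as np

        # Sketch of the ellipsoid-of-inertia (EI) idea: compute the second-moment
        # (covariance) tensor of the solid voxels of a binary volume and derive a
        # scalar anisotropy degree from its eigenvalues.  The exact definitions used
        # in the study may differ; 1 - lambda_min/lambda_max is one common choice.
        def anisotropy_degree(volume):
            coords = np.argwhere(volume).astype(float)          # (N, 3) voxel coordinates
            coords -= coords.mean(axis=0)                       # centre of mass at the origin
            cov = coords.T @ coords / coords.shape[0]           # 3x3 second-moment tensor
            eigvals = np.linalg.eigvalsh(cov)                   # ascending eigenvalues
            return 1.0 - eigvals[0] / eigvals[-1]               # 0 = isotropic, -> 1 = anisotropic

        # A sphere-like blob should score near 0, an elongated rod near 1.
        z, y, x = np.ogrid[-20:21, -20:21, -20:21]
        sphere = (x**2 + y**2 + z**2) <= 15**2
        rod = ((x**2 + y**2) <= 3**2) & (np.abs(z) <= 20)       # broadcast to a full 3D rod
        print(anisotropy_degree(sphere), anisotropy_degree(rod))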

  17. SEE-THROUGH IMAGING OF LASER-SCANNED 3D CULTURAL HERITAGE OBJECTS BASED ON STOCHASTIC RENDERING OF LARGE-SCALE POINT CLOUDS

    OpenAIRE

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-01-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminatin...

  18. Numerical validation framework for micromechanical simulations based on synchrotron 3D imaging

    Science.gov (United States)

    Buljac, Ante; Shakoor, Modesar; Neggers, Jan; Bernacki, Marc; Bouchard, Pierre-Olivier; Helfen, Lukas; Morgeneyer, Thilo F.; Hild, François

    2017-03-01

    A combined computational-experimental framework is introduced herein to validate numerical simulations at the microscopic scale. It is exemplified for a flat specimen with a central hole made of cast iron and imaged via in-situ synchrotron laminography at micrometer resolution during a tensile test. The region of interest in the reconstructed volume, which is close to the central hole, is analyzed by digital volume correlation (DVC) to measure kinematic fields. Finite element (FE) simulations, which account for the studied material microstructure, are driven by Dirichlet boundary conditions extracted from DVC measurements. Gray level residuals for DVC measurements and FE simulations are assessed for validation purposes.

  19. Numerical validation framework for micromechanical simulations based on synchrotron 3D imaging

    Science.gov (United States)

    Buljac, Ante; Shakoor, Modesar; Neggers, Jan; Bernacki, Marc; Bouchard, Pierre-Olivier; Helfen, Lukas; Morgeneyer, Thilo F.; Hild, François

    2016-11-01

    A combined computational-experimental framework is introduced herein to validate numerical simulations at the microscopic scale. It is exemplified for a flat specimen with a central hole made of cast iron and imaged via in-situ synchrotron laminography at micrometer resolution during a tensile test. The region of interest in the reconstructed volume, which is close to the central hole, is analyzed by digital volume correlation (DVC) to measure kinematic fields. Finite element (FE) simulations, which account for the studied material microstructure, are driven by Dirichlet boundary conditions extracted from DVC measurements. Gray level residuals for DVC measurements and FE simulations are assessed for validation purposes.

  20. Recommendations from gynaecological (GYN) GEC ESTRO working group (II): concepts and terms in 3D image-based treatment planning in cervix cancer brachytherapy-3D dose volume parameters and aspects of 3D image-based anatomy, radiation physics, radiobiology.

    Science.gov (United States)

    Pötter, Richard; Haie-Meder, Christine; Van Limbergen, Erik; Barillot, Isabelle; De Brabandere, Marisol; Dimopoulos, Johannes; Dumas, Isabelle; Erickson, Beth; Lang, Stefan; Nulens, An; Petrow, Peter; Rownd, Jason; Kirisits, Christian

    2006-01-01

    The second part of the GYN GEC ESTRO working group recommendations is focused on 3D dose-volume parameters for brachytherapy of cervical carcinoma. Methods and parameters have been developed and validated from dosimetric, imaging and clinical experience from different institutions (University of Vienna, IGR Paris, University of Leuven). Cumulative dose volume histograms (DVH) are recommended for evaluation of the complex dose heterogeneity. DVH parameters for GTV, HR CTV and IR CTV are the minimum dose delivered to 90 and 100% of the respective volume: D90, D100. The volume, which is enclosed by 150 or 200% of the prescribed dose (V150, V200), is recommended for overall assessment of high dose volumes. V100 is recommended for quality assessment only within a given treatment schedule. For Organs at Risk (OAR) the minimum dose in the most irradiated tissue volume is recommended for reporting: 0.1, 1, and 2 cm3; optional 5 and 10 cm3. Underlying assumptions are: full dose of external beam therapy in the volume of interest, identical location during fractionated brachytherapy, contiguous volumes and contouring of organ walls for >2 cm3. Dose values are reported as absorbed dose and also taking into account different dose rates. The linear-quadratic radiobiological model (equivalent dose, EQD2) is applied for brachytherapy and is also used for calculating dose from external beam therapy. This formalism allows systematic assessment within one patient, one centre and comparison between different centres with analysis of dose volume relations for GTV, CTV, and OAR. Recommendations for the transition period from traditional to 3D image-based cervix cancer brachytherapy are formulated. Supplementary data (available in the electronic version of this paper) deals with aspects of 3D imaging, radiation physics, radiation biology, dose at reference points and dimensions and volumes for the GTV and CTV (adding to [Haie-Meder C, Pötter R, Van Limbergen E et al. Recommendations from

  1. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    DEFF Research Database (Denmark)

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr;

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these approac...

  2. Dose optimization in gynecological 3D image-based interstitial brachytherapy using Martinez Universal Perineal Interstitial Template (MUPIT) - an institutional experience

    Directory of Open Access Journals (Sweden)

    Pramod Kumar Sharma

    2014-01-01

    The aim of this study was to evaluate dose optimization in 3D image-based gynecological interstitial brachytherapy using the Martinez Universal Perineal Interstitial Template (MUPIT). The axial CT image data set of 20 patients with gynecological cancer who underwent external radiotherapy and high-dose-rate (HDR) interstitial brachytherapy using MUPIT was employed to delineate the clinical target volume (CTV) and organs at risk (OARs). Geometrical and graphical optimization were done for optimum CTV coverage and sparing of OARs. The coverage index (CI), dose homogeneity index (DHI), overdose index (OI), dose non-uniformity ratio (DNR), external volume index (EI), conformity index (COIN) and the dose-volume parameters recommended by GEC-ESTRO were evaluated. The mean CTV, bladder and rectum volumes were 137 ± 47 cc, 106 ± 41 cc and 50 ± 25 cc, respectively. Mean CI, DHI and DNR were 0.86 ± 0.03, 0.69 ± 0.11 and 0.31 ± 0.09, while the mean OI, EI, and COIN were 0.08 ± 0.03, 0.07 ± 0.05 and 0.79 ± 0.05, respectively. The estimated mean CTV D90 was 76 ± 11 Gy and D100 was 63 ± 9 Gy. The dosimetric parameters of the bladder D2cc, D1cc and D0.1cc were 76 ± 11 Gy, 81 ± 14 Gy, and 98 ± 21 Gy, and of the rectum/recto-sigmoid 80 ± 17 Gy, 85 ± 13 Gy, and 124 ± 37 Gy, respectively. Dose optimization yields superior coverage with optimal values of the indices. Emerging data on 3D image-based brachytherapy, with reporting and clinical correlation of DVH parameter outcomes, are encouraging and provide definite assistance in improving the quality of brachytherapy implants. The DVH parameter for the urethra in gynecological implants needs to be defined further.

  3. A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Zhu, Xinjun; Chen, Xia; Su, Yonggang; Cai, Yuanxue

    2016-11-01

    The orthogonal fringe projection technique has found wide practical application nowadays. In this paper, we propose a 3D shape retrieval method for orthogonal composite fringe projection based on a combination of variational image decomposition (VID) and variational mode decomposition (VMD). We propose a new image decomposition model to extract the orthogonal fringe. Then we introduce the VMD method to separate the horizontal and vertical fringes from the orthogonal fringe. Lastly, the 3D shape information is obtained by the differential 3D shape retrieval method (D3D). We test the proposed method on a simulated pattern and on two actual objects with edges or abrupt changes in height, and compare it with the recent, related and advanced D3D method in terms of both quantitative evaluation and visual quality. The experimental results demonstrate the validity of the proposed method.

  4. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    Science.gov (United States)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies without under- or over-treatment. Our scheme analyses CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting the patient's recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize the time interval changes of pulmonary nodules.
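
    A hedged sketch of the histogram-modelling step only: a variational Bayesian Gaussian mixture (here scikit-learn's BayesianGaussianMixture) is fitted to the CT values inside a hypothetical nodule mask and summarised into a crude scalar score. The feature choice and the mapping to a recurrence-risk score are placeholders, not the authors' CAF model.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        # Illustrative sketch only: fit a variational Bayesian Gaussian mixture to
        # the CT values inside a (hypothetical) nodule mask, then summarise the
        # fitted components into a crude scalar score.
        rng = np.random.default_rng(0)
        nodule_hu = np.concatenate([rng.normal(-650, 60, 400),    # ground-glass component
                                    rng.normal(40, 30, 150)])      # solid component

        vbgmm = BayesianGaussianMixture(n_components=4, random_state=0).fit(nodule_hu[:, None])
        weights = vbgmm.weights_
        means = vbgmm.means_.ravel()

        # Toy "risk score": total weight of mixture components denser than -300 HU.
        risk_score = float(weights[means > -300].sum())
        print(f"solid-fraction surrogate risk score: {risk_score:.2f}")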

  5. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    Science.gov (United States)

    Li, Lei-Gang; Liang, Jin; Guo, Xiang; Guo, Cheng; Hu, Hao; Tang, Zheng-Zong

    2014-06-01

    In this paper, a new non-contact scheme based on 3D digital image correlation is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, each rigidly connected and calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern is designed to suit the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft's vertical tail at all times, a camera position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orient all the cameras' exterior parameters to a unified coordinate system in real time. In addition, owing to the strongly inclined camera axes and the vertical camera arrangement, only a weak correlation exists between the high-position and low-position images. A new dual-temporal efficient matching method, combining the principle of seed-point spreading, is therefore proposed to achieve the matching of weakly correlated images. A novel system is developed, and a simulation test in the laboratory was carried out to verify the proposed scheme.

  6. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Directory of Open Access Journals (Sweden)

    Stella Macheridis

    2015-12-01

    During the last twenty years, archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods of this digital change have also strongly impacted archaeology. Image-based 3D modeling is becoming more common for documenting archaeological features but is still not implemented as standard in field excavation projects. When it comes to integrating zooarchaeological perspectives into the interpretational process in the field, this type of documentation is a powerful tool, especially regarding visualization related to reconstruction and resolution. Moreover, with the implementation of image-based 3D modeling, the use of digital documentation in the field has been proven to be time- and cost-effective (e.g., De Reu et al. 2014; De Reu et al. 2013; Dellepiane et al. 2013; Verhoeven et al. 2012). Few studies have been published on the digital documentation of faunal remains in archaeological contexts. As a case study, the excavation of the infill of a clay bin from building 102 in the Neolithic settlement of Çatalhöyük is presented. Alongside traditional documentation, the infill was photographed in sequence at every second centimeter of soil removal. The photographs were processed with Agisoft Photoscan. Seven models were made, enabling reconstruction of the excavation of this context. This technique can be a powerful documentation tool, including for recording observations of zooarchaeological significance, such as markers of taphonomic processes. An important methodological advantage in this regard is the potential to measure bones in situ for analysis after excavation.

  7. 3D nanoscale imaging of biological samples with laboratory-based soft X-ray sources

    Science.gov (United States)

    Dehlinger, Aurélie; Blechschmidt, Anne; Grötzsch, Daniel; Jung, Robert; Kanngießer, Birgit; Seim, Christian; Stiel, Holger

    2015-09-01

    In microscopy, where the theoretical resolution limit depends on the wavelength of the probing light, radiation in the soft X-ray regime can be used to analyze samples that cannot be resolved with visible-light microscopes. In the case of soft X-ray microscopy in the water window, the energy range of the radiation lies between the absorption edges of carbon (at 284 eV, 4.36 nm) and oxygen (543 eV, 2.34 nm). As a result, carbon-based structures, such as biological samples, possess strong absorption, whereas, e.g., water is more transparent to this radiation. Microscopy in the water window therefore allows the structural investigation of aqueous samples with resolutions of a few tens of nanometers and a penetration depth of up to 10 μm. The development of highly brilliant laser-produced plasma sources has enabled the transfer of X-ray microscopy, formerly bound to synchrotron sources, to the laboratory, which opens access to this method for a broader scientific community. The Laboratory Transmission X-ray Microscope at the Berlin Laboratory for innovative X-ray technologies (BLiX) runs with a laser-produced nitrogen plasma that emits radiation in the soft X-ray regime. The mentioned high penetration depth can be exploited to analyze biological samples in their natural state and at several projection angles. The obtained tomogram is the key to a more precise and global analysis of samples originating from various fields of life science.

  8. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Science.gov (United States)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  9. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultrasound imaging result in expensive systems, which limits the more wide-spread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce...

  10. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    CERN Document Server

    Li, Ruijiang; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-01-01

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied on the reference image, lead to new volumetric images. We then can reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. Our algorithm was implemented on graphics processing units...
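
    The sketch below illustrates the PCA-of-DVFs idea with a toy forward model: deformation fields are compressed by PCA, and the PCA coefficients are optimized so that a simulated projection matches a measured one. The additive "deformation" and the sum-along-one-axis "projection" are stand-ins for the real image warping and x-ray projection operators described in the record.

        import numpy as np
        from scipy.optimize import minimize

        # Minimal sketch of the PCA-of-DVFs idea (toy forward model only).
        rng = np.random.default_rng(1)
        n_phase, shape = 8, (16, 16, 16)
        reference = rng.random(shape)
        dvfs = rng.normal(size=(n_phase - 1, np.prod(shape)))     # stand-in "DVFs"

        # PCA of the training DVFs via SVD; keep the first 3 eigenvectors.
        mean_dvf = dvfs.mean(axis=0)
        _, _, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
        basis = vt[:3]

        def volume_from_coeffs(c):
            return reference + (mean_dvf + c @ basis).reshape(shape)   # additive "deformation"

        def project(volume):
            return volume.sum(axis=0)                                  # toy projection operator

        measured = project(volume_from_coeffs(np.array([0.4, -0.2, 0.1])))

        cost = lambda c: np.sum((project(volume_from_coeffs(c)) - measured) ** 2)
        fit = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
        print("recovered PCA coefficients:", np.round(fit.x, 2))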

  11. Volumetric label-free imaging and 3D reconstruction of mammalian cochlea based on two-photon excitation fluorescence microscopy

    Science.gov (United States)

    Zhang, Xianzeng; Geng, Yang; Ye, Qing; Zhan, Zhenlin; Xie, Shusen

    2013-11-01

    The visualization of the delicate structure and spatial relationships of intracochlear sensory cells has relied on the laborious procedures of tissue excision, fixation, sectioning and staining for light and electron microscopy. Confocal microscopy is advantageous for its high resolution and deep penetration depth, yet disadvantageous due to the necessity of exogenous labeling. In this study, we present volumetric imaging of the rat cochlea without exogenous dyes, using a near-infrared femtosecond laser as the excitation mechanism and endogenous two-photon excitation fluorescence (TPEF) as the contrast mechanism. We find that TPEF exhibits strong contrast, allowing cellular and even subcellular resolution imaging of the cochlea, differentiating cell types, and visualizing delicate structures and the radial nerve fibers. Our results further demonstrate that 3D reconstruction rendered from z-stacks of optical sections reveals fine structures and spatial relationships more clearly and allows morphometric analysis to be performed easily. The TPEF-based optical biopsy technique has great potential as a new and sensitive diagnostic tool for hearing loss or hearing disorders, especially when combined with fiber-based microendoscopy.

  12. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  13. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
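
    A minimal sketch of the layer-wise porosity evaluation, assuming an already-segmented binary volume (True = pore voxel); the thresholding and the average pore-size estimation described in the record are not reproduced.

        import numpy as np

        # Minimal sketch: given a segmented binary volume (True = pore voxel),
        # compute the porosity of each layer orthogonal to the membrane surface.
        def layer_porosity(binary_volume, axis=0):
            """Fraction of pore voxels in each slice along `axis`."""
            pore_counts = binary_volume.sum(axis=tuple(i for i in range(3) if i != axis))
            voxels_per_slice = binary_volume.size / binary_volume.shape[axis]
            return pore_counts / voxels_per_slice

        volume = np.random.rand(50, 256, 256) < 0.35     # stand-in segmented image stack
        print(layer_porosity(volume)[:5])                # porosity of the first 5 layers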

  14. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
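
    The sketch below illustrates only the first two stages described above: extracting a viewpoint image from an integral image (assuming square micro-images of size m x m) and applying a single-level 2D Haar DWT. The 3D-DCT of the assembled low bands, quantization, and entropy coding are omitted.

        import numpy as np

        # Sketch only: extract one viewpoint image from an integral image (assuming
        # square micro-images of size m x m) and apply a single-level 2D Haar DWT.
        def extract_viewpoint(integral_img, m, u, v):
            """Take pixel (u, v) from every m x m micro-image."""
            return integral_img[u::m, v::m]

        def haar_dwt2(img):
            img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
            a, b = img[0::2, :], img[1::2, :]                 # pair up rows
            lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0
            def split_cols(x):
                return (x[:, 0::2] + x[:, 1::2]) / 2.0, (x[:, 0::2] - x[:, 1::2]) / 2.0
            ll, lh = split_cols(lo_r)
            hl, hh = split_cols(hi_r)
            return ll, (lh, hl, hh)                           # low band + detail bands

        integral = np.random.rand(512, 512)                   # stand-in integral image
        view = extract_viewpoint(integral, m=8, u=3, v=3)     # one of 64 viewpoint images
        ll, details = haar_dwt2(view)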

  15. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  16. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and noted LI. Because the set LI is often bulky, we introduce the X-means technique to reduce its size, instead of the K-means used in the old version. A 2D binary image descriptor is introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.

  17. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  18. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

    We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP) across a 2-D seismic array and 5 profiles in the Coachella Valley to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and on a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match of the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as being caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of the known locations and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model, particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven station-source distribution. To better overcome this challenge, we also include the first-arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. Wei, S., et al., 2013, Complementary slip distributions

  19. High resolution 3-D wavelength diversity imaging

    Science.gov (United States)

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data is fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. Equipment configuration for the implementation of the AMTDR technique is also given together with a measurement configuration for the implementation of wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. Extension of the theory presented to 3-D tomographic imaging of passive noise emitting objects by spectrally selective far field cross-correlation measurements is also given. Finally several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvement in performance and retrieved image quality.

  20. Visual grading of 2D and 3D functional MRI compared with image-based descriptive measures

    Energy Technology Data Exchange (ETDEWEB)

    Ragnehed, Mattias [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Department of Medical and Health Sciences, Division of Radiological Sciences/Radiology, Faculty of Health Sciences, Linkoeping (Sweden); Leinhard, Olof Dahlqvist; Pihlsgaard, Johan; Lundberg, Peter [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Division of Radiological Sciences, Radiation Physics, IMH, Linkoeping (Sweden); Wirell, Staffan [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Soekjer, Hannibal; Faegerstam, Patrik [Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Jiang, Bo [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Smedby, Oerjan; Engstroem, Maria [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden)

    2010-03-15

    A prerequisite for successful clinical use of functional magnetic resonance imaging (fMRI) is the selection of an appropriate imaging sequence. The aim of this study was to compare 2D and 3D fMRI sequences using different image quality assessment methods. Descriptive image measures, such as activation volume and temporal signal-to-noise ratio (TSNR), were compared with results from visual grading characteristics (VGC) analysis of the fMRI results. Significant differences in activation volume and TSNR were not directly reflected by differences in VGC scores. The results suggest that better performance on descriptive image measures is not always an indicator of improved diagnostic quality of the fMRI results. In addition to descriptive image measures, it is important to include measures of diagnostic quality when comparing different fMRI data acquisition methods. (orig.)

  1. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    Science.gov (United States)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano-focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, the heat-affected zone, and dendrites in a laser-assisted 3D-printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during an experiment is crucial (for instance, time-resolved non-reversible experiments).
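
    As an illustration of the real-time mapping idea only, the sketch below computes, for each Laue pattern in a raster scan, the average recorded intensity and the average intensity after a simple background suppression, and arranges both on the scan grid; the median-filter background model and the frame sizes are assumptions, not the authors' processing chain.

        import numpy as np
        from scipy.ndimage import median_filter

        # Sketch: per-pattern average intensity and average background-suppressed
        # intensity, arranged on the raster-scan grid for real-time visualization.
        def pattern_scores(pattern):
            background = median_filter(pattern, size=15)
            peaks_only = np.clip(pattern - background, 0, None)    # keep diffraction peaks
            return pattern.mean(), peaks_only.mean()

        ny, nx = 10, 12                                            # raster-scan dimensions
        avg_map = np.zeros((ny, nx))
        filtered_map = np.zeros((ny, nx))
        for iy in range(ny):
            for ix in range(nx):
                frame = np.random.poisson(5.0, size=(128, 128)).astype(float)  # stand-in CCD frame
                avg_map[iy, ix], filtered_map[iy, ix] = pattern_scores(frame)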

  2. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    Full Text Available This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method to reconstruct the 3-D objects. Facial images were used for training in the proposed approach.

  3. Contrast Enhancement Method Based on Gray and Its Distance Double-Weighting Histogram Equalization for 3D CT Images of PCBs

    Directory of Open Access Journals (Sweden)

    Lei Zeng

    2016-01-01

    Full Text Available Cone beam computed tomography (CBCT) is a new detection method for 3D nondestructive testing of printed circuit boards (PCBs). However, the obtained 3D image of PCBs exhibits low contrast because of several factors, such as the occurrence of metal artifacts and beam hardening, during the process of CBCT imaging. Histogram equalization (HE) algorithms cannot effectively extend the gray difference between a substrate and a metal in 3D CT images of PCBs, and the reinforcing effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and its distance double-weighting HE. Considering the characteristics of 3D CT images of PCBs, the proposed algorithm uses a gray and its-distance double-weighting strategy to change the form of the original image histogram distribution: it suppresses the grayscale of the nonmetallic substrate and expands the grayscale of wires and other metals, thereby enhancing the gray difference between substrate and metal and highlighting metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
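
    A minimal sketch of the double-weighting idea follows: each gray level's histogram count is weighted both by the gray value itself and by its distance from a nominal substrate gray before the cumulative mapping is built. The specific weighting formula and the substrate_gray parameter are illustrative assumptions, not the paper's exact definition.

```python
# Hedged sketch of gray-and-distance double-weighted histogram equalization.
# The weighting w(g) below is an illustrative choice that boosts high (metal)
# grays far from the substrate gray; it is not the paper's exact formula.
import numpy as np

def weighted_hist_equalization(img, substrate_gray, n_levels=256):
    hist, _ = np.histogram(img, bins=n_levels, range=(0, n_levels))
    g = np.arange(n_levels, dtype=float)
    # Double weighting: gray value itself times its distance from the substrate gray.
    w = (g / (n_levels - 1)) * (np.abs(g - substrate_gray) / (n_levels - 1))
    cdf = np.cumsum(hist * w)
    cdf = cdf / cdf[-1]                       # normalize to [0, 1]
    lut = np.round(cdf * (n_levels - 1)).astype(np.uint8)
    return lut[img]

# Usage on a synthetic 8-bit slice whose substrate sits around gray level 60.
slice8 = (np.random.default_rng(1).normal(60, 10, (128, 128))
          .clip(0, 255).astype(np.uint8))
enhanced = weighted_hist_equalization(slice8, substrate_gray=60)
```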

  4. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  5. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging.

    Science.gov (United States)

    Tseng, Hubert; Gage, Jacob A; Shen, Tsaiwei; Haisler, William L; Neeley, Shane K; Shiao, Sue; Chen, Jianbo; Desai, Pujan K; Liao, Angela; Hebel, Chris; Raphael, Robert M; Becker, Jeanne L; Souza, Glauco R

    2015-09-14

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5'-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (-control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z' = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments.

  6. Validation Tests of Open-Source Procedures for Digital Camera Calibration and 3d Image-Based Modelling

    Science.gov (United States)

    Toschi, I.; Rivola, R.; Bertacchini, E.; Castagnetti, C.; Dubbini, M.; Capra, A.

    2013-07-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were compared with those achieved with a test-range calibration approach using a pre-surveyed laboratory test-field. Both direct and a-posteriori validation tests turned out successfully, showing the stability and the metric accuracy of the process, even when low-textured or reflective surfaces are present in the 3D scene. Afterwards, the possibility of achieving accurate 3D models from the subsequently extracted dense point clouds is also evaluated. Three different types of sculptural elements were chosen as test-objects and "ground-truth" data were acquired with triangulation laser scanners. 3D models derived from point clouds oriented with a simplified relative procedure show a suitable metric accuracy: all comparisons delivered millimetre-level standard deviations. The use of Ground Control Points in the orientation phase did not significantly improve the accuracy of the final 3D model when a small figure-like corbel was used as test-object.

  7. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Directory of Open Access Journals (Sweden)

    Stankiewicz Agnieszka

    2016-06-01

    Full Text Available This paper presents signal processing aspects of the automatic segmentation of retinal layers of the human eye. The paper draws attention to the problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality. Some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using the so-called B-scans obtained with the OCT Copernicus HR device.

  8. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    Full Text Available In this article, we present a 3D system for specific dental plaster treatments in orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that can handle contours with complex topologies. Secondly, we present two specific treatments applied directly to the obtained 3D model: automatic correction of the occlusion between the mandible and maxilla, and tooth segmentation enabling more specific dental examinations. Finally, these specific treatments are delivered via a client/server application with the aim of allowing telediagnosis and remote treatment.

  9. A colour image reproduction framework for 3D colour printing

    Science.gov (United States)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies for full colour 3D printing are introduced. A framework for the colour image reproduction process in 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post-print colour correction, a further improvement in the colour process is achieved for 3D printed objects.

  10. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  11. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity between the images. The general purpose of the system is mapping...... of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to precision and time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas...... is treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  12. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    Science.gov (United States)

    Nam, Woo Hyun; Kang, Dong-Goo; Lee, Duhgoon; Lee, Jae Young; Ra, Jong Beom

    2012-01-01

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.

  13. Progresses in 3D integral imaging with optical processing

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics. University of Valencia. Calle Doctor Moliner 50, E46 100, Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects and the interaction with the observer of integral imaging systems.

  14. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
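
    The reference-slice criterion can be illustrated with a short sketch that scores each standardized slice by its Shannon entropy and starts from the most informative one; the paper's full criterion additionally folds in the registration mean-square error iteratively, which is omitted here.

```python
# Minimal sketch of the reference-slice idea: score each standardized slice by
# its Shannon entropy and pick the most informative one as the starting point.
import numpy as np

def slice_entropy(img, n_bins=256):
    hist, _ = np.histogram(img, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pick_reference(slices):
    """slices: iterable of 2D arrays; returns the index of the highest-entropy slice."""
    return int(np.argmax([slice_entropy(s) for s in slices]))
```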

  15. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  16. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  17. An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components

    Institute of Scientific and Technical Information of China (English)

    Divya Udayan J; HyungSeok KIM; Jee-In KIM

    2015-01-01

    The objective of this research is the rapid reconstruction of ancient buildings of historical importance using a single image. The key idea of our approach is to reduce the infinite solutions that might otherwise arise when recovering a 3D geometry from 2D photographs. The main outcome of our research shows that the proposed methodology can be used to reconstruct ancient monuments for use as proxies for digital effects in applications such as tourism, games, and entertainment, which do not require very accurate modeling. In this article, we consider the reconstruction of ancient Mughal architecture including the Taj Mahal. We propose a modeling pipeline that makes an easy reconstruction possible using a single photograph taken from a single view, without the need to create complex point clouds from multiple images or the use of laser scanners. First, an initial model is automatically reconstructed using locally fitted planar primitives along with their boundary polygons and the adjacency relation among parts of the polygons. This approach is faster and more accurate than creating a model from scratch because the initial reconstruction phase provides a set of structural information together with the adjacency relation, which makes it possible to estimate the approximate depth of the entire structural monument. Next, we use manual extrapolation and editing techniques with modeling software to assemble and adjust different 3D components of the model. Thus, this research opens up the opportunity for the present generation to experience remote sites of architectural and cultural importance through virtual worlds and real-time mobile applications. Variations of a recreated 3D monument to represent an amalgam of various cultures are targeted for future work.

  18. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components for many applications, they still lack the economical and automatic techniques for their generation while taking advantage of the available multi-sensory data and combining processing strategies. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  19. Fabrication of Large-Scale Microlens Arrays Based on Screen Printing for Integral Imaging 3D Display.

    Science.gov (United States)

    Zhou, Xiongtu; Peng, Yuyan; Peng, Rong; Zeng, Xiangyao; Zhang, Yong-Ai; Guo, Tailiang

    2016-09-14

    The low-cost, large-scale fabrication of microlens arrays (MLAs) with precise alignment, great uniformity of focusing, and good converging performance is of great importance for integral imaging 3D display. In this work, a simple and effective method for fabricating large-scale polymer microlens arrays using screen printing is presented. The results show that the MLAs possess high-quality surface morphology and excellent optical performance. Furthermore, the microlens shape and size, i.e., the diameter, the height, and the distance between two adjacent microlenses, can be easily controlled by modifying the reflowing time and the size of the open apertures of the screen. MLAs with neighboring microlenses almost tangent can be achieved under a suitable open-aperture size and reflowing time, which can remarkably reduce the color moiré patterns caused by stray light between the blank areas of the MLAs in the integral imaging 3D display system, exhibiting much better reconstruction performance.

  20. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
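
    As a rough stand-in for such a size-tuned filter, the sketch below applies a Laplacian-of-Gaussian (Mexican-hat wavelet) at a scale derived from the user-specified linear size and keeps only blob-like responses. The scale mapping is a heuristic assumption, and this is not the patented filter itself.

```python
# Hedged stand-in for a size-tuned feature filter: a Laplacian-of-Gaussian
# (Mexican-hat wavelet) at a scale set by the requested linear size, keeping
# only responses of the matching sign. Works on 2D or 3D arrays.
import numpy as np
from scipy.ndimage import gaussian_laplace

def size_tuned_filter(volume, feature_size_vox):
    sigma = feature_size_vox / (2.0 * np.sqrt(volume.ndim))   # heuristic scale mapping
    response = -gaussian_laplace(volume.astype(float), sigma)  # bright blobs -> positive
    return np.where(response > 0, response, 0.0)
```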

  1. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    Science.gov (United States)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and of their possible fusion. With the aim of defining the potential and the problems deriving from the integration or fusion of metric data acquired with different survey techniques, the chosen test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two sensor locations, terrestrial and aerial. In particular, the work consists in evaluating the possibilities offered by a simple union, or by a fusion, of the different 3D cloud models of the abbey obtained with multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was carried out by laser scanning survey. Both techniques allowed different point clouds to be extracted and processed, and consequent continuous 3D models to be generated, which are characterized by different scales, i.e. different resolutions, levels of detail, and precisions. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a single 3D model through the fusion of the point clouds from the different sensors. The descriptive potential and the metric and thematic gains achievable with the fused model exceed those offered by the two separate models.

  2. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    Science.gov (United States)

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
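
    The registration idea, minimizing an L2-type dissimilarity between two Gaussian mixtures built from the extracted points, can be sketched as below. Equal weights, a shared isotropic bandwidth and the Powell optimizer are simplifying assumptions; the curvature-based point selection and phase features are not reproduced.

```python
# Sketch of rigid point-cloud alignment by minimizing an L2-type dissimilarity
# between two isotropic Gaussian mixtures (one component per point).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def gmm_l2_cross_term(a, b, sigma):
    # Cross term of the L2 distance between two GMMs with covariance sigma^2 I;
    # maximizing it (minimizing its negative) aligns the clouds.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (4.0 * sigma ** 2)).sum()

def register_rigid(moving, fixed, sigma=2.0):
    def cost(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        return -gmm_l2_cross_term(moving @ R.T + t, fixed, sigma)
    return minimize(cost, x0=np.zeros(6), method="Powell").x

# Toy usage: the moving cloud is the fixed cloud under a small rigid motion;
# registration estimates the inverse motion.
rng = np.random.default_rng(2)
fixed = rng.normal(size=(200, 3)) * 10.0
true_R = Rotation.from_rotvec([0.0, 0.0, 0.05]).as_matrix()
moving = fixed @ true_R.T + np.array([1.0, 0.5, 0.0])
params = register_rigid(moving, fixed)
```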

  4. Interactive visualization of multiresolution image stacks in 3D.

    Science.gov (United States)

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
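
    The texel-projection criterion can be reduced to a toy rule: choose the pyramid tier whose texels, once scaled by the current viewing magnification, come closest to one display pixel. The flat scale model below is a simplification of the full 3D projection used by StackVis.

```python
# Minimal sketch of the texel-projection idea: pick the pyramid tier whose
# texels project closest to one display pixel under the current magnification.
import math

def choose_tier(full_res_texel_mm, viewer_scale_px_per_mm, n_tiers):
    """Tier 0 is full resolution; each coarser tier doubles the texel size."""
    best_tier, best_err = 0, float("inf")
    for tier in range(n_tiers):
        projected_px = full_res_texel_mm * (2 ** tier) * viewer_scale_px_per_mm
        err = abs(math.log2(projected_px))   # 0 when one texel maps to one pixel
        if err < best_err:
            best_tier, best_err = tier, err
    return best_tier

# E.g. zoomed out so 1 mm of tissue covers 2 px on screen, with 0.01 mm texels:
tier = choose_tier(0.01, 2.0, n_tiers=8)   # selects a coarse tier
```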

  5. Ultrasonic Sensor Based 3D Mapping & Localization

    Directory of Open Access Journals (Sweden)

    Shadman Fahim Ahmad

    2016-04-01

    Full Text Available This article provides a basic-level introduction to 3D mapping using sonar sensors and localization. It describes the methods used to construct a low-cost autonomous robot, along with the hardware and software used, as well as an insight into the background of autonomous robotic 3D mapping and localization. We also give an overview of the robot's future prospects in 3D-based mapping.

  6. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  7. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this way, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological information (infrared images), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of the two imaging modalities allows better quality and more fidelity, especially for medical applications in which the temperature changes are clinically significant.

  8. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    Science.gov (United States)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiency of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will induce an attitude error in the retrieved morphology, and that the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  9. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
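
    A much-simplified sketch of the reformation step is given below: the spine curve is modelled by polynomials x(z) and y(z) and the volume is resampled on a surface that follows the curve, so the curve appears straightened. The vertebral-rotation model and the optimization framework from the paper are omitted.

```python
# Simplified sketch of curved planar reformation: resample the volume on a
# surface that follows a polynomial curve x(z), y(z), straightening the curve.
import numpy as np
from scipy.ndimage import map_coordinates

def curved_reformation(volume, x_poly, y_poly, half_width=40):
    """volume indexed as [z, y, x]; x_poly/y_poly are np.poly1d giving the curve."""
    nz = volume.shape[0]
    z = np.arange(nz, dtype=float)
    offsets = np.arange(-half_width, half_width + 1, dtype=float)
    zz, uu = np.meshgrid(z, offsets, indexing="ij")    # (nz, width)
    xx = x_poly(zz) + uu                               # sweep left/right of the curve
    yy = np.broadcast_to(y_poly(z)[:, None], zz.shape)
    coords = np.stack([zz, yy, xx])                    # order matches [z, y, x]
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Toy usage on a synthetic volume containing a gently curved bright column.
vol = np.zeros((60, 64, 64), dtype=float)
xc = np.poly1d([0.005, 0.2, 20.0])   # x(z) = 0.005 z^2 + 0.2 z + 20
yc = np.poly1d([32.0])               # constant y
for k in range(60):
    vol[k, 30:34, int(xc(k)) - 2:int(xc(k)) + 2] = 1.0
cpr = curved_reformation(vol, xc, yc)
```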

  10. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  11. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    Science.gov (United States)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models were done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in the experiment are kept, the process of developing the stock-taking documentation for a historical building moves from analogue to digital technology standards at a considerably reduced cost.

  12. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  13. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2
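
    The voxel-based printing step relies on half-toning: a continuous target fraction of one base material is converted into a binary per-voxel material assignment. The sketch below uses a 4 x 4 Bayer ordered-dither matrix as an illustrative stand-in for the custom dithering software; the material names and printer interface are placeholders.

```python
# Hedged sketch of two-material voxel assignment by ordered (Bayer) dithering
# of a continuous "fraction of material A" map, slice by slice.
import numpy as np

BAYER4 = (1.0 / 16.0) * np.array([[ 0,  8,  2, 10],
                                  [12,  4, 14,  6],
                                  [ 3, 11,  1,  9],
                                  [15,  7, 13,  5]], dtype=float)

def dither_slice(fraction_a):
    """fraction_a in [0, 1]; True -> deposit material A, False -> material B."""
    h, w = fraction_a.shape
    tiled = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return fraction_a > tiled

def dither_volume(fraction_volume):
    return np.stack([dither_slice(s) for s in fraction_volume])

# Usage: a smooth radial attenuation target converted to a printable voxel mask.
yy, xx = np.mgrid[0:64, 0:64]
target = np.clip(1.0 - np.hypot(yy - 32, xx - 32) / 32.0, 0, 1)
mask = dither_volume(np.repeat(target[None, :, :], 8, axis=0))
```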

  14. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

    Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included two parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from commercial GE Healthcare software and with the level-set method proposed by Yang et al. (2007). Results indicate that the proposed method performs better in terms of the efficiency analyzed. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, a one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
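
    Extracting the HHH subband referred to above can be done in a few lines with PyWavelets; the wavelet family below is an assumption, and the neutrosophic transformation and region growing are not reproduced.

```python
# Sketch of extracting the HHH (detail along every axis) subband of a one-level
# 3D discrete wavelet transform with PyWavelets.
import numpy as np
import pywt

def hhh_subband(volume, wavelet="db1"):
    coeffs = pywt.dwtn(volume, wavelet)   # keys like 'aaa', 'aad', ..., 'ddd'
    return coeffs["ddd"]                  # 'd' on every axis corresponds to HHH

vol = np.random.default_rng(3).normal(size=(32, 32, 32))
hhh = hhh_subband(vol)                    # roughly half-size along each axis
```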

  15. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  16. A Integrated Service Platform for Remote Sensing Image 3D Interpretation and Draughting based on HTML5

    Science.gov (United States)

    LIU, Yiping; XU, Qing; ZHANG, Heng; LV, Liang; LU, Wanjie; WANG, Dandi

    2016-11-01

    The purpose of this paper is to solve the problems of traditional single-purpose systems for interpretation and draughting, such as inconsistent standards, limited functionality, dependence on plug-ins, closed architecture, and a low level of integration. On the basis of a comprehensive analysis of the composition of target elements, map representation, and the features of similar systems, a 3D interpretation and draughting integrated service platform for multi-source, multi-scale and multi-resolution geospatial objects is established based on HTML5 and WebGL. The platform not only integrates object recognition, access, retrieval, three-dimensional display and test evaluation, but also supports the collection, transfer, storage, refreshing and maintenance of geospatial object data, and shows clear prospects and potential for growth.

  17. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  18. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images.

    Science.gov (United States)

    Jonić, S; Thévenaz, P; Zheng, G; Nolte, L-P; Unser, M

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
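
    Two ingredients of the algorithm, spline-based sampling of the volume and a least-squares dissimilarity against the C-arm image, can be sketched as follows. The parallel-ray projection and the omission of the multiresolution pyramid and rigid-body optimizer are simplifications of mine, not the authors' implementation.

```python
# Minimal stand-in: cubic-spline resampling of the CT volume followed by a
# ray-sum projection, compared to a target image with a least-squares measure.
import numpy as np
from scipy.ndimage import rotate

def parallel_projection(volume, angle_deg):
    # Rotate the volume with cubic-spline interpolation, then integrate along x.
    rotated = rotate(volume, angle_deg, axes=(1, 2), order=3, reshape=False)
    return rotated.sum(axis=2)

def ssd(projection, target_image):
    d = projection - target_image
    return float((d * d).mean())

vol = np.random.default_rng(4).random((32, 64, 64))
target = parallel_projection(vol, 10.0)
# The dissimilarity grows as the pose moves away from the true angle.
print(ssd(parallel_projection(vol, 12.0), target) > ssd(parallel_projection(vol, 10.0), target))
```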

  19. ONLY IMAGE BASED FOR THE 3D METRIC SURVEY OF GOTHIC STRUCTURES BY USING FRAME CAMERAS AND PANORAMIC CAMERAS

    OpenAIRE

    Pérez Ramos, A.; G. Robleda Prieto

    2016-01-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization me...

  20. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Science.gov (United States)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the reconstructed 3D positions using the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm, respectively. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X

  1. Image-based analysis of the internal microstructure of bone replacement scaffolds fabricated by 3D printing

    Science.gov (United States)

    Irsen, Stephan H.; Leukers, Barbara; Bruckschen, Björn; Tille, Carsten; Seitz, Hermann; Beckmann, Felix; Müller, Bert

    2006-08-01

    Rapid prototyping, and especially 3D printing, allows complex porous ceramic scaffolds to be generated directly from powders. Furthermore, these technologies allow the manufacture of patient-specific implants of centimeter size with an internal pore network that mimics bony structures including vascularization. Besides the biocompatibility of the base material, a high degree of open, interconnected porosity is crucial for the success of a synthetic bone graft. Pores with diameters between 100 and 500 μm are the prerequisite for vascularization to supply the cells with nutrients and oxygen, because simple diffusion transport is ineffective. The quantification of porosity on the macro-, micro-, and nanometer scale using well-established techniques such as Hg-porosimetry and electron microscopy is limited. Alternatively, we have applied synchrotron-radiation-based micro computed tomography (SRμCT) to determine the porosity with high precision and to validate the macroscopic internal structure of the scaffold. We report on the difficulties of intensity-based segmentation for nanoporous materials, but we also elucidate the power of SRμCT in the quantitative analysis of pores at the different length scales.

  2. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  3. A system for finding a 3D target without a 3D image

    Science.gov (United States)

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
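
    The triangulation step, finding the 3D position whose projections are closest to the user's 2D picks, is a standard linear least-squares (DLT) problem once the projection matrices of the tracked fluoroscope are known. The sketch below illustrates it with synthetic projection matrices; no non-linear refinement is included.

```python
# Sketch of least-squares triangulation: given 3x4 projection matrices and the
# 2D picks of a target in each view, recover the best-fit 3D point (linear DLT).
import numpy as np

def triangulate(proj_mats, points_2d):
    """proj_mats: list of 3x4 arrays; points_2d: list of (u, v) picks."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)     # null-space vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage with two synthetic views of the point (10, 20, 30).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[5.0], [0.0], [0.0]])])
X = np.array([10.0, 20.0, 30.0, 1.0])
def proj(P, Xh):
    x = P @ Xh
    return x[0] / x[2], x[1] / x[2]
print(triangulate([P1, P2], [proj(P1, X), proj(P2, X)]))   # ~ [10, 20, 30]
```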

  4. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    Science.gov (United States)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

    Singosari temple is one of the cultural heritage buildings in East Java, Indonesia, which was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish the correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM zone 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m, and RMSEz = 0.049 m, which represents a displacement of 0.071 m in 3D space. In addition, the mean difference in length measurements of the object is 0.057 m. With this accuracy, the method can be used to map the site up to a scale of 1:237. Although the accuracy level is still in centimeters, combining aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.

  5. Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: an anatomofunctional transgenic mouse brain imaging study.

    Science.gov (United States)

    Lebenberg, J; Hérard, A-S; Dubois, A; Dauguet, J; Frouin, V; Dhenain, M; Hantraye, P; Delzescaux, T

    2010-07-01

    Murine models are commonly used in neuroscience to improve our knowledge of disease processes and to test drug effects. To accurately study neuroanatomy and brain function in small animals, histological staining and ex vivo autoradiography remain the gold standards to date. These analyses are classically performed by manually tracing regions of interest, which is time-consuming. For this reason, only a few 2D tissue sections are usually processed, resulting in a loss of information. We therefore proposed to match a 3D digital atlas with previously 3D-reconstructed post mortem data to automatically evaluate morphology and function in mouse brain structures. We used a freely available MRI-based 3D digital atlas derived from C57Bl/6J mouse brain scans (9.4T). The histological and autoradiographic volumes used were obtained from a preliminary study in APP(SL)/PS1(M146L) transgenic mice, models of Alzheimer's disease, and their control littermates (PS1(M146L)). We first deformed the original 3D MR images to match our experimental volumes. We then applied deformation parameters to warp the 3D digital atlas to match the data to be studied. The reliability of our method was qualitatively and quantitatively assessed by comparing atlas-based and manual segmentations in 3D. Our approach yields faster and more robust results than standard methods in the investigation of post mortem mouse data sets at the level of brain structures. It also constitutes an original method for the validation of an MRI-based atlas using histology and autoradiography as anatomical and functional references, respectively.
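
    Atlas-based versus manual segmentations of this kind are commonly compared with volume-overlap measures such as the Dice coefficient; the snippet below is a generic sketch of that comparison for two binary 3D masks, not the authors' exact evaluation protocol.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice overlap between two binary 3D segmentations (same shape)."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            denom = a.sum() + b.sum()
            return 2.0 * intersection / denom if denom else 1.0

        # e.g. dice_coefficient(atlas_based_mask, manual_mask) -> value in [0, 1]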

  6. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.
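
    The factorization idea at the heart of this approach, expressing the unknown 3D pose as a linear combination of trained base poses and fitting the mixing coefficients to the 2D observations, reduces to linear least squares once the camera is fixed. The sketch below assumes a simple orthographic camera and known base poses; it illustrates that step only and omits the camera estimation, periodicity assumptions and bone-length regularization described in the abstract.

        import numpy as np

        def fit_mixing_coefficients(base_poses, cam, joints_2d):
            """base_poses : (K, 3, J) trained base poses for J joints.
            cam        : (2, 3) orthographic camera matrix.
            joints_2d  : (2, J) observed 2D joint positions.
            Finds c such that cam @ sum_k c[k] * base_poses[k] ~ joints_2d.
            """
            # Each base pose projects to a (2, J) image pattern; stack them as columns.
            A = np.stack([(cam @ B).ravel() for B in base_poses], axis=1)   # (2J, K)
            c, *_ = np.linalg.lstsq(A, joints_2d.ravel(), rcond=None)
            pose_3d = np.tensordot(c, base_poses, axes=1)                   # (3, J)
            return c, pose_3d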

  7. 3D Simulation of Lunar Surface Based on Single Image

    Institute of Scientific and Technical Information of China (English)

    徐鹍; 周杨; 滕飞; 李建胜

    2012-01-01

    A fast method for 3D reconstruction and illumination simulation of the lunar surface from a single image is proposed, based on the Shape from Shading (SFS) principle. Using the 3D cue that remains in a single 2D image, namely its gray-level information, the 3D shape of the lunar surface is rapidly reconstructed with SFS and then rendered with an improved Hapke illumination model. Experimental results show that, within the permitted precision, the method can quickly extract and simulate the 3D topography of the lunar surface.
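
    Shape-from-shading methods of this kind invert an image irradiance equation that links the surface gradients (p, q) to the observed brightness. As a simplified illustration, using a Lambertian reflectance map rather than the improved Hapke model of the paper, the forward model can be written as follows.

        import numpy as np

        def lambertian_reflectance(p, q, ps, qs):
            """Reflectance R(p, q) for surface gradients p = dz/dx, q = dz/dy
            under a distant light source with gradient-space direction (ps, qs).
            SFS seeks the gradient field whose R best matches the observed image."""
            num = 1.0 + p * ps + q * qs
            den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
            return np.clip(num / den, 0.0, None)   # clamp self-shadowed facets to zero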

  8. 3D micro-particle image modeling and its application in measurement resolution investigation for visual sensing based axial localization in an optical microscope

    Science.gov (United States)

    Wang, Yuliang; Li, Xiaolai; Bi, Shusheng; Zhu, Xiaofeng; Liu, Jinhua

    2017-01-01

    Visual sensing based three-dimensional (3D) particle localization in an optical microscope is important for both fundamental studies and practical applications. Compared with lateral (X and Y) localization, it is more challenging to achieve high-resolution measurement of the axial particle location. In this study, we aim to investigate the effect of different factors on axial measurement resolution through an analytical approach. Analytical models were developed to simulate 3D particle imaging in an optical microscope. A radius vector projection method was applied to convert the simulated particle images into radius vectors. With the obtained radius vectors, a quantity termed the axial changing rate was proposed to evaluate the measurement resolution of axial particle localization. Experiments were also conducted for comparison with the results obtained through simulation. Moreover, with the proposed method, the effects of particle size on measurement resolution were discussed. The results show that the method provides an efficient approach to investigate the resolution of axial particle localization.
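
    The radius vector projection step, collapsing a roughly circularly symmetric particle image into a 1D radial intensity profile whose depth-dependent changes carry the axial information, can be sketched as a radial binning operation. The function below is an assumed form of such a projection for illustration only; the paper's exact definition may differ.

        import numpy as np

        def radial_profile(image, center=None, n_bins=64):
            """Mean intensity as a function of radius from the particle centre."""
            h, w = image.shape
            cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
            y, x = np.indices(image.shape)
            r = np.hypot(y - cy, x - cx)
            edges = np.linspace(0.0, r.max(), n_bins + 1)
            idx = np.digitize(r.ravel(), edges) - 1          # bin index per pixel
            sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins + 1)
            counts = np.bincount(idx, minlength=n_bins + 1)
            return sums[:n_bins] / np.maximum(counts[:n_bins], 1)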

  9. ONLY IMAGE BASED FOR THE 3D METRIC SURVEY OF GOTHIC STRUCTURES BY USING FRAME CAMERAS AND PANORAMIC CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Pérez Ramos

    2016-06-01

    Full Text Available The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, owing to its lighting conditions and architecture. Light entering through large windows, in combination with the shape of the apse, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, image-based documentation techniques are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost, so in most cases scanning techniques are used only to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous approach. Fieldwork and office work are simplified. The proposed methodology provides photographs of sufficient quality for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate additional deliverables, such as immersive virtual tours, without extra time spent in the field. In order to verify the usefulness of the method, it was applied to the apse, since this is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  10. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    Science.gov (United States)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, owing to its lighting conditions and architecture. Light entering through large windows, in combination with the shape of the apse, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, image-based documentation techniques are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost, so in most cases scanning techniques are used only to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous approach. Fieldwork and office work are simplified. The proposed methodology provides photographs of sufficient quality for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate additional deliverables, such as immersive virtual tours, without extra time spent in the field. In order to verify the usefulness of the method, it was applied to the apse, since this is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  11. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    Science.gov (United States)

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  12. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    Science.gov (United States)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of 3D full-color building models. One uses a point-wise sensing device to densely sample an unknown object and attaches color textures from a digital camera separately. The other uses an image-based approach that reconstructs the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles such that each model face shows up in at least two, and no more than three, of the pictures taken at these angles. The 3D model can then be reconstructed with a minimum amount of labor spent in correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show exactly the same topology and

  13. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  14. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  15. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it indicates how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sub-level semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  16. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Kim, Jae Gon; Lee, Soo Yeol; Hasan, Md Kamrul

    2011-10-07

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.

  17. Insulator icing monitoring based on 3D image reconstruction

    Institute of Scientific and Technical Information of China (English)

    杨浩; 吴畏

    2013-01-01

    An online monitoring method for insulator icing based on image-based 3D reconstruction is proposed, applying computer binocular vision techniques. Two images of an ice-covered insulator are taken by two cameras at different locations. After camera calibration, the 3D coordinates of the iced insulator are calculated from the parallax between the two images and its 3D model is reconstructed; the thickness and weight of the ice are then calculated from the resulting 3D point cloud model. 3D models were reconstructed for multiple sets of real images and the calculated results were compared with manual measurements, which show that the method computes the thickness and weight of insulator icing accurately and also captures the distribution of the icing thickness.
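
    The binocular step relies on the standard relation between disparity and depth for calibrated, rectified stereo cameras, Z = f·B/d; the sketch below uses illustrative parameter names and is not the paper's implementation.

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Depth Z = f * B / d for a rectified stereo pair.

            disparity_px : per-pixel disparity in pixels (scalar or array, > 0).
            focal_px     : focal length expressed in pixels.
            baseline_m   : distance between the two camera centres in metres.
            """
            d = np.asarray(disparity_px, dtype=float)
            return focal_px * baseline_m / np.where(d > 0, d, np.nan)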

  18. Review: Polymeric-Based 3D Printing for Tissue Engineering.

    Science.gov (United States)

    Wu, Geng-Hsi; Hsu, Shan-Hui

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.

  19. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
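
    The registration minimizes a least-squares dissimilarity between the C-arm images and projections of the CT volume. The sketch below strips that idea to its bare bones, with a parallel projection instead of the calibrated C-arm geometry, a single rotation parameter, and no cubic B-spline model or multiresolution pyramid, so all names and choices here are illustrative rather than the authors' algorithm.

        import numpy as np
        from scipy.ndimage import rotate
        from scipy.optimize import minimize_scalar

        def project(volume, angle_deg):
            """Toy parallel-beam projection: rotate the volume, then sum along one axis."""
            rot = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
            return rot.sum(axis=1)

        def register_rotation(volume, carm_image):
            """Find the rotation angle whose projection best matches the C-arm image."""
            def ssd(angle):
                diff = project(volume, angle) - carm_image
                return float((diff ** 2).sum())
            result = minimize_scalar(ssd, bounds=(-30.0, 30.0), method="bounded")
            return result.x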

  20. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
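
    The fusion and rendering steps, taking the per-pixel median of the disparity fields borrowed from photometrically similar stereopairs and applying it to the 2D query to synthesize the right view, can be sketched as below. The forward warping here is deliberately naive (integer shifts, holes left unfilled), so it only illustrates the idea rather than the occlusion handling described in the abstract.

        import numpy as np

        def fuse_and_render(left_image, candidate_disparities):
            """left_image: (H, W) or (H, W, 3); candidate_disparities: (N, H, W)."""
            disparity = np.median(candidate_disparities, axis=0)    # robust fusion
            h, w = disparity.shape
            right = np.zeros_like(left_image)
            xs = np.arange(w)
            for y in range(h):
                # Shift each pixel horizontally by its (rounded) disparity.
                x_new = np.clip(xs - np.round(disparity[y]).astype(int), 0, w - 1)
                right[y, x_new] = left_image[y, xs]
            return disparity, right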

  1. Image-Based 3d Reconstruction Data as AN Analysis and Documentation Tool for Architects: the Case of Plaka Bridge in Greece

    Science.gov (United States)

    Kouimtzoglou, T.; Stathopoulou, E. K.; Agrafiotis, P.; Georgopoulos, A.

    2017-02-01

    Modern advances in the field of image-based 3D reconstruction of complex architectures are valuable tools that may offer researchers great possibilities for integrating such procedures into their studies. In the same way that photogrammetry has been a well-known and useful tool in the cultural heritage community for years, state-of-the-art reconstruction techniques generate complete and easy-to-use 3D data, thus enabling engineers, architects and other cultural heritage experts to approach their case studies in an exhaustive and efficient way. The generated data can be a valuable and accurate basis upon which further plans and studies can be drafted. These and other aspects of the use of image-based 3D data for architectural studies are presented and analysed in this paper, based on the experience gained from a specific case study, the Plaka Bridge. This historic structure is of particular interest, as it was recently lost due to extreme weather conditions and serves as strong proof that preventive actions are of the utmost importance in order to preserve our common past.

  2. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector, however, makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors, using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the intermediate frequency (IF) yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. The imaging system is shown to be capable of imaging objects at distances of at least 10 meters.
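
    In a chirp (FMCW) scheme like the one described, the range at each pixel follows from the measured intermediate-frequency (beat) value through the standard relation R = c·f_IF·T/(2·B), where T is the chirp duration and B the swept bandwidth. The numbers below are illustrative only, not taken from the paper.

        C = 299_792_458.0   # speed of light, m/s

        def range_from_if(f_if_hz, chirp_duration_s, chirp_bandwidth_hz):
            """Target range for a linear FMCW chirp: R = c * f_IF * T / (2 * B)."""
            return C * f_if_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

        # Example: a 1 GHz chirp swept over 1 ms with a 67 kHz beat -> about 10 m.
        print(round(range_from_if(67e3, 1e-3, 1e9), 1))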

  3. Metal-based nanorods as molecule-specific contrast agents for reflectance imaging in 3D tissues

    OpenAIRE

    Javier, David J.; Nitin, Nitin; Roblyer, Darren M.; Richards-Kortum, Rebecca

    2008-01-01

    Anisotropic metal-based nanomaterials have been proposed as potential contrast agents due to their strong surface plasmon resonance. We evaluated the contrast properties of gold, silver, and gold-silver hybrid nanorods for molecular imaging applications in three-dimensional biological samples. We used diffuse reflectance spectroscopy to predict the contrast properties of different types of nanorods embedded in biological model systems of increasing complexity. The predicted contrast propertie...

  4. A new image reconstruction method for 3-D PET based upon pairs of near-missing lines of response

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456-8611 (Japan) and Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474-8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644-0023 (Japan)

    2007-02-01

    We formerly introduced a new image reconstruction method for three-dimensional positron emission tomography, which is based upon pairs of near-missing lines of response. This method uses an elementary geometric property of lines of response, namely that two lines of response which originate from radioactive isotopes located within a sufficiently small voxel, will lie within a few millimeters of each other. The effectiveness of this method was verified by performing a simulation using GATE software and a digital Hoffman phantom.

  5. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view, this is our workflow: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the theoretical 3D model and the 3D model based on image matching.

  6. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.

    Science.gov (United States)

    Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

    2008-08-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application, wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method utilizing a total of 60 data sets of MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the ranges 89%-97% and 0.2%-0.7%, respectively. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time per data set, which makes it significantly more efficient than live wire, the method currently available for the task that can be used routinely.

  7. Vhrs Stereo Images for 3d Modelling of Buildings

    Science.gov (United States)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents a project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation - Summit Evolution combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the pair of satellite images, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS errors for the model reconstructed on the basis of the RPC coefficients alone were 16.6 m, 2.7 m and 47.4 m for the X, Y and Z coordinates, respectively. By adding 5 control points the RMS errors were improved to 0.7 m, 0.7 m and 1.0 m; the best results were achieved when the RMS errors were estimated from deviations at 17 check points (with 5 control points) and amounted to 0.4 m, 0.5 m and 0.6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards used for 3D modelling of buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (with respect to the number of details) had been reconstructed at the level of LoD1, while the accuracy of these models corresponded to the level of LoD2.

  8. VHRS STEREO IMAGES FOR 3D MODELLING OF BUILDINGS

    Directory of Open Access Journals (Sweden)

    A. Bujakiewicz

    2012-07-01

    Full Text Available The paper presents a project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation - Summit Evolution combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the pair of satellite images, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS errors for the model reconstructed on the basis of the RPC coefficients alone were 16.6 m, 2.7 m and 47.4 m for the X, Y and Z coordinates, respectively. By adding 5 control points the RMS errors were improved to 0.7 m, 0.7 m and 1.0 m; the best results were achieved when the RMS errors were estimated from deviations at 17 check points (with 5 control points) and amounted to 0.4 m, 0.5 m and 0.6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards used for 3D modelling of buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (with respect to the number of details) had been reconstructed at the level of LoD1, while the accuracy of these models corresponded to the level of LoD2.

  9. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
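
    Photometric stereo of the kind discussed recovers a per-pixel surface normal (and albedo) from images taken under several known light directions by solving a small linear system at every pixel. The sketch below is the standard Lambertian formulation, given for context only; it does not reproduce the paper's contribution of using the polarisation-separated specular component.

        import numpy as np

        def photometric_stereo(intensities, light_dirs):
            """intensities : (N, H, W) images captured under N known lights.
            light_dirs  : (N, 3) unit light-direction vectors.
            Returns surface normals (H, W, 3) and albedo (H, W), Lambertian model."""
            n, h, w = intensities.shape
            I = intensities.reshape(n, -1)                       # (N, H*W)
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # solves L @ G = I
            albedo = np.linalg.norm(G, axis=0)
            normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
            return normals, albedo.reshape(h, w)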

  10. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Sung; Park, Kwang Suk [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea); Seoul National University College of Medicine, Department of Biomedical Engineering, Seoul (Korea); Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul [Seoul National University College of Medicine, Department of Otolaryngology, Head and Neck Surgery, Seoul (Korea); Lee, Dong Soo; Jeong, Jae Min [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea)

    2005-06-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-{sup 14}C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in
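
    The group comparison described here amounts to an independent two-sample t-test at every voxel of the spatially and count-normalized volumes. The snippet below is a generic sketch of that step with illustrative variable names and an uncorrected threshold, not the authors' exact statistical pipeline.

        import numpy as np
        from scipy import stats

        def voxelwise_ttest(deaf_volumes, control_volumes, p_threshold=0.001):
            """deaf_volumes, control_volumes : (N_subjects, X, Y, Z) arrays of
            count-normalized, spatially normalized autoradiographic data.
            Returns a boolean map of voxels with significantly lower values
            in the deaf group (uncorrected threshold, for illustration)."""
            t, p = stats.ttest_ind(deaf_volumes, control_volumes, axis=0)
            return (p < p_threshold) & (t < 0)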

  11. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Directory of Open Access Journals (Sweden)

    Chia-Han Chu

    Full Text Available This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its

  12. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Science.gov (United States)

    Chu, Chia-Han; Lo, Wei-Cheng; Wang, Hsin-Wei; Hsu, Yen-Chu; Hwang, Jenn-Kang; Lyu, Ping-Chiang; Pai, Tun-Wen; Tang, Chuan Yi

    2010-10-14

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its hinge loop

  13. A Legendre orthogonal moment based 3D edge operator

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hui; SHU Huazhong; LUO Limin; J. L. Dillenseger

    2005-01-01

    This paper presents a new 3D edge operator based on Legendre orthogonal moments. The operator can be used to extract the edges of a 3D object with any window size, with more accurate surface orientation and more precise surface location, and it has a clear geometric meaning. The computational cost of the moment-based method is also considered: the computation can be greatly sped up by calculating the masks in advance. We integrate this operator into our ray-casting-based rendering of medical image data. Experimental results show that it is an effective 3D edge operator that is more accurate in position and orientation.

  14. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  15. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here, and the suitability of the available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy for the majority of measurements. Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated; Kinect II also showed significantly higher precision, while Kinect I showed significantly higher resolution. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications: Kinect I is more appropriate for short-range imaging, and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  16. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small-animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and with $^{98m}$Tc-MDP injected into the bodies of mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  17. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. In order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  18. Image quality assessment of LaBr{sub 3}-based whole-body 3D PET scanners: a Monte Carlo evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Surti, S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Karp, J S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Muehllehner, G [Philips Medical Systems, Philadelphia, PA 19104 (United States)

    2004-10-07

    The main thrust for this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr{sub 3} detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr{sub 3} has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr{sub 3} without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr{sub 3} are the lower stopping power and photo-fraction which affect both sensitivity and spatial resolution. However, in 3D PET imaging where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr{sub 3} scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr{sub 3} scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.

  19. Optimal Point Spread Function Design for 3D Imaging

    Science.gov (United States)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract the maximum physical information about the position of a single nanoscale object from its image, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method is based on maximizing the information content of the system by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that yields a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under the physically common condition of high background signal. PMID:25302889
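
    The information-maximization framework rests on the Fisher information of the imaging model: for Poisson-limited detection, the information an image carries about the emitter position can be computed from the sensitivity of the expected image to small shifts, and its inverse square root bounds the attainable localization precision (the CRLB). The sketch below uses a plain 1D Gaussian PSF with uniform background as a stand-in for the optimized pupil-plane designs of the paper; all numbers are illustrative.

        import numpy as np

        def fisher_x(psf, x0, photons, background, grid, eps=1e-3):
            """Fisher information about the emitter position x0 (Poisson model)."""
            mu = photons * psf(grid, x0) + background                          # expected counts
            dmu = photons * (psf(grid, x0 + eps) - psf(grid, x0 - eps)) / (2 * eps)
            return np.sum(dmu ** 2 / mu)

        gauss = lambda x, x0, s=1.3: np.exp(-(x - x0) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
        grid = np.linspace(-10.0, 10.0, 201)                                   # pixel positions
        info = fisher_x(gauss, 0.0, photons=3000, background=20, grid=grid)
        print("CRLB on x:", 1.0 / np.sqrt(info))                               # precision lower bound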

  20. Multi-level spherical moments based 3D model retrieval

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; HE Yuan-jun

    2006-01-01

    In this paper a novel 3D model retrieval method is proposed that employs multi-level spherical moment analysis and relies on voxelization and spherical mapping of the 3D models. For a given polygon-soup 3D model, a pose normalization step is first performed to align the model into a canonical coordinate frame, so as to define the shape representation with respect to this orientation. We then rasterize its exterior surface into cubical voxel grids, and a series of concentric spheres centered on the center of the voxel grid cuts the grid into several spherical images. Finally, the moments belonging to each sphere are computed, and the moments of all spheres constitute the descriptor of the model. Experiments showed that Euclidean distance based on this kind of feature vector can distinguish different 3D models well and that the 3D model retrieval system based on this scheme yields satisfactory performance.

  1. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
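
    The spatiotemporal classification step applies principal component analysis across the sequence of reconstructed 3D volumes. The sketch below (illustrative shapes and names, not the authors' pipeline) separates a 4D photoacoustic sequence into per-frame temporal scores and their spatial component maps.

        import numpy as np
        from sklearn.decomposition import PCA

        def pa_pca(frames, n_components=3):
            """frames : (T, X, Y, Z) sequence of reconstructed 3D PA images.
            Returns per-frame scores (T, n_components) capturing the dominant
            flow dynamics, plus the corresponding spatial component maps."""
            t = frames.shape[0]
            X = frames.reshape(t, -1).astype(float)       # one row per time point
            X -= X.mean(axis=0)                           # remove the static background
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)                 # temporal dynamics
            maps = pca.components_.reshape((n_components,) + frames.shape[1:])
            return scores, maps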

  2. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.

  3. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro, with a focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal that apoptosis is necessary and sufficient for initiating lumen formation, but that cell polarization is the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast cell line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non-growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicates that these acini grew faster than the cells comprising them. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  4. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data. Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique. Research type – general review.

  5. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information-gathering capability are discussed.

  6. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.
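
    A minimal sketch of compressive measurement and sparse recovery in the spirit of the approach above. It assumes the signal is sparse in the sampling basis (real elemental images would require a sparsifying transform) and uses an l1 solver (Lasso) as a stand-in for whichever recovery algorithm the authors used.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(42)
    n, m, k = 256, 80, 8                          # signal length, measurements, sparsity

    x_true = np.zeros(n)                          # a synthetic k-sparse "elemental image"
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random compressive measurement matrix
    y = phi @ x_true                              # m << n measurements

    solver = Lasso(alpha=1e-3, max_iter=50_000)   # l1-regularized recovery
    solver.fit(phi, y)
    x_hat = solver.coef_

    print("relative reconstruction error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))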

  7. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

    This thesis deals with 3D reconstruction from image data. Possibilities and approaches to optical scanning are described. The handheld optical 3D scanner consists of a camera and a line laser source, which is mounted at a fixed angle with respect to the camera. A suitable pad with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, laser detection and the procedure for computing points on the object surface by triangulation are described.

  8. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    Science.gov (United States)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

    In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role for simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point taking into account 3D wavelet-encoded contextual image features, which are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages associated with snakes and from the ones associated with machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
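
    A minimal sketch of the machine-learning image force idea: an off-line random forest regressor predicts, for each surface candidate point, its distance to the true vessel wall from local image features. The 3D wavelet-encoded context features are not reproduced; random placeholder features stand in for them.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_train, n_feat = 5000, 32

    # placeholder training data: context features and the distance to the true surface
    features = rng.normal(size=(n_train, n_feat))
    dist_to_surface = features[:, :4].sum(axis=1) + 0.1 * rng.normal(size=n_train)

    model = RandomForestRegressor(n_estimators=100, max_depth=12, n_jobs=-1, random_state=0)
    model.fit(features, dist_to_surface)

    # at segmentation time, the predicted distance would be converted into an external
    # image force that drives each active-contour vertex toward the estimated wall
    candidate_features = rng.normal(size=(10, n_feat))
    print(model.predict(candidate_features).round(2))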

  9. Accuracy of 3D cartilage models generated from MR images is dependent on cartilage thickness: laser scanner based validation of in vivo cartilage.

    Science.gov (United States)

    Koo, Seungbum; Giori, Nicholas J; Gold, Garry E; Dyrby, Chris O; Andriacchi, Thomas P

    2009-12-01

    Cartilage morphology change is an important biomarker for the progression of osteoarthritis. The purpose of this study was to assess the accuracy of in vivo cartilage thickness measurements from MR image-based 3D cartilage models using a laser scanning method and to test if the accuracy changes with cartilage thickness. Three-dimensional tibial cartilage models were created from MR images (in-plane resolution of 0.55 mm and thickness of 1.5 mm) of osteoarthritic knees of ten patients prior to total knee replacement surgery using a semi-automated B-spline segmentation algorithm. Following surgery, the resected tibial plateaus were laser scanned and made into 3D models. The MR image and laser-scan based models were registered to each other using a shape matching technique. The thicknesses were compared point-wise over the entire surface. The linear mixed-effects model was used for statistical testing. On average, taking into account individual variations, the thickness measurements in MRI were overestimated in thinner (<2.5 mm) regions. The cartilage thicker than 2.5 mm was accurately predicted in MRI, though the thick cartilage in the central regions was underestimated. The accuracy of thickness measurements in the MRI-derived cartilage models systematically varied according to native cartilage thickness.

  10. A novel 3D graph cut based co-segmentation of lung tumor on PET-CT images with Gaussian mixture models

    Science.gov (United States)

    Yu, Kai; Chen, Xinjian; Shi, Fei; Zhu, Weifang; Zhang, Bin; Xiang, Dehui

    2016-03-01

    Positron Emission Tomography (PET) and Computed Tomography (CT) have been widely used in clinical practice for radiation therapy. Most existing methods use only one image modality, either PET or CT, and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this paper, a novel 3D graph cut method is proposed, which integrates Gaussian Mixture Models (GMMs) into the graph cut framework. We also employed the random walk method as an initialization step to provide object seeds and improve the graph cut based segmentation on PET and CT images. The constructed graph consists of two sub-graphs and a special link between the sub-graphs that penalizes segmentation differences between the two modalities. Finally, the segmentation problem is solved by the max-flow/min-cut method. The proposed method was tested on 20 patients' PET-CT images, and the experimental results demonstrated the accuracy and efficiency of the proposed algorithm.
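
    A minimal sketch of the GMM component of the method: mixtures fitted to foreground and background seed intensities (for example, from the random walk initialization) yield the unary costs that a graph cut solver would minimize. The two-modality graph construction and the max-flow step are omitted here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_unary_costs(image, fg_seeds, bg_seeds, n_components=3):
        """image: 3D volume; fg_seeds/bg_seeds: boolean masks of seed voxels."""
        fg = GaussianMixture(n_components, random_state=0).fit(image[fg_seeds].reshape(-1, 1))
        bg = GaussianMixture(n_components, random_state=0).fit(image[bg_seeds].reshape(-1, 1))
        voxels = image.reshape(-1, 1)
        cost_fg = -fg.score_samples(voxels).reshape(image.shape)  # negative log-likelihood
        cost_bg = -bg.score_samples(voxels).reshape(image.shape)
        return cost_fg, cost_bg                                   # unary terms for the graph

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        vol = rng.normal(size=(32, 32, 32))
        vol[14:18, 14:18, 14:18] += 3.0                           # a bright toy "tumour"
        fg = np.zeros_like(vol, dtype=bool); fg[14:18, 14:18, 14:18] = True
        bg = np.zeros_like(vol, dtype=bool); bg[:4] = True
        c_fg, c_bg = gmm_unary_costs(vol, fg, bg)
        print(int((c_fg < c_bg).sum()), "voxels currently favour the object label")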

  11. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  12. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    Science.gov (United States)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method being developed by ARESlab of Rome's La Sapienza University for the documentation and representation of archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in the Campania region along Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from the local necropolis, which contained more than 100,000 artifacts collected in the "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project concerns the archaeological area of Insula Volusiana, located in the Forum Boarium close to the Sant'Omobono sacred area. During the studies, photographs were taken with Canon EOS 5D Mark II and Canon EOS 600D cameras. 3D models and meshes were created in Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner was used for dense data collection of the archaeological area in Rome and for a metric comparison between range-based and image-based techniques. In these projects, image-based modeling, as a low-cost technique, proved capable of high accuracy when planned correctly, and it also helped to record complex strata and architectures compared with traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially given the "destructive" character of an excavation. The presented methodology is suitable for 3D recording, and its accuracy also improves the scientific value of the documentation.

  13. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and it enables measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanner showed that tomographic images can be obtained with resolution as high as 3.2 mm^-1, with only a 9% decrease in resolution for objects moving at velocities of 1 cm/s in 2D mode, and a static spatial resolution of 3.55 mm^-1, with only a 14% decrease in resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system for imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the dynamic CT system were not statistically different (p < 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  14. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    Full Text Available If 3D technologies allow an accurate and relevant recording of rock art, they also offer particularly interesting applications for its analysis. Through point-cloud processing and simulations, they permit a wide range of manipulations concerning both the observation and the study of parietal works. In particular, they allow a refined perception of their volumetry and become very useful tools for comparing forms, both in reconstructing the chronologies of graphic ensembles and in identifying analogies between different sites. These analytical tools are illustrated by the original work carried out on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l'Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  15. Joint calibration of 3D resist image and CDSEM

    Science.gov (United States)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  16. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used for the extraction of 3D data from a scene. The main application areas are architecture, archaeology and city planning. Though the raw scanner data have gray-scale values, the 3D data can be merged with colour camera image values to obtain a textured 3D model of the scene. These devices are also able to produce a reliable 3D copy of objects with a high level of accuracy. Therefore, the scanned scenes can be used...

  17. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    Science.gov (United States)

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  18. 3D Ground Penetrating Imaging Radar

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  19. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  20. Surgeon-Based 3D Printing for Microvascular Bone Flaps.

    Science.gov (United States)

    Taylor, Erin M; Iorio, Matthew L

    2017-03-04

    Background Three-dimensional (3D) printing has developed as a revolutionary technology with the capacity to design accurate physical models in preoperative planning. We present our experience in surgeon-based design of 3D models, using home 3D software and printing technology for use as an adjunct in vascularized bone transfer. Methods Home 3D printing techniques were used in the design and execution of vascularized bone flap transfers to the upper extremity. Open source imaging software was used to convert preoperative computed tomography scans and create 3D models. These were printed in the surgeon's office as 3D models for the planned reconstruction. Vascularized bone flaps were designed intraoperatively based on the 3D printed models. Results Three-dimensional models were created for intraoperative use in vascularized bone flaps, including (1) medial femoral trochlea (MFT) flap for scaphoid avascular necrosis and nonunion, (2) MFT flap for lunate avascular necrosis and nonunion, (3) medial femoral condyle (MFC) flap for wrist arthrodesis, and (4) free fibula osteocutaneous flap for distal radius septic nonunion. Templates based on the 3D models allowed for the precise and rapid contouring of well-vascularized bone flaps in situ, prior to ligating the donor pedicle. Conclusions Surgeon-based 3D printing is a feasible, innovative technology that allows for the precise and rapid contouring of models that can be created in various configurations for pre- and intraoperative planning. The technology is easy to use, convenient, and highly economical as compared with traditional send-out manufacturing. Surgeon-based 3D printing is a useful adjunct in vascularized bone transfer. Level of Evidence Level IV.

  1. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition

    Directory of Open Access Journals (Sweden)

    Abdulla Al-Rawabdeh

    2016-01-01

    Full Text Available Landslides often cause economic losses, property damage, and loss of lives. Monitoring landslides using high spatial and temporal resolution imagery and the ability to quickly identify landslide regions are the basis for emergency disaster management. This study presents a comprehensive system that uses unmanned aerial vehicles (UAVs) and Semi-Global dense Matching (SGM) techniques to identify and extract landslide scarp data. The selected study area is located along a major highway in a mountainous region in Jordan, and contains creeping landslides induced by heavy rainfall. Field observations across the slope body and a deformation analysis along the highway and existing gabions indicate that the slope is active and that scarp features across the slope will continue to open and develop new tension crack features, leading to the downward movement of rocks. The identification of landslide scarps in this study was performed via a dense 3D point cloud of topographic information generated from high-resolution images captured using a low-cost UAV and a target-based camera calibration procedure for a low-cost large-field-of-view camera. An automated approach was used to accurately detect and extract the landslide head scarps based on geomorphological factors: the ratio of normalized eigenvalues (i.e., λ1/λ2 ≥ λ3) derived using principal component analysis, topographic surface roughness index values, and local-neighborhood slope measurements from the 3D image-based point cloud. Validation of the results was performed using root mean square error analysis and a confusion (error) matrix between manually digitized landslide scarps and the automated approaches. The experimental results using the fully automated 3D point-based analysis algorithms show that these approaches can effectively distinguish landslide scarps. The proposed algorithms can accurately identify and extract landslide scarps with centimeter-scale accuracy. In addition, the combination
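
    A minimal sketch of the point-cloud eigenvalue features mentioned above: for each point, principal component analysis over a local neighbourhood gives eigenvalues whose ratios and smallest component (a roughness proxy) can flag scarp-like points. The radius and thresholds below are illustrative, not the values used in the study.

    import numpy as np
    from scipy.spatial import cKDTree

    def local_eigen_features(points: np.ndarray, radius: float = 1.0) -> np.ndarray:
        """Returns, per point: [lambda1/lambda2 ratio, roughness (lambda3), planarity]."""
        tree = cKDTree(points)
        feats = np.zeros((len(points), 3))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, radius)
            if len(idx) < 4:
                continue
            nbrs = points[idx] - points[idx].mean(axis=0)
            lam = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # lambda1 >= lambda2 >= lambda3
            lam = lam / (lam.sum() + 1e-12)                  # normalized eigenvalues
            feats[i] = [lam[0] / (lam[1] + 1e-12), lam[2], (lam[1] - lam[2]) / (lam[0] + 1e-12)]
        return feats

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        cloud = rng.uniform(0, 10, size=(2000, 3))
        cloud[:, 2] *= 0.05                                  # a rough, nearly planar surface
        f = local_eigen_features(cloud, radius=0.8)
        candidates = (f[:, 0] > 5.0) & (f[:, 1] > 0.01)      # illustrative scarp thresholds
        print(int(candidates.sum()), "candidate scarp points")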

  2. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
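
    A minimal sketch of the recursive dynamic partitioning idea: a volume is split into octants, and a hyper-rectangle whose ROI voxel count does not discriminate the two classes (here a two-sample t-test with an illustrative threshold) is split further until it becomes too small. The attribute selection details and the neural-network classifier are not reproduced.

    import numpy as np
    from scipy import stats

    def recursive_partition(volumes, labels, bounds, min_size=4, alpha=0.01, attributes=None):
        """volumes: (N, D, D, D) binary ROI masks; bounds: ((x0, x1), (y0, y1), (z0, z1))."""
        if attributes is None:
            attributes = []
        (x0, x1), (y0, y1), (z0, z1) = bounds
        counts = volumes[:, x0:x1, y0:y1, z0:z1].sum(axis=(1, 2, 3))
        _, p = stats.ttest_ind(counts[labels == 0], counts[labels == 1], equal_var=False)
        if np.isfinite(p) and p < alpha:
            attributes.append((bounds, counts))              # discriminative: keep as attribute
        elif min(x1 - x0, y1 - y0, z1 - z0) >= 2 * min_size:
            xm, ym, zm = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
            for bx in ((x0, xm), (xm, x1)):
                for by in ((y0, ym), (ym, y1)):
                    for bz in ((z0, zm), (zm, z1)):
                        recursive_partition(volumes, labels, (bx, by, bz),
                                            min_size, alpha, attributes)
        return attributes

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        vols = (rng.random((40, 32, 32, 32)) > 0.97).astype(np.uint8)
        labs = np.array([0] * 20 + [1] * 20)
        vols[labs == 1, 8:16, 8:16, 8:16] = 1                # class 1 has an extra dense ROI
        attrs = recursive_partition(vols, labs, ((0, 32), (0, 32), (0, 32)))
        print(len(attrs), "discriminative hyper-rectangles")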

  3. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  4. Development and evaluation of a LOR-based image reconstruction with 3D system response modeling for a PET insert with dual-layer offset crystal design

    Science.gov (United States)

    Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J.; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D.; Goertzen, Andrew L.

    2013-12-01

    In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer-edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either single-layer (SL) or multi-layer array design with or without offset crystals. For effective data organization, a direct lines of response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometry and detector designs. The full-width-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs. By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal
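
    A minimal sketch of the ray/cylinder test described above: a line of response is intersected with the virtual cylinders bounding a crystal layer, giving the ray parameters used to decide which detector blocks the ray traverses. The radii and ray are illustrative, not the geometry of the actual insert.

    import numpy as np

    def ray_cylinder_intersections(p0, d, radius):
        """Intersect the ray p(t) = p0 + t*d with an infinite cylinder x^2 + y^2 = r^2
        whose axis is the scanner (z) axis; returns the sorted parameters t, if any."""
        p0, d = np.asarray(p0, float), np.asarray(d, float)
        a = d[0] ** 2 + d[1] ** 2
        b = 2.0 * (p0[0] * d[0] + p0[1] * d[1])
        c = p0[0] ** 2 + p0[1] ** 2 - radius ** 2
        disc = b * b - 4.0 * a * c
        if a < 1e-12 or disc < 0:
            return np.array([])        # ray parallel to the axis, or it misses the cylinder
        sq = np.sqrt(disc)
        return np.sort(np.array([(-b - sq) / (2 * a), (-b + sq) / (2 * a)]))

    if __name__ == "__main__":
        p0 = np.array([0.0, -100.0, 5.0])                 # a point on the line of response
        d = np.array([0.3, 1.0, 0.0]); d /= np.linalg.norm(d)
        for r_inner, r_outer in [(30.0, 36.0), (36.0, 42.0)]:   # two illustrative crystal layers
            t_in = ray_cylinder_intersections(p0, d, r_inner)
            t_out = ray_cylinder_intersections(p0, d, r_outer)
            print("layer", r_inner, "-", r_outer, "mm:", t_in.round(2), t_out.round(2))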

  5. Intensity-Based Registration of Freehand 3D Ultrasound and CT-scan Images of the Kidney

    CERN Document Server

    Leroy, Antoine; Payan, Yohan; Troccaz, Jocelyne

    2007-01-01

    This paper presents a method to register a pre-operative Computed-Tomography (CT) volume to a sparse set of intra-operative Ultra-Sound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information to an intra-operative coordinate system. The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming the reproducibility of kidney motion during breathing, and no deformation of the organ, the method consists in optimizing a rigid 6 Degree Of Freedom (DOF) transform by evaluating at each step the similarity between the set of US images and the CT volume. The correlation between CT and US images being naturally rather poor, the images have been preprocessed in order to increase their similarity. Among the similarity measures formerly studied in the context of medical image registration, Correlation Ratio (CR) turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, n...

  6. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to tasked-based imaging with a robotic C-arm

    Science.gov (United States)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree-of-freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibration methods were on the order of 10^-3 mm^-1. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.
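
    A minimal sketch of per-projection self-calibration under strong simplifications: the geometric parameters of one view are found by maximizing a similarity metric between the measured projection and a digitally reconstructed radiograph (DRR) of the prior 3D image. The DRR and the normalized gradient information metric used in the paper are replaced by crude placeholders (a shifted parallel projection and a correlation coefficient) so the sketch runs stand-alone.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from scipy.optimize import minimize

    def drr(volume, params):
        """Placeholder DRR: parallel projection along z after a sub-voxel in-plane shift
        (a real DRR would ray-cast the volume under the full source/detector geometry)."""
        shifted = nd_shift(volume, (params[0], params[1], 0.0), order=1, mode="nearest")
        return shifted.sum(axis=2)

    def neg_similarity(params, volume, projection):
        """Placeholder metric: negative correlation coefficient between projection and DRR."""
        return -np.corrcoef(drr(volume, params).ravel(), projection.ravel())[0, 1]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        vol = rng.random((32, 32, 32))
        true_params = np.zeros(9); true_params[:2] = [3.0, -2.0]   # hidden geometry of this view
        measured = drr(vol, true_params)

        x0 = np.zeros(9)                                           # initial geometry guess
        res = minimize(neg_similarity, x0, args=(vol, measured), method="Powell")
        print("estimated in-plane shift:", res.x[:2].round(1))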

  7. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  8. Validation tests of open-source procedures for digital camera calibration and 3D image-based modelling

    OpenAIRE

    I. Toschi; Rivola, R.; Bertacchini, E; Castagnetti, C.; M. Dubbini; Capra, A.

    2013-01-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were...

  9. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. The passive methods use information contained in the images, and the active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain types of objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  10. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    Report by David N. Ford (2014) on potential cost savings with 3D printing (additive manufacturing) combined with 3D imaging and CPLM for fleet maintenance and revitalization. Research context: learning-curve savings forecast in the SHIPMAIN maintenance initiative have not materialized.

  11. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental ult...

  12. SIFT algorithm-based 3D pose estimation of femur.

    Science.gov (United States)

    Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge

    2014-01-01

    To address the lack of 3D spatial information in the digital radiography of a patient femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize the preoperative 3D visualization of a fractured femur. Compared with pure digital radiography or computed tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs between the frontal and lateral images of the patient femur and the universal femur are obtained by using the Scale Invariant Feature Transform (SIFT) method. Then, the 3D pose estimation registration parameters of the femur are calculated by using the Iterative Closest Point (ICP) algorithm. Finally, registration accuracy is evaluated based on the deviation between the six-degree-of-freedom parameters calculated by the proposed method and the preset posture parameters. After registration, the rotation error is less than 1.5°, and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning.
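
    A minimal sketch of the feature-matching step: SIFT keypoints are detected in two views and stable pairs are kept with a ratio test. The subsequent ICP-based 3D pose estimation from the paper is not reproduced, and the images here are synthetic stand-ins rather than radiographs.

    import cv2
    import numpy as np

    def stable_matches(img_a, img_b, ratio=0.75):
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return []
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
        good = []
        for pair in pairs:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:  # ratio test
                good.append((kp_a[pair[0].queryIdx].pt, kp_b[pair[0].trainIdx].pt))
        return good                    # stable 2D point pairs, e.g. input for pose estimation

    if __name__ == "__main__":
        img = np.zeros((256, 256), np.uint8)
        cv2.rectangle(img, (60, 40), (180, 200), 255, -1)
        cv2.circle(img, (120, 120), 30, 100, -1)
        moved = np.roll(img, (10, 15), axis=(0, 1))          # a crudely "re-posed" second view
        print(len(stable_matches(img, moved)), "stable point pairs")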

  13. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. Therefore we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration of the implants by the user is still necessary: the user has to perform a rough preconfiguration of both prosthesis models so that the fine matching process gets a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).

  14. A 3D algorithm based on the combined inversion of Rayleigh and Love waves for imaging and monitoring of shallow structures

    Science.gov (United States)

    Pilz, Marco; Parolai, Stefano; Woith, Heiko

    2017-01-01

    In recent years there has been increasing interest in the study of seismic noise interferometry, as it can provide a complementary approach to active-source or earthquake-based methods for imaging and continuously monitoring the shallow structure of the Earth. This meaningful information is extracted from wavefields propagating between those receiver positions at which seismic noise was recorded. Until recently, noise-based imaging relied mostly on Rayleigh waves. However, considering similar wavelengths, a combined use of Rayleigh and Love wave tomography can succeed in retrieving velocity heterogeneities at depth due to their different sensitivity kernels. Here we present a novel one-step algorithm for simultaneously inverting Rayleigh and Love wave dispersion data, aiming at identifying and describing complex 3D velocity structures. The algorithm may help to accurately and efficiently map the shear-wave velocities and the Poisson ratio of the surficial soil layers. In the high-frequency range, the scattered part of the correlation functions stabilizes sufficiently fast to provide a reliable estimate of the velocity structure, not only for imaging purposes but also for monitoring changes in the medium properties. Such monitoring can be achieved with a high spatial resolution in 3D and with a time resolution as small as a few hours. In this article, we describe a recent array experiment in a volcanic environment in Solfatara (Italy) and we show that this novel approach has identified strong velocity variations at the interface between liquid- and gas-dominated reservoirs, allowing us to localize a region which is highly dynamic due to the interaction between the deep convection and its surroundings.

  15. 3D measurement system based on computer-generated gratings

    Science.gov (United States)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to acquire the 3D profile of complex objects. The principle of the measurement system is based on triangulation with digital fringe projection, and the fringes are fully generated by computer. Thus four computer-generated fringes form the data source of phase-shifting 3D profilometry. The hardware of the system includes a computer, video camera, projector, image grabber, and a VGA board with two ports (one port links to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based synchronization method between grating projection and image capture is proposed. As for the nonlinear error of the captured fringes, a compensation method based on pixel-to-pixel gray correction is introduced. At the same time, a least-squares phase unwrapping is used to solve the phase reconstruction problem, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from the MATLAB toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time of the system is 3-5 s for a single measurement.
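
    A minimal sketch of four-step phase-shifting fringe analysis as used in this kind of system: four fringe patterns shifted by pi/2 give a wrapped phase map via the arctangent formula. A simple row-wise unwrap stands in for the weighted least-squares unwrapping described above.

    import numpy as np

    def wrapped_phase(i1, i2, i3, i4):
        """phi = atan2(I4 - I2, I1 - I3) for fringe shifts of 0, pi/2, pi and 3*pi/2."""
        return np.arctan2(i4 - i2, i1 - i3)

    if __name__ == "__main__":
        h, w = 128, 128
        x = np.linspace(0.0, 8.0 * np.pi, w)
        true_phase = np.tile(x, (h, 1)) + 0.3 * np.random.default_rng(0).normal(size=(h, w))
        fringes = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]  # captured patterns
        phi_wrapped = wrapped_phase(*fringes)
        phi = np.unwrap(phi_wrapped, axis=1)       # the weighted LS unwrapping would go here
        print("wrapped phase range:", round(float(phi_wrapped.min()), 2),
              round(float(phi_wrapped.max()), 2))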

  16. Dense 3d Point Cloud Generation from Uav Images from Image Matching and Global Optimazation

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicles (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically from over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining local match region in image- or object- space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In case of image-based matching results, we observed some blanks in 3D point clouds. In case of object space-based matching results, we observed more blunders than image-based matching ones and noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.

  17. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    NARCIS (Netherlands)

    Tran, Thanh N; Nguyen, Thanh T; Willemsz, Tofan A; van Kessel, Gijs; Frijlink, Henderik W; van der Voort Maarschalk, Kees

    2012-01-01

    Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas with its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational issue of
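
    A minimal sketch of density-based clustering of 3D voxel coordinates with DBSCAN using the scikit-learn implementation; the eps and min_samples values are illustrative and would need tuning for real micro-tomography data.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    blob_a = rng.normal(loc=(10, 10, 10), scale=1.0, size=(300, 3))
    blob_b = rng.normal(loc=(25, 12, 8), scale=1.5, size=(300, 3))
    noise = rng.uniform(0, 35, size=(60, 3))
    points = np.vstack([blob_a, blob_b, noise])      # e.g. coordinates of foreground voxels

    labels = DBSCAN(eps=1.2, min_samples=10).fit_predict(points)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(n_clusters, "clusters found;", int(np.sum(labels == -1)), "points labelled as noise")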

  18. SHINKEI - a novel 3D isotropic MR neurography technique: technical advantages over 3DIRTSE-based imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kasper, Jared M.; Wadhwa, Vibhor; Xi, Yin [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Scott, Kelly M. [University of Texas Southwestern Medical Center, Physical Medicine and Rehabilitation, Dallas, TX (United States); Rozen, Shai [University of Texas Southwestern Medical Center, Plastic Surgery, Dallas, TX (United States); Chhabra, Avneesh [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Johns Hopkins University, Baltimore, MD (United States)

    2015-06-01

    Technical assessment of SHINKEI pulse sequence and conventional 3DIRTSE for LS plexus MR neurography. Twenty-one MR neurography examinations of the LS plexus were performed at 3 T, using 1.5-mm isotropic 3DIRTSE and SHINKEI sequences. Images were evaluated for motion and pulsation artefacts, nerve signal-to-noise ratio, contrast-to-noise ratio, nerve-to-fat ratio, muscle-to-fat ratio, fat suppression homogeneity and depiction of LS plexus branches. Paired Student t test was used to assess differences in nerve conspicuity (p < 0.05 was considered statistically significant). ICC correlation was obtained for intraobserver performance. Four examinations were excluded due to prior spine surgery. Bowel motion artefacts, pulsation artefacts, heterogeneous fat saturation and patient motion were seen in 16/17, 0/17, 17/17, 2/17 on 3DIRTSE and 0/17, 0/17, 0/17, 1/17 on SHINKEI. SHINKEI performed better (p < 0.01) for nerve signal-to-noise, contrast-to-noise, nerve-to-fat and muscle-to-fat ratios. 3DIRTSE and SHINKEI showed all LS plexus nerve roots, sciatic and femoral nerves. Smaller branches including obturator, lateral femoral cutaneous and iliohypogastric nerves were seen in 10/17, 5/17, 1/17 on 3DIRTSE and 17/17, 16/17, 7/17 on SHINKEI. Intraobserver reliability was excellent. SHINKEI MRN demonstrates homogeneous and superior fat suppression with increased nerve signal- and contrast-to-noise ratios resulting in better conspicuity of smaller LS plexus branches. (orig.)

  19. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the moon for the primary purpose of intensive scientific investigation. These samples, however, also invoke cultural significance, as evidenced by the millions of people per year that visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  20. Microscopic Image 3D Reconstruction Based on Laplace Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡致杰; 何晓昀

    2015-01-01

    The observation and analysis of the three-dimensional surface of microscopic objects is increasingly important in fields such as industry, medicine, art and computing. We propose an efficient algorithm based on an improved Laplace operator, aiming at fast, inexpensive and non-contact 3D reconstruction of microscopic object surfaces from images. The algorithm evaluates focus, obtains the focus height and reconstructs the 3D shape by calculating the improved Laplace operator value of each pixel, extending the morphology observation and analysis of microscopic objects from 2D to 3D. A large number of experimental results and user experience feedback demonstrate the effectiveness and practical value of the algorithm.
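
    A minimal sketch of shape-from-focus with a modified-Laplacian focus measure, in the spirit of the method above: for a stack of images captured at known focus heights, each pixel is assigned the height of the slice with the strongest local response. The window size and the exact operator variant are assumptions.

    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    def modified_laplacian(img: np.ndarray) -> np.ndarray:
        kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float)
        return np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))

    def depth_from_focus(stack: np.ndarray, heights: np.ndarray, window: int = 9) -> np.ndarray:
        """stack: (N, H, W) focus stack; heights: (N,) focus position of each slice."""
        measures = np.stack([uniform_filter(modified_laplacian(s), window) for s in stack])
        best = measures.argmax(axis=0)            # index of the sharpest slice per pixel
        return heights[best]                      # height map approximating the 3D surface

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stack = rng.random((5, 64, 64))           # toy stand-in for a microscope focus stack
        z_map = depth_from_focus(stack, np.linspace(0.0, 20.0, 5))
        print(z_map.shape, float(z_map.min()), float(z_map.max()))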

  1. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes

  2. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    DEFF Research Database (Denmark)

    Harder, Stine

    aid. We verify the directional filters optimized from simulated HRTFs based on a listener-specific head model against two sets of optimal filters. The first set of optimal filters is calculated from HRTFs measured on a 3D printed version of the head model. The second set of optimal filters...... individuals who deviate from an average of the population could benefit from having individualized filters. We developed a pipeline for 3D printing of full size human heads. The 3D printed head facilitated the second verification step, which revealed a 0.3 dB reduction from optimal to simulated directional...... filters. This indicates that the simulations are more similar to measurements on the 3D printed head than measurements on the human subject. We suggest that the larger difference between simulation and human measurements could arise due to small geometrical errors in the head model or due to differences...

  3. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  4. Autostereoscopic 3D projection display based on two lenticular sheets

    Institute of Scientific and Technical Information of China (English)

    Lin Qi; Qionghua Wang; Jiangyong Luo; Aihong Wang; Dong Liang

    2012-01-01

    We propose an autostereoscopic three-dimensional (3D) projection display. The display consists of four projectors, a projection screen, and two lenticular sheets. The operation principle and calculation equations are described in detail and the parallax images are corrected by means of homography. A 50-inch autostereoscopic 3D projection display prototype is developed. The normalized luminance distributions of viewing zones from the simulation and the measurement are given. Results agree well with the designed values. The proposed prototype presents full-resolution 3D images similar to the conventional prototype based on two parallax barriers. Moreover, the proposed prototype shows considerably higher brightness and efficiency of light utilization.

  5. A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement

    Science.gov (United States)

    Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.

    2015-03-01

    The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit level, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low light environments. It has been fabricated in a standard 0.18μm CMOS process, i.e. without high voltage or low noise features. In these circumstances, we are facing a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) a time-gated SPAD front-end with a fast active-quenching/recharge circuit featuring tunable dead-time, ii) a reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage controlled ring oscillator with fast start-up, and iv) a global calibration scheme against temperature and process variation. Measurement results of individual SPAD-TDC ensemble jitter, array uniformity and time resolution programmability are also provided.

  6. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    Science.gov (United States)

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision.

  7. SU-E-T-300: Dosimetric Comparision of 4D Radiation Therapy and 3D Radiation Therapy for the Liver Tumor Based On 4D Medical Image

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C; Yin, Y [Shandong Tumor Hospital, Jinan, Shandong Provice (China)

    2015-06-15

    Purpose: The purpose of this work was to determine the dosimetric benefit to normal tissues of tracking the liver tumor dose in four dimensional radiation therapy (4DRT) on ten phases of four dimensional computed tomography (4DCT) images. Methods: For ten liver cancer patients, plans tracking the target in each phase with the beam aperture were converted to a cumulative plan and compared to the 3D plan with a merged target volume based on the 4DCT image in the radiation treatment planning system (TPS). The change in normal tissue dose was evaluated using the parameters V5, V10, V15, V20, V25, V30, V35 and V40 (volumes receiving 5, 10, 15, 20, 25, 30, 35 and 40 Gy, respectively) in the dose-volume histogram for the liver; the mean dose for the liver, left kidney and right kidney; and the maximum dose for the bowel, duodenum, esophagus, stomach and heart. Results: There was a significant difference between the 4D PTV (average 115.71 cm3) and the ITV (169.86 cm3). When the planning objective is 95% of the PTV volume covered by the prescription dose, the mean doses for the liver, left kidney and right kidney decreased by an average of 23.13%, 49.51%, and 54.38%, respectively. The maximum doses for the bowel, duodenum, esophagus, stomach and heart decreased by an average of 16.77%, 28.07%, 24.28%, 4.89%, and 4.45%, respectively. Compared to 3D RT, the irradiated liver volumes V5, V10, V15, V20, V25, V30, V35 and V40 showed a significant decrease with the 4D plans (P≤0.05). Conclusion: The 4D planning method creates plans that permit better sparing of the normal structures than the commonly used ITV method, while delivering the same dosimetric effect to the target.
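    The comparison above rests on standard dose-volume-histogram metrics. The sketch below shows how such Vx values and the mean/maximum doses could be computed from a dose grid and a structure mask; the array names `dose` and `mask` and the uniform-voxel assumption are illustrative, not taken from the abstract.

```python
# Vx is the percentage of a structure's volume receiving at least x Gy.
# `dose` (Gy) and `mask` (boolean, e.g. liver) are hypothetical inputs on the
# same 3D grid; voxel volumes are assumed uniform.
import numpy as np

def dvh_metrics(dose, mask, thresholds=(5, 10, 15, 20, 25, 30, 35, 40)):
    d = dose[mask]
    vx = {f"V{t}": 100.0 * np.mean(d >= t) for t in thresholds}  # % of volume
    return {"mean_Gy": float(d.mean()), "max_Gy": float(d.max()), **vx}

def percent_decrease(value_3d, value_4d):
    # Relative change between the 3D (ITV-based) and cumulative 4D plans.
    return 100.0 * (value_3d - value_4d) / value_3d
```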

  8. WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Cai, W; Hurwitz, M; Rottmann, J; Myronakis, M; Cifter, F; Berbeco, R; Lewis, J [Brigham and Women’s Hospital, Boston, MA (United States); Williams, C [Harvard Medical School, Cambridge, MA (United States); Mishra, P [Varian Medical Systems, Palo Alto, CA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States)

    2015-06-15

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment, and to use them to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the resulting PCA coefficients through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13 respectively in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78 respectively in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential
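    The core of the method is a PCA motion model built from the phase-to-reference displacement vector fields (DVFs), whose coefficients are then optimized against the kV projections. A minimal sketch of the model-building step is given below, assuming a hypothetical `dvfs` array of deformable-registration results; the projection-matching optimization loop is not shown.

```python
# Sketch of building a PCA motion model from 4DCBCT displacement vector fields.
# `dvfs` is a hypothetical array of shape (n_phases, nx, ny, nz, 3): the
# deformation of each phase to a reference phase.
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    n = dvfs.shape[0]
    X = dvfs.reshape(n, -1)                      # one flattened DVF per phase
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]               # principal motion modes
    return mean, components

def dvf_from_weights(mean, components, weights):
    # Displacement field for a given set of PCA coefficients (the quantities
    # optimized against the kV projections); reshape to (nx, ny, nz, 3) to use.
    return mean + weights @ components
```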

  9. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01

    Circuits and Systems, vol. 1 (2), 2007, pp. 116-127. • O. Dandekar, C. Castro-Pareja, and R. Shekhar, "FPGA-based real-time 3D image...How low can we go?," presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505. • C. R. Castro-Pareja, O. Dandekar, and R...Venugopal, C. R. Castro-Pareja, and O. Dandekar, "An FPGA-based 3D image processor with median and convolution filters for real-time applications," in

  10. 3D face recognition by applying filter based on geometry image

    Institute of Scientific and Technical Information of China (English)

    蔡亮; 达飞鹏

    2012-01-01

    Aiming at 3D face recognition under expression variation, a feature extraction method based on filtering the geometry image is proposed, together with a design procedure for the optimal convolution filter based on the distribution function of the filtered features of the training samples. First, the facial mesh is mapped into a square planar domain by mesh parameterization, and a 2D geometry image encoding the 3D shape is obtained by linear interpolation. Then, all geometry images in the training set are segmented into local patches, and a differential evolution algorithm is used on each group of patches to design the optimal convolution filters. Finally, the filters are applied to the corresponding patches to extract local features and compute similarity scores, and the final decision is made by fusing these scores. Experiments on the FRGC (Face Recognition Grand Challenge) v2 database show that filtering the geometry image significantly improves both the accuracy and the robustness of the algorithm.
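    As a rough illustration of the matching stage, the sketch below applies a given convolution kernel to corresponding geometry-image patches, scores their similarity by normalized cross-correlation, and fuses the per-patch scores with a simple sum rule. The filter bank (designed by differential evolution in the paper), the patch lists and the fusion rule used here are placeholders.

```python
# Patch-wise filtering and score fusion sketch. `filters[i]` is assumed to be
# the learned kernel for patch i; the paper's actual filters and fusion rule
# are not reproduced.
import numpy as np
from scipy.ndimage import convolve

def patch_similarity(patch_a, patch_b, kernel):
    ra, rb = convolve(patch_a, kernel), convolve(patch_b, kernel)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    denom = np.linalg.norm(ra) * np.linalg.norm(rb) + 1e-12
    return float((ra * rb).sum() / denom)        # normalized cross-correlation

def fused_score(patches_a, patches_b, filters):
    # Simple sum-rule fusion of the per-patch similarity scores.
    return sum(patch_similarity(a, b, k)
               for a, b, k in zip(patches_a, patches_b, filters))
```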

  11. Scalable 3D GIS environment managed by 3D-XML-based modeling

    Science.gov (United States)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper outlines that storage and exchange of geospatial data between databases and different front ends like 3D models, GIS or internet browsers require a standardized format which is capable of representing instances of 3D GIS models, minimizing loss of information during data transfer and reducing interface development efforts. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. 3D data management based on XML is a solution meeting the requirements as stated, which can provide an efficient means for opening a new standard way to create an arbitrary data structure and share it over the Internet. To manage reality-based 3D models, this paper uses 3DXML produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure and graphical display properties. It can be read, written and enriched by standard tools, and allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.

  12. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV Digital Images

    Directory of Open Access Journals (Sweden)

    Zarnowski Aleksander

    2015-12-01

    Full Text Available Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV. Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition.

  13. Diffusible iodine-based contrast-enhanced computed tomography (diceCT): an emerging tool for rapid, high-resolution, 3-D imaging of metazoan soft tissues.

    Science.gov (United States)

    Gignac, Paul M; Kley, Nathan J; Clarke, Julia A; Colbert, Matthew W; Morhardt, Ashley C; Cerio, Donald; Cost, Ian N; Cox, Philip G; Daza, Juan D; Early, Catherine M; Echols, M Scott; Henkelman, R Mark; Herdina, A Nele; Holliday, Casey M; Li, Zhiheng; Mahlow, Kristin; Merchant, Samer; Müller, Johannes; Orsbon, Courtney P; Paluh, Daniel J; Thies, Monte L; Tsai, Henry P; Witmer, Lawrence M

    2016-06-01

    Morphologists have historically had to rely on destructive procedures to visualize the three-dimensional (3-D) anatomy of animals. More recently, however, non-destructive techniques have come to the forefront. These include X-ray computed tomography (CT), which has been used most commonly to examine the mineralized, hard-tissue anatomy of living and fossil metazoans. One relatively new and potentially transformative aspect of current CT-based research is the use of chemical agents to render visible, and differentiate between, soft-tissue structures in X-ray images. Specifically, iodine has emerged as one of the most widely used of these contrast agents among animal morphologists due to its ease of handling, cost effectiveness, and differential affinities for major types of soft tissues. The rapid adoption of iodine-based contrast agents has resulted in a proliferation of distinct specimen preparations and scanning parameter choices, as well as an increasing variety of imaging hardware and software preferences. Here we provide a critical review of the recent contributions to iodine-based, contrast-enhanced CT research to enable researchers just beginning to employ contrast enhancement to make sense of this complex new landscape of methodologies. We provide a detailed summary of recent case studies, assess factors that govern success at each step of the specimen storage, preparation, and imaging processes, and make recommendations for standardizing both techniques and reporting practices. Finally, we discuss potential cutting-edge applications of diffusible iodine-based contrast-enhanced computed tomography (diceCT) and the issues that must still be overcome to facilitate the broader adoption of diceCT going forward.

  14. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  15. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control, choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. But insight into the 3D geometry of the microstructure of materials and measurement of its characteristics are increasingly becoming prerequisites for choosing and designing advanced materials according to desired product properties. This first book on processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  16. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data to be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of that of the control and α1B-knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.

  17. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  18. Volumetric ultrasound panorama based on 3D SIFT.

    Science.gov (United States)

    Ni, Dong; Qul, Yingge; Yang, Xuan; Chui, Yim Pan; Wong, Tien-Tsin; Ho, Simon S M; Heng, Pheng Ann

    2008-01-01

    The reconstruction of a three-dimensional (3D) ultrasound panorama from multiple ultrasound volumes can provide a wide field of view for better clinical diagnosis. Registration of ultrasound volumes has been a key issue for the success of this panoramic process. In this paper, we propose a method to register and stitch ultrasound volumes, which are scanned by a dedicated ultrasound probe, based on an improved 3D Scale Invariant Feature Transform (SIFT) algorithm. We propose methods to exclude artifacts from ultrasound images in order to improve the overall performance in 3D feature point extraction and matching. Our method has been validated on both phantom and clinical data sets of human liver. Experimental results show the effectiveness and stability of our approach, and the precision of our method is comparable to that of position tracker based registration.

  19. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for building three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained in prior processing, the collinearity equations are used to compute the geometric projection between object space and image space and thus to obtain the three-dimensional structure and texture information. An optimization algorithm then selects the best texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Real-image texture reconstruction results show that the method offers a high degree of automation, vivid visual effects and low cost, and provides an effective means for rapid, wide-area reconstruction of real textures for 3D city models.
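    The projection step mentioned above is the classical collinearity mapping from object space into an oblique image. A minimal sketch is given below; the camera parameters (rotation matrix R, projection center X0, focal length f, principal point) are assumed to be known from the aerial triangulation and are purely illustrative here.

```python
# Collinearity-equation projection of DSM points into an oblique image.
import numpy as np

def project_collinearity(points_xyz, R, X0, f, x0=0.0, y0=0.0):
    """Project Nx3 object-space points to image coordinates."""
    d = (points_xyz - X0) @ R.T          # rotate into the camera frame
    x = x0 - f * d[:, 0] / d[:, 2]       # collinearity equations
    y = y0 - f * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])
```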

  20. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters that are always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
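    The sketch below illustrates the general shape of such an adaptive-threshold segmentation: a small k-means separates background from lesion uptake in a lesion-centered sub-volume, and the MTV is the set of voxels above a background-corrected fraction of SUVmax. The threshold fraction, the two-cluster choice and the input names are assumptions, not the calibrated parameters of the paper.

```python
# Adaptive-threshold MTV segmentation sketch. `suv_roi` is a hypothetical
# lesion-centered SUV sub-volume (3-D numpy array).
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    centers = np.percentile(values, np.linspace(5, 95, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def segment_mtv(suv_roi, frac=0.4):
    """Return a boolean MTV mask for the sub-volume."""
    background = kmeans_1d(suv_roi.ravel()).min()   # lowest cluster = background
    threshold = background + frac * (suv_roi.max() - background)
    return suv_roi >= threshold
```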

  1. A 3D high resolution ex vivo white matter atlas of the common squirrel monkey (Saimiri sciureus) based on diffusion tensor imaging.

    Science.gov (United States)

    Gao, Yurui; Parvathaneni, Prasanna; Schilling, Kurt G; Wang, Feng; Stepniewska, Iwona; Xu, Zhoubing; Choe, Ann S; Ding, Zhaohua; Gore, John C; Chen, Li Min; Landman, Bennett A; Anderson, Adam W

    2016-02-27

    Modern magnetic resonance imaging (MRI) brain atlases are high quality 3-D volumes with specific structures labeled in the volume. Atlases are essential in providing a common space for interpretation of results across studies, for anatomical education, and for providing quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, macaque, and several non-primate species (e.g., rat). One notable gap in the literature is the common squirrel monkey - for which the primary published atlases date from the 1960's. The common squirrel monkey has been used extensively as a surrogate for humans in biomedical studies, given its anatomical neuro-system similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels have been previously constructed. This study adds white matter tracts to the atlas. The new atlas includes 49 white matter (WM) tracts, defined using diffusion tensor imaging (DTI) in three animals, and combines these data to define the anatomical locations of these tracts in a standardized coordinate system compatible with previous development. An anatomist reviewed the resulting tracts and the inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80 due to differences in local registration quality and the variation of WM tract position across individuals. However, the combined WM labels from the 3 animals represent the general locations of WM parcels, adding basic connectivity information to the atlas.
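    The reproducibility measure quoted above is the Dice index between corresponding parcel masks after mapping to the common space. A minimal sketch of that computation, with hypothetical boolean masks, is:

```python
# Dice index of a white-matter parcel between two animals in common space.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```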

  3. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with suitable software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing DSM and mosaicing strips. The principle of 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.

  4. Projection-slice theorem based 2D-3D registration

    Science.gov (United States)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
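    The Projection-Slice Theorem underlying the method states that the Fourier transform of a projection of an image equals the central slice (through the origin) of the image's Fourier transform along the same direction. The short numerical check below illustrates this in 2-D with an arbitrary synthetic object; it is an illustration of the theorem, not of the paper's registration code.

```python
# Numerical check of the projection-slice theorem in 2-D.
import numpy as np

img = np.zeros((128, 128))
img[40:90, 55:75] = 1.0                       # arbitrary test object

projection = img.sum(axis=0)                  # project along the y-axis
ft_projection = np.fft.fftshift(np.fft.fft(projection))

ft_image = np.fft.fftshift(np.fft.fft2(img))
central_slice = ft_image[ft_image.shape[0] // 2, :]   # slice through the origin

print(np.allclose(ft_projection, central_slice, atol=1e-8))  # True
```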

  5. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Background: The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods: We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results: The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions: Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
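    The sketch below illustrates the basic computation: fit a polynomial (here only first-order, for brevity) to the DENSE displacements in a local neighbourhood, differentiate the fit to get the displacement gradient, and form the Green-Lagrange strain tensor. The neighbourhood selection, polynomial order and coordinate conventions of the paper are not reproduced.

```python
# Local strain from a first-order (affine) fit of 3-D displacements.
import numpy as np

def local_strain(ref_positions, displacements):
    """ref_positions, displacements: (N, 3) arrays for one neighbourhood."""
    X = np.hstack([ref_positions, np.ones((ref_positions.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)  # (4, 3)
    grad_u = coeffs[:3].T                 # du_i/dX_j from the affine fit
    F = np.eye(3) + grad_u                # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))    # Green-Lagrange strain tensor
```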

  6. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  7. Image Registration Based on Multi View 3D Scanning Data

    Institute of Scientific and Technical Information of China (English)

    刘博文; 童立靖

    2016-01-01

    Three-dimensional laser scanning is a technology that emerged in the 1990s. Owing to the limited measurement range of the equipment and the geometric complexity of the measured object, a single scan cannot produce a complete and accurate model, so the object must be scanned from different viewpoints. Some high-precision scanners can acquire not only 3D point cloud data but also scattered texture images. Based on the principle of texture mapping and reverse-engineering thinking, this paper extracts image information from the point cloud and texture data obtained by the scanner through a series of steps, computes the 2D projection image using the principle of affine transformation, and then extracts the 2D projection image accurately. Finally, the two extracted 2D projection images are registered using a SURF-based feature point extraction algorithm.

  8. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However......, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI......) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions (

  9. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volumes, a new fusion approach using a 3-D wavelet based method is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration and 3-D inverse wavelet transform. In particular, a novel method, Ratio Image Based Spectral Resampling (RIBSR), is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image. A new fusion rule, the Average and Substitution (A&S) rule, is employed as the fusion rule to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than a fusion approach using the 2-D wavelet transform. It is also revealed that the RIBSR method is capable of interpolating the missing data more effectively and correctly, and that the A&S rule can integrate coefficients of source images in the 3-D wavelet domain so as to preserve both spatial and spectral features of the source images more properly.
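    The fragment below sketches a single-level 3-D wavelet fusion of two co-registered cubes using the PyWavelets package; the way the A&S rule is applied here (averaging the pure approximation coefficients and substituting the detail coefficients from one cube) is a simplified stand-in for the rule described in the paper.

```python
# Single-level 3-D wavelet fusion sketch (PyWavelets). `cube_ms` and `cube_hs`
# are hypothetical co-registered 3-D arrays after spatial/spectral resampling.
import pywt

def fuse_3d_wavelet(cube_ms, cube_hs, wavelet="db2"):
    ca, cb = pywt.dwtn(cube_ms, wavelet), pywt.dwtn(cube_hs, wavelet)
    fused = {}
    for key in ca:
        if key == "aaa":                      # approximation: average (A)
            fused[key] = 0.5 * (ca[key] + cb[key])
        else:                                 # details: substitute (S)
            fused[key] = cb[key]
    return pywt.idwtn(fused, wavelet)
```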

  10. 3D mapping from high resolution satellite images

    Science.gov (United States)

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available. Users' needs are constantly increasing and, adapting to this reality, 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have already been common practice; however, one is bound by the computer screen. This is why contemporary digital methods have been developed in order to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the necessary procedures to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery with the use of holography as well as 3D printing methods. As study area the island of Antiparos was chosen, as suitable data were readily available. These data were two stereo pairs of Geoeye-1 and a high resolution DTM of the island. Firstly the theoretical bases of holography and 3D printing are described, the two methods are analyzed and their implementation is explained. In practice, an x-axis-parallax holographic map and a full-parallax (x-axis and y-axis) holographic map of Antiparos Island are created and printed using the holographic method. Moreover, a three-dimensional printed map of the study area has been created using the 3DP (3D printing) method. The results are evaluated for their usefulness and efficiency.

  11. Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonography.

    Science.gov (United States)

    Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

    2014-01-01

    Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in an excessive electronic colon cleansing (ECC) since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on the maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm has shown consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected for our new ECC scheme over the previous one.

  12. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  14. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, architectural and archaeological surveying, etc. Non-contact photogrammetry provides methods to determine 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step by step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching over large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved as parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained, therefore multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points, a more general and useful approach, namely bundle adjustment, is used. At the end, two real cases (an excavation and a tower) have been considered for reconstruction.
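    A two-view core of such a pipeline can be sketched with OpenCV as below (SIFT matching, relative pose from the essential matrix, triangulation). The sketch assumes known intrinsics K and grayscale input images; in the fully uncalibrated case described above the reconstruction is initially only projective, and a bundle adjustment step (not shown) would refine all cameras and points.

```python
# Two-view structure-from-motion sketch with OpenCV.
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)      # 4xN homogeneous
    return (X[:3] / X[3]).T                            # Nx3 sparse point cloud
```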

  15. 3-D model-based tracking for UAV indoor localization.

    Science.gov (United States)

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  16. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    Full Text Available This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) have shown that approximately 60% of the time is spent on solving the computational problem itself, while the major remaining part (40%) is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. A study of the influence of increasing the GPU grid size on computation speed showed the importance of correctly specifying the structure of the parallel computation grid and the general parallelization mechanism.

  17. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  18. The Design and Implementation of 3D Medical Image Reconstruction System Based on VTK and ITK

    Institute of Scientific and Technical Information of China (English)

    刘鹰; 韩利凯

    2011-01-01

    3D image reconstruction is currently an active topic in digital image processing, especially in its application to medical imaging. VascuView3D is a 3D medical image reconstruction system based on VTK and ITK that can be used to build 3D images from 2D image slice files produced by CT and MRI devices. The system implements volume rendering (VR), surface rendering (SR) and multi-planar rendering (MPR) 3D views, as well as CLUT-based coloring of 3D gray-scale images, and many other 3D operations can be performed with this software.
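    As an illustration of the volume-rendering (VR) part of such a system, a minimal VTK pipeline in Python is sketched below; the DICOM directory name and the transfer-function points are placeholders, and the abstract does not state that the original system was built this way.

```python
# Minimal VTK volume-rendering sketch for a stack of CT slices.
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_slices")          # hypothetical DICOM folder

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

color = vtk.vtkColorTransferFunction()        # simple CLUT-style coloring
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500, 1.0, 0.5, 0.3)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```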

  19. PCaAnalyser: a 2D-image analysis based module for effective determination of prostate cancer progression in 3D culture.

    Directory of Open Access Journals (Sweden)

    Md Tamjidul Hoque

    Full Text Available Three-dimensional (3D) in vitro cell based assays for Prostate Cancer (PCa) research are rapidly becoming the preferred alternative to conventional 2D monolayer cultures. 3D assays more precisely mimic the microenvironment found in vivo, and thus are ideally suited to evaluate compounds and their suitability for progression in the drug discovery pipeline. To achieve the desired high throughput needed for most screening programs, automated quantification of 3D cultures is required. Towards this end, this paper reports on the development of a prototype analysis module for an automated high-content-analysis (HCA) system, which allows for accurate and fast investigation of in vitro 3D cell culture models for PCa. The Java based program, which we have named PCaAnalyser, uses novel algorithms that allow accurate and rapid quantitation of protein expression in 3D cell culture. As currently configured, the PCaAnalyser can quantify a range of biological parameters including: nuclei count, nuclei-spheroid membership prediction, and various function-based classifications of peripheral and non-peripheral areas to measure expression of biomarkers and protein constituents known to be associated with PCa progression, as well as segregating cellular objects effectively over a range of signal-to-noise ratios. In addition, the PCaAnalyser architecture is highly flexible, operating as a single independent analysis as well as in batch mode, which is essential for High-Throughput Screening (HTS). Utilising the PCaAnalyser, accurate and rapid analysis is provided in an automated high-throughput manner, and reproducible analysis of the distribution and intensity of well-established markers associated with PCa progression in a range of metastatic PCa cell lines (DU145 and PC3) in a 3D model is demonstrated.

  20. Air-touch interaction system for integral imaging 3D display

    Science.gov (United States)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for the tabletop-type integral imaging 3D display. This system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, multi-layer B-spline surface approximation is used to detect the fingertip and gestures at heights of less than 10 cm above the screen from the input hand image. The proposed system can serve as an effective human-computer interaction method for the tabletop-type 3D display.

  1. Direct 3D Painting with a Metaball-Based Paintbrush

    Institute of Scientific and Technical Information of China (English)

    WAN Huagen; JIN Xiaogang; BAO Hujun

    2000-01-01

    This paper presents a direct 3D painting algorithm for polygonal models in 3D object-space with a metaball-based paintbrush in a virtual environment. The user is allowed to directly manipulate the parameters used to shade the surface of the 3D shape by applying the pigment to its surface with direct 3D manipulation through a 3D flying mouse.

  2. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Science.gov (United States)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  3. Integration of real-time 3D image acquisition and multiview 3D display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  4. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Science.gov (United States)

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  5. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  6. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  7. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
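
    A minimal sketch of a comparable two-view pipeline, for illustration only: the paper uses SURF features, a modified RANSAC and trifocal geometry over image triplets, whereas this sketch substitutes ORB (freely available in OpenCV), OpenCV's stock RANSAC and plain two-view projective triangulation; function names and parameter values are assumptions.

```python
# Two-view projective reconstruction sketch (ORB in place of SURF,
# standard RANSAC in place of the paper's modified variant).
import cv2
import numpy as np

def projective_reconstruction(img1, img2):
    # 1. Detect and describe features.
    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # 2. Match descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # 3. Robustly estimate the epipolar geometry (fundamental matrix).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # 4. Canonical projective cameras: P1 = [I|0], P2 = [[e']x F | e'].
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]                      # left epipole (null vector of F^T)
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])

    # 5. Triangulate inlier matches (projective, up to a 3D homography).
    X = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
    return (X[:3] / X[3]).T          # 3D scene points
```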

  8. Neural Network Based 3D Surface Reconstruction

    CERN Document Server

    Joseph, Vincy

    2009-01-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  9. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    Science.gov (United States)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct the submarine pipeline and its surrounding terrain in the computer using the Horde3D graphics rendering engine on top of the foundation database "submarine pipeline and related landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from its monitoring.

  10. 3D plant phenotyping in sunflower using architecture-based organ segmentation from 3D point clouds

    OpenAIRE

    Gélard, William; Burger, Philippe; Casadebaig, Pierre; Langlade, Nicolas; Debaeke, Philippe; Devy, Michel; Herbulot, Ariane

    2016-01-01

    This paper presents a 3D phenotyping method applied to sunflower, allowing the leaf area of an isolated plant to be computed. This is a preliminary step towards the automated monitoring of leaf area and plant growth through the plant life cycle. First, a model-based segmentation method is applied to 3D data derived from RGB images acquired on sunflower plants grown in pots. The RGB image acquisitions are made all around the isolated plant with a single hand-held standard c...

  11. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  12. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nasehi Tehrani, J; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Guo, X [University of Texas at Dallas, Richardson, TX (United States); Yang, Y [The University of New Mexico, New Mexico, NM (United States)

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift (CPD) for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The DIR-lab 4DCT image datasets (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices forming the lung mesh should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, coherent point drift was applied to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of 10 patients in the DIR-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vector and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the set of vertices of the lung mesh. In this technique, velocity coherence of the lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating the displacement vector and analyzing possible physiological and anatomical changes during treatment.
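
    As a reference for the registration step, below is a minimal sketch of the CPD E-step (soft correspondence probabilities between mesh vertices of two respiration phases); variable names and the outlier weight are illustrative, and the coherence-regularized M-step of the full method is omitted.

```python
# Coherent Point Drift E-step: soft correspondences between vertices of
# the T0 (target) and T50 (moving) lung surface meshes.
import numpy as np

def cpd_responsibilities(X, Y, sigma2, w=0.1):
    """X: (N, 3) target vertices, Y: (M, 3) moving vertices,
    sigma2: current GMM variance, w: outlier/noise weight in [0, 1)."""
    N, M, D = X.shape[0], Y.shape[0], X.shape[1]
    # Squared distances between every moving and target vertex: (M, N).
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    gauss = np.exp(-d2 / (2.0 * sigma2))
    # Uniform component accounting for outliers and noise.
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    P = gauss / (gauss.sum(axis=0, keepdims=True) + c)   # (M, N)
    return P   # P[m, n] = probability that X[n] corresponds to Y[m]
```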

  13. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  14. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  15. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  16. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael. H. F.; Veldhuizen, Albert G.; van Ooijen, Peter. M. A.; Lubbers, Jaap; Burgerhof, Johannes G. M.; Sardjono, Tri A.; Verkerke, Gijbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  17. Extended gray level co-occurrence matrix computation for 3D image volume

    Science.gov (United States)

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis and has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires specific texture analysis computation. In this paper, an extended 2D to 3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes), resulting in 13 independent directions, then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, then the 3D texture features were calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting the 3D texture features from the extended GLCMs approach. It is a simple and comprehensive technique that can contribute to 3D image analysis.
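
    A small sketch of the idea, under simplifying assumptions: instead of slicing the volume over five plane positions as in the paper, co-occurrences are accumulated directly along the 13 unique 3D offset directions, normalized, averaged, and reduced to a few Haralick-style features; names and the quantization scheme are illustrative.

```python
# Averaged 3D co-occurrence matrix over the 13 unique offset directions.
import numpy as np
from itertools import product

DIRECTIONS = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]  # 13 offsets

def shifted_pairs(vol, offset):
    """Return the flattened voxel pairs (a, b) separated by 'offset'."""
    dz, dy, dx = offset
    z0, y0, x0 = max(0, -dz), max(0, -dy), max(0, -dx)
    z1 = vol.shape[0] - max(0, dz)
    y1 = vol.shape[1] - max(0, dy)
    x1 = vol.shape[2] - max(0, dx)
    a = vol[z0:z1, y0:y1, x0:x1]
    b = vol[z0 + dz:z1 + dz, y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    return a.ravel(), b.ravel()

def glcm_3d_features(volume, levels=8, distance=1):
    # Quantize the volume to 'levels' gray levels.
    edges = np.linspace(volume.min(), volume.max(), levels + 1)[1:-1]
    vol = np.digitize(volume, edges)
    acc = np.zeros((levels, levels))
    for d in DIRECTIONS:
        offset = tuple(distance * c for c in d)
        a, b = shifted_pairs(vol, offset)
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a, b), 1)      # count co-occurrences
        glcm += glcm.T                  # make symmetric
        acc += glcm / glcm.sum()        # normalize each direction
    P = acc / len(DIRECTIONS)           # averaged, normalized GLCM
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(),
            "energy": (P ** 2).sum(),
            "homogeneity": (P / (1.0 + np.abs(i - j))).sum()}
```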

  18. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    Science.gov (United States)

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI.

  19. Optical-CT imaging of complex 3D dose distributions

    Science.gov (United States)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel-dosimetry is a relatively new technique with potential to address this imbalance by providing high resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a 1st generation optical-CT scanner capable of high resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated-radiation-therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy of 3% at high dose levels), indicating that further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel-dosimetry.

  20. Virtual reality 3D headset based on DMD light modulators

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  1. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (\emph{cone} or \emph{Compton} transform) that maps a function on $\mathbb{R}^3$ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in $3D$ (the much simpler $2D$ case was considered in a previous publication).
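
    For reference, one common parameterization of the cone (Compton) transform discussed above; the exact weighting and parameterization used in the paper may differ.

```latex
% One common parameterization: integrals over cones with apex u,
% central axis \beta in S^2 and half-opening angle \psi.
\[
  (Cf)(u,\beta,\psi) \;=\;
  \int_{\{x \in \mathbb{R}^3 \,:\, (x-u)\cdot\beta \,=\, |x-u|\cos\psi\}}
  f(x)\, \mathrm{d}S(x),
  \qquad u \in \mathbb{R}^3,\ \beta \in S^2,\ \psi \in \bigl(0,\tfrac{\pi}{2}\bigr).
\]
```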

  2. Combining Street View and Aerial Images to Create Photo-Realistic 3D City Models

    OpenAIRE

    Ivarsson, Caroline

    2014-01-01

    This thesis evaluates two different approaches to using panoramic street view images for creating more photo-realistic 3D city models, compared to 3D city models based only on aerial images. The thesis work has been carried out at Blom Sweden AB with the use of their software and data. The main purpose of this thesis work has been to investigate if street view images can aid in creating more photo-realistic 3D city models on street level through an automatic or semi-automatic approach. Two di...

  3. Dynamic Frames Based Generation of 3D Scenes and Applications

    OpenAIRE

    Kvesić, Anton; Radošević, Danijel; Orehovački, Tihomir

    2015-01-01

    Modern graphic/programming tools like Unity enable the creation of 3D scenes as well as 3D scene based program applications, including a full physical model, motion, sounds, lighting effects etc. This paper deals with the usage of a dynamic frames based generator in the automatic generation of a 3D scene and its related source code. The suggested model makes it possible to specify features of the 3D scene in the form of a textual specification, as well as exporting such features ...

  4. Synthesis of image sequences for Korean sign language using 3D shape model

    Science.gov (United States)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information to and realizing communication with the deaf-mute. The deaf-mute communicate with other people by means of sign language, but most people are unfamiliar with it. The proposed method converts text data into the corresponding image sequences for Korean Sign Language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; it is necessary to construct this general model considering the anatomical structure of the human body. To obtain a personal 3D shape model, the general model is adjusted to the personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise the facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data yields the image sequences of 3D motions.

  5. Model-based optical metrology and visualization of 3-D complex objects

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-li; LI A-meng; ZHAO Xiao-bo; GAO Peng-dong; TIAN Jin-dong; PENG Xiang

    2007-01-01

    This letter addresses several key issues in the process of model-based optical metrology, including three-dimensional (3D) sensing, calibration, registration and fusion of range images, geometric representation, and visualization of the reconstructed 3D model, taking into account the shape measurement of complex 3D structures; some experimental results are presented.

  6. Structured Light-Based 3D Reconstruction System for Plants.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  7. Structured Light-Based 3D Reconstruction System for Plants

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  8. Image Appraisal for 2D and 3D Electromagnetic Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
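
    A compact sketch of the linearized appraisal quantities for a damped least-squares inversion; the Jacobian J, damping weight and noise level are generic placeholders rather than the paper's exact regularization.

```python
# Linearized image appraisal: model resolution matrix R and posterior
# model covariance Cm from the Jacobian at the final model.
import numpy as np

def appraise(J, lam, sigma_d):
    """J: (n_data, n_model) Jacobian, lam: damping weight,
    sigma_d: data noise standard deviation."""
    JtJ = J.T @ J
    # Generalized inverse G = (J^T J + lam I)^{-1} J^T.
    G = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T)
    R = G @ J                        # model resolution matrix
    Cm = (sigma_d ** 2) * (G @ G.T)  # posterior model covariance
    # Columns of R show how a point perturbation spreads (resolution);
    # sqrt(diag(Cm)) maps data noise into parameter standard errors.
    return R, np.sqrt(np.diag(Cm))
```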

  9. Approaches for a 3D assessment of pavement evenness data based on 3D vehicle models

    Directory of Open Access Journals (Sweden)

    Andreas Ueckermann

    2015-04-01

    Full Text Available Pavements are 3D in their shape. They can be captured in three dimensions by modern road mapping equipment, which allows for the assessment of pavement evenness in a more holistic way as opposed to current practice, which divides evenness into longitudinal and transversal components. It makes sense to use 3D vehicle models to simulate the effects of 3D surface data on certain functional criteria like pavement loading, cargo loading and driving comfort. In order to evaluate the three criteria mentioned, two vehicle models have been created: a passenger car used to assess driving comfort and a truck-semitrailer submodel used to assess pavement and cargo loading. The vehicle models and their application to 3D surface data are presented. The results are well in line with existing single-track (planar) models. Their advantage over existing 1D/2D models is demonstrated by the example of driving comfort evaluation. Existing “geometric” limit values for the assessment of longitudinal evenness in terms of the power spectral density could be used to establish corresponding limit values for the dynamic response, i.e. driving comfort, pavement loading and cargo loading. The limit values are well in line with existing limit values based on planar vehicle models. They can be used as guidelines for the proposal of future limit values. The investigations show that the use of 3D vehicle models is an appropriate and meaningful way of assessing 3D evenness data gathered by modern road mapping systems.
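
    For illustration of how a dynamic response is obtained from measured evenness data, here is a minimal planar quarter-car sketch; the paper itself uses full 3D multi-body vehicle models, and all parameter values below are generic textbook numbers, not those of the study.

```python
# Planar quarter-car model driven by a measured longitudinal profile:
# sprung-mass acceleration serves as a simple driving-comfort proxy.
import numpy as np
from scipy.integrate import solve_ivp

ms, mu = 300.0, 40.0          # sprung / unsprung mass [kg]
ks, cs = 20000.0, 1200.0      # suspension stiffness [N/m], damping [Ns/m]
kt = 180000.0                 # tire stiffness [N/m]

def quarter_car(profile, x_step, speed):
    """profile: road heights [m] sampled every x_step [m]; speed in m/s."""
    t_prof = np.arange(len(profile)) * x_step / speed
    road = lambda t: np.interp(t, t_prof, profile)

    def rhs(t, y):
        zs, vs, zu, vu = y
        f_s = ks * (zu - zs) + cs * (vu - vs)   # suspension force on body
        f_t = kt * (road(t) - zu)               # tire force
        return [vs, f_s / ms, vu, (-f_s + f_t) / mu]

    sol = solve_ivp(rhs, (0.0, t_prof[-1]), [0.0, 0.0, 0.0, 0.0], max_step=0.002)
    acc_s = np.gradient(sol.y[1], sol.t)        # sprung-mass acceleration
    return sol.t, acc_s   # e.g. feed RMS(acc_s) into a comfort index
```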

  10. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges...... at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system....

  11. Determining 3D flow fields via multi-camera light field imaging.

    Science.gov (United States)

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
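
    A simplified sketch of the synthetic aperture (SA) refocusing step described above, assuming a planar camera array and a linear mapping from baseline to pixel disparity; the array geometry, calibration and reparameterization details of the actual system are omitted.

```python
# Shift-and-add synthetic-aperture refocusing: each view is shifted in
# proportion to its baseline offset for a chosen focal depth, so points
# on that focal plane align (sharp) while occluders blur away.
import numpy as np
from scipy.ndimage import shift as nd_shift

def sa_refocus(images, offsets, alpha):
    """images: list of (H, W) arrays; offsets: (N, 2) camera positions
    relative to the array center; alpha: maps baseline to pixel
    disparity for the chosen focal plane."""
    stack = []
    for img, (ox, oy) in zip(images, offsets):
        stack.append(nd_shift(img, (alpha * oy, alpha * ox),
                              order=1, mode="nearest"))
    return np.mean(stack, axis=0)   # one slice of the 3D focal stack
```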

  12. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  13. 3D reconstruction of multiple stained histology images

    Directory of Open Access Journals (Sweden)

    Yi Song

    2013-01-01

    Full Text Available Context: Three dimensional (3D) tissue reconstructions from the histology images with different stains allow the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK 7). Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  14. Face recognition based on matching of local features on 3D dynamic range sequences

    Science.gov (United States)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvements in 3D image acquisition technology and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  15. 3D shape measurement with phase correlation based fringe projection

    Science.gov (United States)

    Kühmstedt, Peter; Munckelt, Christoph; Heinze, Matthias; Bräuer-Burchardt, Christian; Notni, Gunther

    2007-06-01

    Here we propose a method for 3D shape measurement by means of phase correlation based fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features. Correlation between the phase values of the images of the two cameras is used for the co-ordinate calculation. This stands in contrast to the sole usage of phase values (phasogrammetry) or classical triangulation (phase values and image co-ordinates, i.e. camera raster values) for the determination of the co-ordinates. The method's main advantage is the insensitivity of the 3D co-ordinates to the absolute phase values. It thus prevents errors in the determination of the co-ordinates and improves robustness in areas with interreflection artefacts and inhomogeneous intensity. A technical advantage is that the accuracy of the 3D co-ordinates does not depend on the projection resolution; the achievable quality of the 3D co-ordinates can therefore be selectively improved by the use of high-quality camera lenses and can benefit from improvements in modern camera technology. The presented stereo based fringe projection with phase correlation makes a flexible, error-tolerant realization of measuring systems possible within different applications like quality control, rapid prototyping, design and CAD/CAM. In the paper the phase correlation method is described in detail. Furthermore, different realizations are shown, i.e. a mobile system for the measurement of large objects and an endoscope-like system for CAD/CAM in the dental industry.
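
    A simplified sketch of phase-based correspondence between two rectified cameras, for illustration: matching columns are found where the unwrapped fringe phase agrees, and disparity is converted to depth with a pinhole/baseline model; rectification, phase unwrapping and the system's calibration model are assumed and not shown.

```python
# Phase-based stereo correspondence along one epipolar line, assuming
# rectified images and monotonically increasing unwrapped phase.
import numpy as np

def phase_correspondence_row(phi1_row, phi2_row):
    """phi1_row, phi2_row: unwrapped phase along the same epipolar line
    of camera 1 and camera 2."""
    cols2 = np.arange(len(phi2_row), dtype=float)
    # Invert phi2(col) by interpolation: for each phase value seen by
    # camera 1, find the subpixel column in camera 2 with equal phase.
    matched_cols2 = np.interp(phi1_row, phi2_row, cols2)
    disparity = np.arange(len(phi1_row)) - matched_cols2
    return disparity

def depth_from_disparity(disparity, focal_px, baseline_m):
    d = np.where(np.abs(disparity) > 1e-6, disparity, np.nan)
    return focal_px * baseline_m / d   # simple pinhole/baseline model
```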

  16. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  17. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
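
    An illustrative sketch in the spirit of the multi-scale minima tracking described above; the exact scale set, tracking rule and watershed bookkeeping of the original algorithm are not reproduced.

```python
# Multi-scale local-minima detection on a range image of a dental
# imprint: candidate feature points are minima that persist across scales.
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def multiscale_minima(range_image, scales=(1, 2, 4, 8), size=5):
    maps = []
    for s in scales:
        smooth = gaussian_filter(range_image, sigma=s)
        is_min = smooth == minimum_filter(smooth, size=size)
        maps.append(is_min)
    persistent = np.logical_and.reduce(maps)   # minima present at all scales
    return np.argwhere(persistent)             # (row, col) feature positions
```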

  18. An adaptive 3-D discrete cosine transform coder for medical image compression.

    Science.gov (United States)

    Tai, S C; Wu, Y G; Lin, C W

    2000-09-01

    In this communication, a new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on the local energy magnitude is used to segment subblocks of the image into different energy levels. Then, those subblocks with the same energy level are gathered to form a 3-D cuboid. Finally, 3-D DCT is employed to compress each 3-D cuboid individually. Simulation results show that the reconstructed images achieve a bit rate lower than 0.25 bit per pixel even when the compression ratios are higher than 35. As compared with the results by JPEG and other strategies, it is found that the proposed method achieves better decoded-image quality than JPEG and the other strategies.
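
    A minimal sketch of the core transform step (3-D DCT of a gathered cuboid followed by uniform quantization); the energy-based segmentation, cuboid gathering and entropy coding of the full method are omitted, and the quantization step size is an arbitrary example value.

```python
# 3-D DCT of a cuboid of gathered subblocks, with uniform quantization.
import numpy as np
from scipy.fft import dctn, idctn

def code_cuboid(cuboid, q_step=16.0):
    """cuboid: (D, H, W) array of same-energy-level subblocks."""
    coeffs = dctn(cuboid.astype(float), norm="ortho")   # forward 3-D DCT
    return np.round(coeffs / q_step)                    # uniform quantizer

def decode_cuboid(quantized, q_step=16.0):
    return idctn(quantized * q_step, norm="ortho")      # inverse 3-D DCT

# A crude bit-rate proxy is the fraction of nonzero quantized
# coefficients: np.count_nonzero(q) / q.size
```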

  19. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
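
    A rough sketch of the phase-similarity evaluation via the FFT, under simplifying assumptions: gradient directions are encoded as unit complex numbers and cross-correlated with a complex template of unit normals to the projected model edges; normalization, the orientation sweep and handling of contrast polarity are omitted.

```python
# FFT-based evaluation of a phase-similarity match surface over all
# model positions, for one model orientation.
import numpy as np

def phase_match_surface(image, edge_template):
    """image: (H, W) grayscale; edge_template: (H, W) complex array of
    unit vectors normal to projected model edges (0 elsewhere)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unit complex numbers encoding the gradient phase at each pixel.
    phase_field = np.where(mag > 0, (gx + 1j * gy) / np.maximum(mag, 1e-12), 0)
    # Cross-correlation via the FFT: high values where gradient phases
    # line up with the template's edge-normal phases.
    score = np.fft.ifft2(np.fft.fft2(phase_field) * np.conj(np.fft.fft2(edge_template)))
    return np.real(score)   # match surface vs. model position
```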

  20. 3D Lunar Terrain Reconstruction from Apollo Images

    Science.gov (United States)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  1. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Science.gov (United States)

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry-imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge preserving image denoising, 3D spatial segmentation as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described generating a set of 3 different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object performed by image

  2. A high-level 3D visualization API for Java and ImageJ

    Directory of Open Access Journals (Sweden)

    Longair Mark

    2010-05-01

    Full Text Available Background: Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results: Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions: Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  3. 3D-2D Medical Image Registration Based on Near Projective Invariance

    Institute of Scientific and Technical Information of China (English)

    秦中元; 牟轩沁

    2006-01-01

    The 3D-2D registration problem between MRA and DSA images of the same patient is studied, and a registration algorithm based on near projective invariance is proposed: under the same viewing angle, the projection of the 3D vessel skeleton extracted from a patient's MRA remains consistent with the 2D skeleton extracted from the DSA. By defining a cost function between the two and applying Newton's iterative method, the optimal viewing-angle parameters are obtained. Experiments on clinically acquired MRA and DSA data yielded satisfactory results.

  4. FELIX 3D display: an interactive tool for volumetric imaging

    Science.gov (United States)

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

    The FELIX 3D display belongs to the class of volumetric displays using the swept volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate with several identical or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy to transport system. It mainly consists of inexpensive standard, off-the-shelf components for an easy implementation. This setup makes it a powerful and flexible tool to keep pace with today's rapid technological progress. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer aided design as well as scientific data visualization.

  5. A novel modeling method for manufacturing hearing aid using 3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data for the hearing aid ear shell manufacturing method using a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard STL (STereoLithography) files which can be recognized by a 3D printer. In this 3D modeling method, a conventional ear model was prepared, and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structure was prepared by using the DICOM images. The results showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, eventually enabling the hearing aid ear shell shape to be printed out.

  6. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation.

  7. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included (a) the definition of parameters for the point cloud filtering and the creation of a reference model, (b) the radiometric editing of images, followed by the creation of three improved models, and (c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  8. High-Resolution Imaged-Based 3D Reconstruction Combined with X-Ray CT Data Enables Comprehensive Non-Destructive Documentation and Targeted Research of Astromaterials

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2014-01-01

    Providing web-based data of complex and sensitive astromaterials (including meteorites and lunar samples) in novel formats enhances existing preliminary examination data on these samples and supports targeted sample requests and analyses. We have developed and tested a rigorous protocol for collecting highly detailed imagery of meteorites and complex lunar samples in non-contaminating environments. These data are reduced to create interactive 3D models of the samples. We intend to provide these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/.

  9. Effective incorporation of spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    Science.gov (United States)

    Zheng, Guoyan

    2008-01-01

    This paper addresses the problem of estimating the 3D rigid pose of a CT volume of an object from its 2D X-ray projections. We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information and its robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on X-ray and CT datasets of a plastic phantom and a cadaveric spine segment.

  10. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A principal component analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...

  11. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Directory of Open Access Journals (Sweden)

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between surfometric and volumetric principles. The literature review is as broad as possible, covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  12. Development of 3D microwave imaging reflectometry in LHD (invited).

    Science.gov (United States)

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO.

  13. Fast vision-based catheter 3D reconstruction

    Science.gov (United States)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  14. 3D soft tissue imaging with a mobile C-arm.

    Science.gov (United States)

    Ritter, Dieter; Orman, Jasmina; Schmidgunst, Christian; Graumann, Rainer

    2007-03-01

    We introduce a clinical prototype for 3D soft tissue imaging to support surgical or interventional procedures based on a mobile C-arm. An overview of required methods and materials is followed by first clinical images of animals and human patients including dosimetry. The mobility and flexibility of 3D C-arms gives free access to the patient and therefore avoids relocation of the patient between imaging and surgical intervention. Image fusion with diagnostic data (MRI, CT, PET) is demonstrated and promising applications for brachytherapy, RFTT and others are discussed.

  15. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper describes the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact scanners and contactless ones, the latter being the most widely used although they are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed depending on the three-dimensional surface of the solid. Digital image processing is used to analyse the deformation detected by the camera, which allows the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan performed. The results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
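
    The depth recovery step described above boils down to classic active triangulation: the horizontal displacement of the laser line in the image is converted into depth using the camera-laser geometry. The following minimal Python sketch illustrates that relation only; the focal length, baseline and laser angle are illustrative assumptions, not values from the prototype, and the paper's own Matlab processing is not reproduced.

```python
import numpy as np

def laser_triangulate(pixel_offsets, focal_px=800.0, baseline_m=0.15,
                      laser_angle_rad=np.radians(30)):
    """Convert pixel displacements of a laser line into depth values.

    pixel_offsets : horizontal image coordinates (pixels) of the laser line
                    relative to the principal point.
    The geometry (focal length, camera-laser baseline, laser angle) is an
    illustrative assumption; a real scanner would calibrate these values.
    """
    pixel_offsets = np.asarray(pixel_offsets, dtype=float)
    # For a laser source offset by the baseline and tilted by laser_angle,
    # depth is inversely proportional to the observed disparity.
    disparity = pixel_offsets + focal_px * np.tan(laser_angle_rad)
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    offsets = np.array([0.0, 2.5, 5.0, 7.5])   # pixels
    print(laser_triangulate(offsets))          # depth in metres
```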

  16. The application of camera calibration in range-gated 3D imaging technology

    Science.gov (United States)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, this technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
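
    Since the abstract is truncated before the calibration method itself is described, the sketch below only illustrates the standard chessboard-based calibration available in OpenCV, which recovers the intrinsic parameters (including the focal length that the paper needs to invert slice images into 3-D coordinates). The chessboard size and image folder are assumptions, and this is not the paper's line-based zoom-lens calibration.

```python
import glob
import cv2
import numpy as np

# Standard chessboard calibration, shown as a general illustration only.
PATTERN = (9, 6)                      # inner chessboard corners (assumption)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):                 # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]                 # (width, height)

if img_points:
    # Recover the intrinsics (incl. the focal length) and lens distortion.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
    print("camera matrix:\n", K)
```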

  17. 3D imaging of neutron tracks using confocal microscopy

    Science.gov (United States)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  18. 3D reconstruction based on spatial vanishing information

    Institute of Scientific and Technical Information of China (English)

    Yuan Shu; Zheng Tan

    2005-01-01

    An approach for the three-dimensional (3D) reconstruction of architectural scenes from two uncalibrated images is described in this paper. From two views of one architectural structure, three pairs of corresponding vanishing points of three mutually orthogonal directions can be extracted. The simple but powerful constraints of parallel and orthogonal lines in architectural scenes can be used to calibrate the cameras and to recover the 3D information of the structure. This approach is applied to real images of architectural scenes, and a 3D model of a building in Virtual Reality Modelling Language (VRML) format is presented, illustrating the successful performance of the method.
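
    The key geometric primitive in this approach is the vanishing point of a set of scene-parallel lines. A minimal sketch of how such a point can be estimated from two image line segments using homogeneous coordinates is given below; the segment coordinates are illustrative assumptions, and the full camera calibration from three orthogonal vanishing points is not reproduced.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    """Intersection of two image segments that are parallel in 3D.

    Each segment is ((x1, y1), (x2, y2)); returns the vanishing point in
    inhomogeneous image coordinates.
    """
    v = np.cross(line_through(*seg1), line_through(*seg2))
    return v[:2] / v[2]

if __name__ == "__main__":
    # Two image segments of building edges that are parallel in the scene
    # (coordinates are illustrative assumptions).
    vp = vanishing_point(((100, 400), (300, 380)), ((120, 500), (320, 460)))
    print("vanishing point:", vp)
```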

  19. 3D-2D ultrasound feature-based registration for navigated prostate biopsy: a feasibility study.

    Science.gov (United States)

    Selmi, Sonia Y; Promayon, Emmanuel; Troccaz, Jocelyne

    2016-08-01

    The aim of this paper is to describe a 3D-2D ultrasound feature-based registration method for navigated prostate biopsy and its first results obtained on patient data. A system combining a low-cost tracking system and a 3D-2D registration algorithm was designed. The proposed 3D-2D registration method combines geometric and image-based distances. After extracting features from ultrasound images, 3D and 2D features within a defined distance are matched using an intensity-based function. The results are encouraging and show acceptable errors with simulated transforms applied on ultrasound volumes from real patients.

  20. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used for 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values and decoded AC coefficients are combined in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
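
    To make the transform stage of step (1) concrete, the sketch below applies a two-level DWT and then a DCT to the resulting low-frequency band using PyWavelets and SciPy. It is only a loose illustration of the decomposition; the Minimize-Matrix-Size, arithmetic-coding and FMS decompression stages of the proposed algorithm are not reproduced, and the wavelet choice ("haar") is an assumption.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def two_level_dwt_then_dct(image):
    """Transform stage only: two-level DWT, then a DCT on the low-pass band.

    Returns the DCT of the level-2 approximation (a loose analogue of the
    DC-Matrix) and the high-frequency sub-bands from both levels.
    """
    cA1, high1 = pywt.dwt2(image, "haar")      # level-1 DWT
    cA2, high2 = pywt.dwt2(cA1, "haar")        # level-2 DWT on the approximation
    dc_matrix = dctn(cA2, norm="ortho")        # DCT of the low-frequency band
    return dc_matrix, high1, high2

if __name__ == "__main__":
    img = np.random.rand(256, 256)             # stand-in for a high-resolution image
    dc, h1, h2 = two_level_dwt_then_dct(img)
    print(dc.shape, [b.shape for b in h1], [b.shape for b in h2])
```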

  1. Effective incorporating spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    Science.gov (United States)

    Zheng, Guoyan

    2010-10-01

    This paper addresses the problem of estimating the 3D rigid poses of a CT volume of an object from its 2D X-ray projection(s). We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measures only take intensity values into account without considering spatial information and their robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experiments were conducted on datasets from two applications: (a) intra-operative patient pose estimation from a limited number (e.g. 2) of calibrated fluoroscopic images, and (b) post-operative cup orientation estimation from a single standard X-ray radiograph with/without gonadal shielding. The experiment on intra-operative patient pose estimation showed a mean target registration accuracy of 0.8 mm and a capture range of 11.5 mm, while the experiment on estimating the post-operative cup orientation from a single X-ray radiograph showed a mean accuracy below 2 degrees for both anteversion and inclination. More importantly, results from both experiments demonstrated that the newly derived similarity measures were robust to occlusions in the X-ray image(s).
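
    For readers unfamiliar with the baseline similarity measure that both versions of this work start from, the sketch below computes the standard histogram-based mutual information between two images. The variational Kullback-Leibler approximation and the Markov random field spatial prior that constitute the papers' actual contribution are not reproduced here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Standard (non-variational) mutual information between two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                        # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image B
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    a = np.random.rand(128, 128)
    b = a + 0.05 * np.random.rand(128, 128)        # roughly aligned image
    c = np.random.rand(128, 128)                   # unrelated image
    print(mutual_information(a, b), mutual_information(a, c))
```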

  2. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Science.gov (United States)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material condition over time and deteriorate an artwork. The article presents an ongoing project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify deformations and damage.

  3. 3D CARS image reconstruction and pattern recognition on SHG images

    Science.gov (United States)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated image classification of nonlinear optical images, still bears great possibilities for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed e.g. in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, based respectively on a statistical analysis of the mutual organization of fibers and on the FFT, are presented.

  4. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    Science.gov (United States)

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  5. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ruisong Ye

    2012-07-01

    Full Text Available An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is embedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image processing attacks, which demonstrates strong robustness against a variety of attacks.
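
    The pixel-permutation idea can be illustrated with any chaotic map whose orbit defines a shuffling order. The sketch below uses the one-dimensional logistic map purely as a stand-in, since the abstract does not give the definition of the 3D sawtooth map; the gray-value diffusion and the wavelet-domain embedding into a host image are likewise omitted.

```python
import numpy as np

def chaotic_permutation(n, x0=0.37, r=3.99):
    """Return a permutation of range(n) driven by a chaotic orbit.

    The logistic map is used here only as a stand-in for the paper's
    3D sawtooth map, whose definition the abstract does not give.
    """
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return np.argsort(x)                      # orbit order defines the shuffle

def permute_image(img, key=(0.37, 3.99)):
    flat = img.ravel()
    perm = chaotic_permutation(flat.size, *key)
    return flat[perm].reshape(img.shape), perm

def unpermute_image(scrambled, perm):
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()            # invert the shuffle
    return flat.reshape(scrambled.shape)

if __name__ == "__main__":
    secret = (np.random.rand(64, 64) * 255).astype(np.uint8)
    scrambled, perm = permute_image(secret)
    restored = unpermute_image(scrambled, perm)
    assert np.array_equal(secret, restored)
```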

  6. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscope reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaption. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  7. Fruit bruise detection based on 3D meshes and machine learning technologies

    Science.gov (United States)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging, with 10-fold cross validation used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging achieves better classification accuracy than bruise detection based on 2-D imaging.
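
    The classification stage described above (a support vector machine evaluated with 10-fold cross validation) can be sketched with scikit-learn as follows. The feature matrix here is random stand-in data with an assumed LBP histogram length; the mesh-based local binary pattern extraction of the paper is not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: rows mimic local-binary-pattern histograms extracted from
# 3-D meshes, and labels mark bruised (1) vs. sound (0) apples.
rng = np.random.default_rng(0)
X = rng.random((120, 59))              # 59-bin LBP histograms (assumed size)
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross validation
print("mean accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```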

  8. Experiments on terahertz 3D scanning microscopic imaging

    Science.gov (United States)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies on THz coaxial transmission confocal microscopy, but little research on THz dual-axis reflective confocal microscopy has been reported. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with the THz coaxial transmission confocal microscope, the microscope adopted in this paper can attain higher axial resolution at the expense of reduced lateral resolution, revealing more satisfying 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet metal with three layers were scanned. The experimental results indicate that the system can resolve the two Chinese characters "Zhong" and "Hua" as well as the three layers of the combined sheet metal. It can be expected that the microscope will be applied to biology, medicine and other fields in the future due to its favorable 3D imaging capability.

  9. Vertex-based diffusion for 3-D mesh denoising.

    Science.gov (United States)

    Zhang, Ying; Ben Hamza, A

    2007-04-01

    We present a vertex-based diffusion for 3-D mesh denoising obtained by solving a nonlinear discrete partial differential equation. The core idea behind the proposed technique is to use geometric insight to help construct an efficient and fast 3-D mesh smoothing strategy that fully preserves the geometric structure of the data. Illustrative experimental results demonstrate a much improved performance of the proposed approach in comparison with existing methods currently used in 3-D mesh smoothing.
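
    As a point of reference for the diffusion idea, the sketch below implements plain uniform Laplacian (umbrella-operator) vertex diffusion on a triangle mesh. The paper's contribution lies in the geometric, feature-preserving weighting of this process, which is not reproduced; the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def vertex_diffusion(vertices, faces, steps=10, lam=0.5):
    """Uniform Laplacian (umbrella-operator) diffusion of mesh vertices.

    Linear baseline only; a feature-preserving method would weight the
    neighbor averaging using local geometry.
    """
    v = np.asarray(vertices, dtype=float).copy()
    neighbors = [set() for _ in range(len(v))]
    for a, b, c in faces:                        # build vertex adjacency
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    for _ in range(steps):
        new_v = v.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                centroid = v[list(nbrs)].mean(axis=0)
                new_v[i] = v[i] + lam * (centroid - v[i])   # diffuse toward neighbors
        v = new_v
    return v

if __name__ == "__main__":
    verts = np.array([[0, 0, 0], [1, 0, 0.2], [0, 1, -0.1], [1, 1, 0.05]], float)
    faces = [(0, 1, 2), (1, 3, 2)]
    print(vertex_diffusion(verts, faces, steps=5))
```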

  10. 3D-2D ultrasound feature-based registration for navigated prostate biopsy: A feasibility study

    OpenAIRE

    Selmi, Sonia,; Promayon, Emmanuel; Troccaz, Jocelyne

    2016-01-01

    International audience; The aim of this paper is to describe a 3D-2D ultrasound feature-based registration method for navigated prostate biopsy and its first results obtained on patient data. A system combining a low-cost tracking system and a 3D-2D registration algorithm was designed. The proposed 3D-2D registration method combines geometric and image-based distances. After extracting features from ultrasound images, 3D and 2D features within a defined distance are matched using an intensity...

  11. Deformation analysis of 3D tagged cardiac images using an optical flow method

    Directory of Open Access Journals (Sweden)

    Gorman Robert C

    2010-03-01

    Full Text Available Abstract Background This study proposes and validates a method of measuring 3D strain in myocardium using a 3D Cardiovascular Magnetic Resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods Initially, a 3D tag MR sequence was developed and the parameters of the sequence and the 3D OFM were optimized using phantom images with simulated deformation. The method was then validated in vivo and utilized to quantify normal sheep left ventricular function. Results Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square errors (RMS) between the estimated and known displacements in the x (RMSx = 0.62 pixels, 0.43 mm), y (RMSy = 0.64 pixels, 0.45 mm) and z (RMSz = 0.68 pixels, 1 mm) directions, respectively. In-vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motions resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion A novel method was developed to measure 3D LV wall deformation rapidly with high in-plane and through-plane resolution from one 3D cine acquisition.

  12. 3D city models completion by fusing lidar and image data

    Science.gov (United States)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. In particular, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, which is optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.
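
    The 2D feature extraction and matching step between the two orthophotos can be sketched with OpenCV as shown below. SIFT with a Lowe ratio test is used here; the file names are placeholders, and the subsequent interpolation on the Lidar surface, backprojection and bundle adjustment are not reproduced.

```python
import cv2
import numpy as np

# File names are illustrative assumptions, not the paper's data.
ref = cv2.imread("lidar_orthomosaic.png", cv2.IMREAD_GRAYSCALE)
new = cv2.imread("aerial_orthophoto.png", cv2.IMREAD_GRAYSCALE)
if ref is None or new is None:
    raise SystemExit("replace the placeholder paths with real orthophotos")

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_new, des_new = sift.detectAndCompute(new, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_ref, des_new, k=2)

# Lowe ratio test keeps distinctive correspondences; these are the points
# that would later receive 3D coordinates by interpolation on the Lidar DSM.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in good])
pts_new = np.float32([kp_new[m.trainIdx].pt for m in good])
print(len(good), "correspondences")
```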

  13. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Science.gov (United States)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, systems for the 3D capture of archives in museums and galleries have become increasingly required. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame image taken by a 3D digitizer are also presented.
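
    The Phong term used in the gonio-photometric analysis can be written down compactly. The sketch below evaluates the standard Phong reflection model for a single surface point under varying illumination angles; the reflection coefficients are illustrative assumptions, and the dichromatic body/interface decomposition applied in the paper is not reproduced.

```python
import numpy as np

def phong_reflection(normal, light_dir, view_dir,
                     kd=0.7, ks=0.3, shininess=20.0, ambient=0.05):
    """Scalar Phong reflection for one surface point and one light.

    All direction vectors point away from the surface; the coefficients are
    illustrative, not values estimated in the paper.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = kd * max(float(n @ l), 0.0)
    r = 2.0 * (n @ l) * n - l                      # mirror reflection of the light
    specular = ks * max(float(r @ v), 0.0) ** shininess
    return ambient + diffuse + specular

if __name__ == "__main__":
    # Vary the illumination angle, as in the seven-angle gonio acquisition.
    view = np.array([0.0, 0.0, 1.0])
    for deg in (0, 15, 30, 45, 60, 75, 90):
        light = np.array([np.sin(np.radians(deg)), 0.0, np.cos(np.radians(deg))])
        print(deg, round(phong_reflection(np.array([0.0, 0.0, 1.0]), light, view), 3))
```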

  14. Computer-based image analysis in radiological diagnostics and image-guided therapy 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    CERN Document Server

    Beier, J

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with a particular emphasis on pulmonary themes. For a multitude of purposes the developed methods and procedures can be directly transferred to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the softw...

  15. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Science.gov (United States)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system due to its capability to be deployed on different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process then continued by publishing to a web server in order to compare the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is preferable for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This benefits decision makers and planners in this field in deciding which contour interval is applicable for their task.

  16. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  17. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Science.gov (United States)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  18. 2D-3D Medical Image Registration Based on GPU

    Institute of Scientific and Technical Information of China (English)

    党建武; 杭利华; 王阳萍; 杜晓刚

    2013-01-01

    In 2D-3D medical image registration, digitally reconstructed radiograph (DRR) generation and the computation of the similarity measure between the DRR and the X-ray image are the most important and most time-consuming registration steps. To address the large amount of computation in the registration procedure, this paper combines pattern intensity with gradient information to simplify the similarity computation, uses GPU multithreaded parallel computing to generate the DRR and compute the similarity on the GPU, and introduces gradient descent and multi-resolution strategies into the registration optimization procedure to complete the registration process. Compared with other similarity measures and with registration on the CPU, this method maintains registration precision while greatly improving registration speed.

  19. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed-sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed-sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
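
    The iterative reconstruction baseline mentioned above (ART) amounts to Kaczmarz-style sweeps over the projection equations. The sketch below shows this on a toy system matrix; it is not the simulator's C++ implementation, and the total-variation step of ART+TV is only indicated in the docstring.

```python
import numpy as np

def art_reconstruction(A, b, n_iter=10, relax=0.5):
    """Algebraic reconstruction technique (Kaczmarz sweeps) for A x = b.

    A is the projection matrix, b the measured projections and x the unknown
    voxel values; ART+TV would add a total-variation minimization step after
    each sweep, which is omitted here.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                residual = b[i] - A[i] @ x
                x += relax * residual / row_norms[i] * A[i]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_true = rng.random(50)                 # toy "phantom"
    A = rng.random((40, 50))                # underdetermined, as in limited-angle DBT
    b = A @ x_true
    x_rec = art_reconstruction(A, b, n_iter=50)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```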

  20. 3D imaging and wavefront sensing with a plenoptic objective

    Science.gov (United States)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to mitigate the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, as well as two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive-index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  1. The Acquisition System of 3D Point Cloud Based on Image with Laser

    Institute of Scientific and Technical Information of China (English)

    王震; 刘进

    2013-01-01

    A 3D point cloud acquisition system can obtain the geometric information of a target object quickly and provide a large amount of point cloud data in order to show the object's true 3D shape on a computer. This paper presents a new method of acquiring 3D point cloud data, applied in a self-developed system for object 3D point cloud acquisition based on images with a laser: the calibrated laser plane maps the one-dimensional coordinates of points on the object surface. Through the combined calculation of the one-dimensional coordinate and the resection of single-image photogrammetry, pseudo 3D coordinates of the object points are obtained. Through reverse rotation, the true 3D coordinates are recovered, from which a complete 3D visual model of the target object is constructed.

  2. A 3-D visualization method for image-guided brain surgery.

    Science.gov (United States)

    Bourbakis, N G; Awad, M

    2003-01-01

    This paper deals with a 3D methodology for brain tumor image-guided surgery. The methodology is based on the development of a visualization process that mimics the behavior and decision-making of a human surgeon. In particular, it first constructs a 3D representation of a tumor by using the segmented version of the 2D MRI images. It then develops an optimal path for tumor extraction based on minimizing the surgical effort and penetration area. A cost function, incorporated in this process, minimizes the damage to surrounding healthy tissues taking into consideration the constraints of a new snake-like surgical tool proposed here. The tumor extraction method presented in this paper is compared with the ordinary method used in brain surgery, which is based on a straight-line surgical tool. Illustrative examples based on real simulations present the advantages of the proposed 3D methodology.

  3. 3D texture analysis in renal cell carcinoma tissue image grading.

    Science.gov (United States)

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  4. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    Full Text Available One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  5. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    Science.gov (United States)

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, better understanding needs to be achieved due to the subtle nature of PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as the PSI group and the control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of patients, it can be found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there is a significant improvement in achieving the desired implant position (p < 0.05) with PSI implantation compared against the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  6. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    Science.gov (United States)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
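
    The core fuzzy c-means update alternates between computing weighted cluster centers and recomputing fuzzy memberships. The sketch below implements the standard update on toy feature vectors; the split-off initialization strategy and the multispectral MR data of the study are not reproduced, and the parameter values are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on feature vectors X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centers, U

if __name__ == "__main__":
    # Toy stand-in for multispectral MR voxel intensities (e.g. T1/T2/PD).
    rng = np.random.default_rng(42)
    X = np.vstack([rng.normal(mu, 0.1, (200, 3)) for mu in (0.2, 0.5, 0.8)])
    centers, U = fuzzy_c_means(X, n_clusters=3)
    print(np.round(centers, 2))
```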

  7. NoSQL Based 3D City Model Management System

    Science.gov (United States)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple 3D city representation structure, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. The proposed framework is also easily extensible and supports geoindexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

  8. Structured light 3D tracking system for measuring motions in PET brain imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light, with a DLP projector and a CCD camera, is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure...
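
    For readers unfamiliar with phase-shifting interferometry, the sketch below shows the classic four-step phase retrieval from fringe images shifted by 90°; it is a generic textbook formulation, and it is an assumption that the prototype uses exactly this variant.

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Wrapped phase from four fringe images shifted by 0, 90, 180 and 270 degrees."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic fringes standing in for camera captures of the projected patterns.
    x = np.linspace(0, 4 * np.pi, 640)
    true_phase = np.tile(x, (480, 1))
    frames = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]
    wrapped = four_step_phase(*frames)        # still needs unwrapping before triangulation
    unwrapped = np.unwrap(wrapped, axis=1)    # simple row-wise unwrap for this toy case
    ```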

  9. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    Directory of Open Access Journals (Sweden)

    Karim Hammoudi

    2010-12-01

    Full Text Available This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach.
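
    A hedged, minimal sketch of the "optimize an objective over raw brightness with Differential Evolution" idea, using SciPy's general-purpose differential_evolution; the toy two-parameter rendering and the sum-of-squared-differences dissimilarity are placeholders, not the paper's actual model or objective.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    reference = rng.random((64, 64))        # stand-in for an observed aerial patch

    def render(params):
        """Toy 'polyhedral model rendering': a brightness ramp controlled by two parameters."""
        a, b = params
        y, x = np.mgrid[0:64, 0:64] / 64.0
        return np.clip(a * x + b * y, 0, 1)

    def dissimilarity(params):
        # Image-based score: sum of squared brightness differences (no feature extraction).
        return float(np.sum((render(params) - reference) ** 2))

    result = differential_evolution(dissimilarity, bounds=[(0.0, 2.0), (0.0, 2.0)],
                                    seed=1, maxiter=50)
    print("estimated parameters:", result.x, "score:", result.fun)
    ```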

  10. Needle placement for piriformis injection using 3-D imaging.

    Science.gov (United States)

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6%-8% of patients referred for the treatment of back and leg pain. The treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real-time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections roughly tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative, minimizing radiation exposure.

  11. 3D Printing of Carbon Nanotubes-Based Microsupercapacitors.

    Science.gov (United States)

    Yu, Wei; Zhou, Han; Li, Ben Q; Ding, Shujiang

    2017-02-08

    A novel 3D printing procedure is presented for fabricating carbon-nanotubes (CNTs)-based microsupercapacitors. The 3D printer uses a CNTs ink slurry with a moderate solid content and prints a stream of continuous droplets. Appropriate control of a heated base is applied to facilitate the solvent removal and adhesion between printed layers and to improve the structure integrity without structure delamination or distortion upon drying. The 3D-printed electrodes for microsupercapacitors are characterized by SEM, laser scanning confocal microscope, and step profiler. Effect of process parameters on 3D printing is also studied. The final solid-state microsupercapacitors are assembled with the printed multilayer CNTs structures and poly(vinyl alcohol)-H3PO4 gel as the interdigitated microelectrodes and electrolyte. The electrochemical performance of 3D printed microsupercapacitors is also tested, showing a significant areal capacitance and excellent cycle stability.

  12. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    Science.gov (United States)

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-07

    The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (A_c,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate an activity that extends beyond the scanner. The modified IEC phantom was filled with 18F (11 kBq/mL) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations inside the FOV (A_c,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq/mL. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities to provide an A_c,out in the whole scatter phantom of zero, half, unity, twofold and fourfold that of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of A_c,out on CNR, adjusted for the variables (sphere ID, A_c,bkg and ESD) related to CNR. The presence of outside FOV activity at the same concentration as that inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity over the range explored. ESD and A_c,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of the CNR loss due to an elevated A_c,out seems feasible by modulating the ESD in individual bed positions according to A_c,out.
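
    To make the figure of merit concrete, the sketch below computes a CNR for a hot sphere against background and fits a multiple linear regression of CNR on sphere ID, background activity, scan duration and outside-FOV activity with ordinary least squares; the specific CNR definition (contrast over background noise) is one common choice and is an assumption here, as are the synthetic numbers.

    ```python
    import numpy as np

    def cnr(sphere_mean, bkg_mean, bkg_std):
        """One common contrast-to-noise definition: contrast relative to background noise."""
        return (sphere_mean - bkg_mean) / bkg_std

    # Hypothetical design matrix: sphere ID (mm), A_c,bkg (kBq/mL), ESD (min), A_c,out ratio.
    X = np.array([[10, 11.0, 1, 0.0],
                  [17,  9.2, 2, 0.5],
                  [22,  6.6, 3, 1.0],
                  [28,  5.2, 4, 2.0],
                  [37,  3.5, 2, 4.0]], dtype=float)
    y = np.array([1.8, 3.1, 4.6, 5.9, 6.2])   # made-up CNR values for illustration only

    A = np.column_stack([np.ones(len(X)), X])  # add intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept and partial regression coefficients:", coeffs)
    ```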

  13. 3D Image Reconstruction from X-Ray Measurements with Overlap

    CERN Document Server

    Klodt, Maria

    2016-01-01

    3D image reconstruction from a set of X-ray projections is an important image reconstruction problem, with applications in medical imaging, industrial inspection and airport security. The innovation of X-ray emitter arrays allows for a novel type of X-ray scanners with multiple simultaneously emitting sources. However, two or more sources emitting at the same time can yield measurements from overlapping rays, imposing a new type of image reconstruction problem based on nonlinear constraints. Using traditional linear reconstruction methods, respective scanner geometries have to be implemented such that no rays overlap, which severely restricts the scanner design. We derive a new type of 3D image reconstruction model with nonlinear constraints, based on measurements with overlapping X-rays. Further, we show that the arising optimization problem is partially convex, and present an algorithm to solve it. Experiments show highly improved image reconstruction results from both simulated and real-world measurements.

  14. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
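
    A minimal sketch of the parameter-sweep idea, evaluated slice by slice with scikit-image; the noise model, the sigma grid and the use of denoise_bilateral on 2D slices (rather than a CUDA kernel on the full 3D volume) are simplifications for illustration, not the study's implementation.

    ```python
    import numpy as np
    from skimage.restoration import denoise_bilateral
    from skimage.metrics import mean_squared_error, structural_similarity

    rng = np.random.default_rng(0)
    clean = rng.random((8, 128, 128))                   # stand-in for a reference MR volume
    noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)

    best = None
    for sigma_color in (0.05, 0.1, 0.2):
        for sigma_spatial in (1, 2, 4):
            den = np.stack([denoise_bilateral(s, sigma_color=sigma_color,
                                              sigma_spatial=sigma_spatial)
                            for s in noisy])            # filter each slice independently
            mse = mean_squared_error(clean, den)
            ssim = np.mean([structural_similarity(c, d, data_range=1.0)
                            for c, d in zip(clean, den)])
            if best is None or ssim > best[0]:
                best = (ssim, mse, sigma_color, sigma_spatial)

    print("best MSSIM %.3f (MSE %.5f) at sigma_color=%.2f, sigma_spatial=%d" % best)
    ```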

  15. Research on 3D Image Constructing System Based on S3C6410

    Institute of Scientific and Technical Information of China (English)

    吴迪

    2016-01-01

    In this paper, we propose a solution for a 3D image constructing system based on a single camera. The system hardware uses the ARM11 processor S3C6410 as the data processing core, equipped with a CMOS image capture unit and an infrared sensor unit. An embedded Linux operating system serves as the software/hardware management, scheduling and coordination control center of the system. By combining the 3D hardware accelerator built into the ARM11 processor with the OpenGL ES graphics development library, 3D acceleration is realized and a rapid 3D modeling system is constructed.

  16. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  17. FEMUR SHAPE RECOVERY FROM VOLUMETRIC IMAGES USING 3-D DEFORMABLE MODELS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new scheme for femur shape recovery from volumetric images using deformable models was proposed. First, prior 3-D deformable femur models are created as templates using point distribution models technology. Second, active contour models are employed to segment the magnetic resonance imaging (MRI) volumetric images of the tibial and femoral joints and the deformable models are initialized based on the segmentation results. Finally, the objective function is minimized to give the optimal results constraining the surface of shapes.

  18. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    Science.gov (United States)

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm consumes considerable computational time, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. With this approach, the matching time can be greatly reduced. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for creating a model of a nasal cavity imaged with a rigid nasal endoscope.
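
    The zoning idea, matching a feature only against candidates that fall inside a restricted region of the next frame, can be sketched with OpenCV as below; the fixed-radius circular zone and the nearest-neighbour choice stand in for the paper's fuzzy zone membership, which is not reproduced here, and the frame file names are hypothetical.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical endoscopic frames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    zone_radius = 60.0                                          # assumed search-zone size in pixels
    pts2 = np.array([k.pt for k in kp2])

    matches = []
    for i, (k, d) in enumerate(zip(kp1, des1)):
        # Keep only candidate features lying inside the zone around the same position.
        in_zone = np.where(np.linalg.norm(pts2 - np.array(k.pt), axis=1) < zone_radius)[0]
        if len(in_zone) == 0:
            continue
        dists = np.linalg.norm(des2[in_zone] - d, axis=1)
        j = in_zone[int(np.argmin(dists))]
        matches.append((i, int(j), float(dists.min())))

    print(len(matches), "zone-restricted matches")
    ```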

  19. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    Science.gov (United States)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning technology is still being developed, especially in developed countries, and current 3D scanner devices are advanced versions with very high prices. This study presents a simple 3D scanner prototype with a very low investment cost. The prototype consists of a webcam, a rotating desk controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (object pivot). Scanning is performed by imaging the object profile illuminated by the line laser, which is then captured by the camera and processed by a computer using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that after one full turn multiple images covering all sides are obtained. The profile is then extracted from all of the images in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard (gauge block). The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstructed object against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
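
    As a rough sketch of how such a profile scan can be turned into 3D points (in Python rather than the Octave used in the paper), the code below takes the brightest pixel per image row as the laser line, converts it to a radius via an assumed calibration factor, and places the profile at the turntable angle of each capture; the calibration constant, image geometry and placeholder frames are illustrative assumptions.

    ```python
    import numpy as np

    MM_PER_PIXEL = 0.2            # assumed calibration from a gauge-block measurement
    CENTER_COL = 320              # assumed image column of the rotation axis

    def profile_from_image(gray):
        """Brightest column per row approximates the laser line position."""
        cols = gray.argmax(axis=1)
        radii = np.abs(cols - CENTER_COL) * MM_PER_PIXEL
        heights = np.arange(gray.shape[0]) * MM_PER_PIXEL
        return radii, heights

    def profile_to_points(radii, heights, angle_deg):
        a = np.deg2rad(angle_deg)
        return np.column_stack([radii * np.cos(a), radii * np.sin(a), heights])

    # One full turn in 10-degree steps; random frames stand in for webcam captures.
    points = []
    for step in range(36):
        frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # placeholder frame
        r, h = profile_from_image(frame)
        points.append(profile_to_points(r, h, angle_deg=10 * step))
    cloud = np.vstack(points)     # (N, 3) point cloud of the scanned object
    ```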

  20. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Directory of Open Access Journals (Sweden)

    Scott Mark

    2005-03-01

    Full Text Available Abstract Background Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system architecture dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture neutral Java programming language. Results We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  1. Atlas Based Automatic Liver 3D CT Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    刘伟; 贾富仓; 胡庆茂; 王俊

    2011-01-01

    Objective: Liver segmentation and liver volume calculation are important steps for the planning and navigation in liver surgery and for liver pathology research. Accurate, fast and robust automatic segmentation methods for clinical routine data are urgently needed. Because of the liver's characteristics, such as the complexity of its external form and the similarity between the intensities of the liver and the surrounding tissues, automatic segmentation of the liver is one of the difficulties in medical image processing. Methods: In this paper, 3D non-rigid registration from a refined atlas to liver CT images is used for segmentation, together with a liver region search algorithm, yielding a robust automatic segmentation program. Firstly, twenty sets of training images are utilized to create an atlas. Then the initial liver region is searched and located automatically. After that, threshold filtering is used to enhance the robustness of the segmentation. Finally, the atlas is non-rigidly registered to the liver in the CT images with affine and B-spline registration in succession. The registered liver contour of the atlas represents the segmentation of the target liver, from which the liver volume is then calculated. Results: The evaluation shows that the proposed method works well in terms of liver volume error, reaching a score of 77, although larger local errors appear (mainly at the tip of the liver). Conclusion: The method is feasible for segmenting clinical liver CT images.
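
    A hedged sketch of the "affine followed by B-spline" registration chain using SimpleITK; the metric, optimizer settings, mesh size and file names are generic choices for illustration and are not the parameters used in the study.

    ```python
    import SimpleITK as sitk

    atlas = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)     # hypothetical file names
    target = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

    def register(fixed, moving, initial_transform):
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                     minStep=1e-4,
                                                     numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial_transform, inPlace=False)
        return reg.Execute(fixed, moving)

    # Stage 1: affine alignment of the atlas to the patient CT.
    affine_init = sitk.CenteredTransformInitializer(target, atlas, sitk.AffineTransform(3))
    affine = register(target, atlas, affine_init)
    atlas_affine = sitk.Resample(atlas, target, affine, sitk.sitkLinear, 0.0)

    # Stage 2: B-spline refinement on top of the affine result.
    bspline_init = sitk.BSplineTransformInitializer(target, transformDomainMeshSize=[8, 8, 8])
    bspline = register(target, atlas_affine, bspline_init)
    atlas_warped = sitk.Resample(atlas_affine, target, bspline, sitk.sitkLinear, 0.0)
    # The warped atlas liver contour now stands in for the target liver segmentation.
    ```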

  2. 3D Surface-based Detection of Pleural Thickenings

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2011-01-01

    Full Text Available Pleuramesothelioma is a malignant tumor of the pleura. It evolves from pleural thickenings, which are a typical long-term effect of asbestos exposure. A diagnosis is performed by examining CT scans acquired from the patient's lung. The analysis of the image data is a very time-consuming task and is subject to strong inter- and intra-reader variances. To objectify the analysis and to speed up the diagnosis, a fully automatic system is developed. An essential step in this process is to identify potential thickenings. In this paper we describe the complete system in brief, and then take a closer look at thickening detection. A CT slice based approach is presented here. It is extended by using 3D knowledge of the lung surface, which could scarcely have been acquired visually.

  3. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    Science.gov (United States)

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin; Di Gregorio, Silvana; Malouf, Jorge; Romera, Jordi; Barquero, Luis Miguel Del Rio

    2017-01-01

    The 3D distribution of the cortical and trabecular bone mass in the proximal femur is a critical component in determining fracture resistance that is not taken into account in clinical routine Dual-energy X-ray Absorptiometry (DXA) examination. In this paper, a statistical shape and appearance model together with a 3D-2D registration approach are used to model the femoral shape and bone density distribution in 3D from an anteroposterior DXA projection. A model-based algorithm is subsequently used to segment the cortex and build a 3D map of the cortical thickness and density. Measurements characterising the geometry and density distribution were computed for various regions of interest in both cortical and trabecular compartments. Models and measurements provided by the "3D-DXA" software algorithm were evaluated using a database of 157 study subjects, by comparing 3D-DXA analyses (using DXA scanners from three manufacturers) with measurements performed by Quantitative Computed Tomography (QCT). The mean point-to-surface distance between 3D-DXA and QCT femoral shapes was 0.93 mm. The mean absolute error between cortical thickness and density estimates measured by 3D-DXA and QCT was 0.33 mm and 72 mg/cm³. Correlation coefficients (R) between the 3D-DXA and QCT measurements were 0.86, 0.93, and 0.95 for the volumetric bone mineral density at the trabecular, cortical, and integral compartments respectively, and 0.91 for the mean cortical thickness. 3D-DXA provides a detailed analysis of the proximal femur, including a separate assessment of the cortical layer and trabecular macrostructure, which could potentially improve osteoporosis management while maintaining DXA as the standard routine modality.

  4. 3D Printed Graphene Based Energy Storage Devices.

    Science.gov (United States)

    Foster, Christopher W; Down, Michael P; Zhang, Yan; Ji, Xiaobo; Rowley-Neale, Samuel J; Smith, Graham C; Kelly, Peter J; Banks, Craig E

    2017-03-03

    3D printing technology provides a unique platform for rapid prototyping of numerous applications due to its ability to produce low cost 3D printed platforms. Herein, a graphene-based polylactic acid filament (graphene/PLA) has been 3D printed to fabricate a range of 3D disc electrode (3DE) configurations using a conventional RepRap fused deposition moulding (FDM) 3D printer, which requires no further modification/ex-situ curing step. To provide proof-of-concept, these 3D printed electrode architectures are characterised both electrochemically and physicochemically and are advantageously applied as freestanding anodes within Li-ion batteries and as solid-state supercapacitors. These freestanding anodes neglect the requirement for a current collector, thus offering a simplistic and cheaper alternative to traditional Li-ion based setups. Additionally, the ability of these devices to electrochemically produce hydrogen via the hydrogen evolution reaction (HER), as an alternative to currently utilised platinum based electrodes (within electrolysers), is also investigated. The 3DE demonstrates an unexpectedly high catalytic activity towards the HER (-0.46 V vs. SCE) upon the 1000th cycle; this potential is the closest observed to the desired value of platinum at (-0.25 V vs. SCE). We subsequently suggest that 3D printing of graphene-based conductive filaments allows for the simple fabrication of energy storage devices with bespoke and conceptual designs to be realised.

  6. Estimation of regional myocardial mass at risk based on distal arterial lumen volume and length using 3D micro-CT images.

    Science.gov (United States)

    Le, Huy; Wong, Jerry T; Molloi, Sabee

    2008-09-01

    The determination of regional myocardial mass at risk distal to a coronary occlusion provides valuable prognostic information for a patient with coronary artery disease. The coronary arterial system follows a design rule which allows for the use of arterial branch length and lumen volume to estimate regional myocardial mass at risk. Image processing techniques, such as segmentation, skeletonization and arterial network tracking, are presented for extracting anatomical details of the coronary arterial system using micro-computed tomography (micro-CT). Moreover, a method of assigning tissue voxels to their corresponding arterial branches is presented to determine the dependent myocardial region. The proposed micro-CT technique was utilized to investigate the relationship between the sum of the distal coronary arterial branch lengths and volumes to the dependent regional myocardial mass using a polymer cast of a porcine heart. The correlations of the logarithm of the total distal arterial lengths (L) to the logarithm of the regional myocardial mass (M) for the left anterior descending (LAD), left circumflex (LCX) and right coronary (RCA) arteries were log(L)=0.73log(M)+0.09 (R=0.78), log(L)=0.82log(M)+0.05 (R=0.77) and log(L)=0.85log(M)+0.05 (R=0.87), respectively. The correlation of the logarithm of the total distal arterial lumen volumes (V) to the logarithm of the regional myocardial mass for the LAD, LCX and RCA were log(V)=0.93log(M)-1.65 (R=0.81), log(V)=1.02log(M)-1.79 (R=0.78) and log(V)=1.17log(M)-2.10 (R=0.82), respectively. These morphological relations did not change appreciably for diameter truncations of 600-1400 μm. The results indicate that the image processing procedures successfully extracted information from a large 3D dataset of the coronary arterial tree to provide prognostic indications in the form of arterial tree parameters and anatomical area at risk.
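
    Using the LAD relation quoted above (log(L) = 0.73 log(M) + 0.09), the regional mass implied by a measured total distal arterial length can be back-calculated as in this small sketch; the length value is made up for illustration, and the units of L and M follow whatever units were used in the original fit.

    ```python
    import math

    def mass_from_length(total_length, slope=0.73, intercept=0.09):
        """Invert log10(L) = slope*log10(M) + intercept to estimate mass M (LAD relation)."""
        return 10 ** ((math.log10(total_length) - intercept) / slope)

    measured_length = 60.0    # hypothetical summed distal branch length (units as in the fit)
    print("estimated regional mass at risk: %.1f" % mass_from_length(measured_length))
    ```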

  7. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images.

  8. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques in their effectiveness to interrogate the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m2 and consisted of 550 source- and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. At present, we analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  9. A virtually imaged defocused array (VIDA) for high-speed 3D microscopy.

    Science.gov (United States)

    Schonbrun, Ethan; Di Caprio, Giuseppe

    2016-10-01

    We report a method to capture a multifocus image stack based on recording multiple reflections generated by imaging through a custom etalon. The focus stack is collected in a single camera exposure and consequently the information needed for 3D reconstruction is recorded in the camera integration time, which is only 100 µs. We have used the VIDA microscope to temporally resolve the multi-lobed 3D morphology of neutrophil nuclei as they rotate and deform through a microfluidic constriction. In addition, we have constructed a 3D imaging flow cytometer and quantified the nuclear morphology of nearly a thousand white blood cells flowing at a velocity of 3 mm per second. The VIDA microscope is compact and simple to construct, intrinsically achromatic, and the field-of-view and stack number can be easily reconfigured without redesigning diffraction gratings and prisms.

  10. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by the confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  11. Multiframe image point matching and 3-d surface reconstruction.

    Science.gov (United States)

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.

  12. 3D reconstruction of worn parts for flexible remanufacture based on robotic arc welding

    Institute of Scientific and Technical Information of China (English)

    Yin Ziqiang; Zhang Guangjun; Gao Hongming; Wu Lin

    2010-01-01

    3D reconstruction of worn parts is the foundation for a remanufacturing system based on robotic arc welding, because it can provide 3D geometric information for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured light vision sensing is developed. The system hardware consists of a MTC368-CB CCD camera, a MLH-645 laser projector and a DH-CG300 image grabbing card. The system software is developed to control the image data capture. In order to reconstruct the 3D geometric information from the captured image, a two-step rapid calibration algorithm is proposed. The 3D reconstruction experiment shows a satisfactory result.

  13. Preliminary clinical application of contrast-enhanced MR angiography using 3D time-resolved imaging of contrast kinetics(3D-TRICKS)

    Institute of Scientific and Technical Information of China (English)

    YANG Chun-shan; LIU Shi-yuan; XIAO Xiang-sheng; FENG Yun; LI Hui-min; XIAO Shan; GONG Wan-qing

    2007-01-01

    Objective: To introduce a new and improved contrast-enhanced MR angiographic method, named 3D time-resolved imaging of contrast kinetics (3D-TRICKS). Methods: TRICKS is a high temporal resolution (2-6 s) MR angiographic technique using a short TR (4 ms) and TE (1.5 ms) with partial echo sampling, in which the central part of k-space is updated more frequently than the peripheral part. TRICKS pre-contrast mask 3D images are scanned first; then, after bolus injection of Gd-DTPA, 15-20 sequential 3D images are acquired. The reconstructed 3D images, obtained by subtracting the mask images from the contrast-enhanced 3D images, are conceptually similar to a catheter-based intra-arterial digital subtraction angiography (DSA) series. Thirty patients underwent contrast-enhanced MR angiography using 3D-TRICKS. Results: In total, 12 vertebral arteries were well displayed on TRICKS, of which 7 were normal, 1 demonstrated bilateral vertebral artery stenosis, 4 had unilateral vertebral artery stenosis and 1 was accompanied by stenosis of the ipsilateral carotid artery bifurcation. Four cases of bilateral renal arteries were normal, 1 transplanted kidney artery was normal and 1 transplanted kidney artery showed stenosis. Two cerebral artery examinations were normal, 1 showed sagittal sinus thrombosis and 1 displayed an intracranial arteriovenous malformation. Three pulmonary arteries were normal, 1 showed pulmonary artery thrombosis and 1 revealed the abnormal feeding artery and draining vein of a pulmonary sequestration. One left lower limb fibrolipoma showed a feeding artery, one case displayed stenosis of a radial-ulnar artery artificial fistula, and one revealed a left antebrachium hemangioma. Conclusion: TRICKS can clearly delineate most of the body's vascular systems and reveal most vascular abnormalities. It is convenient and has a high success rate, which makes it the first choice for displaying most vascular abnormalities.

  14. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3d Structure Lines

    Science.gov (United States)

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques is undergoing a revolution in the last few years. For such applications, a large amount of images, acquired with high-resolution industrial cameras close to the bottom surfaces with some mobile platform, are required to be stitched into a wide-view single composite image. The conventional idea of stitching a panorama with the affine model or the homographic model always suffers a series of serious problems due to poor texture and out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of bridge bottom surfaces, which are extracted from 3D camera data. First, we propose to initially align each image in geometry based on its rough position and orientation acquired with both a laser range finder (LRF) and a high-precision incremental encoder, and these images are divided into several groups with the rough position and orientation data. Secondly, the 3D structure lines of bridge bottom surfaces are extracted from the 3D cloud points acquired with 3D cameras, which impose additional strong constraints on geometrical alignment of structure lines in adjacent images to perform a position and orientation optimization in each group to increase the local consistency. Thirdly, a homographic refinement between groups is applied to increase the global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of our proposed approaches.
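
    The multi-band blending step can be illustrated with a plain two-image Laplacian-pyramid blend as below; this is a generic formulation using OpenCV pyramids, not the authors' implementation, and the aligned tiles and seam mask are placeholders.

    ```python
    import cv2
    import numpy as np

    def gaussian_pyramid(img, levels):
        pyr = [img.astype(np.float32)]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian_pyramid(img, levels):
        gp = gaussian_pyramid(img, levels)
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
        lp.append(gp[-1])
        return lp

    def multiband_blend(img_a, img_b, mask, levels=4):
        """Blend two aligned images with a soft mask, one pyramid band at a time."""
        la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
        gm = gaussian_pyramid(mask.astype(np.float32), levels)
        blended = [m[..., None] * a + (1 - m[..., None]) * b for a, b, m in zip(la, lb, gm)]
        out = blended[-1]
        for lvl in reversed(blended[:-1]):
            out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
        return np.clip(out, 0, 255).astype(np.uint8)

    # Placeholder aligned tiles and a left/right seam mask.
    a = np.full((256, 256, 3), 180, np.uint8)
    b = np.full((256, 256, 3), 90, np.uint8)
    mask = np.zeros((256, 256), np.float32)
    mask[:, :128] = 1.0
    panorama_tile = multiband_blend(a, b, mask)
    ```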

  15. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    Full Text Available The mathematical principles of tomographic imaging using detected (unscattered X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field.

  16. Predicting 3D pose in partially overlapped X-ray images of knee prostheses using model-based Roentgen stereophotogrammetric analysis (RSA).

    Science.gov (United States)

    Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her

    2014-12-01

    After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, the overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses due to the limited amount of available outlines. In the literature, quite a few studies have investigated the problems induced by the overlapped outlines, and manual adjustment is still the mainstream. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersected points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. Overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. The simulated images with five overlapping percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results are increased when the overlap percentage is beyond about 9%. In conclusion, both outline-separated and feature-recognized methods can be seamlessly integrated to automate the calculation of rough registration. This can significantly increase the clinical practicability of the model-based RSA technique.
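
    The overlap percentage defined in the abstract can be computed directly from binary masks of the prosthetic outlines, e.g. as below; the toy mask construction is illustrative, and the exact outline extraction used in the study is not reproduced.

    ```python
    import numpy as np

    def overlap_percentage(outline_a, outline_b):
        """Ratio of overlapped to non-overlapped outline pixels, expressed as a percentage."""
        overlapped = np.logical_and(outline_a, outline_b).sum()
        non_overlapped = np.logical_xor(outline_a, outline_b).sum()
        return 100.0 * overlapped / max(non_overlapped, 1)

    # Toy binary masks of the femoral and tibial components in a simulated radiograph.
    a = np.zeros((200, 200), bool); a[50:150, 40:100] = True
    b = np.zeros((200, 200), bool); b[60:160, 90:150] = True
    print("overlap percentage: %.1f%%" % overlap_percentage(a, b))
    ```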

  17. Anatomical references for tibial sagittal alignment in total knee arthroplasty: A comparison of three anatomical axes based on 3D reconstructed CT images

    Institute of Scientific and Technical Information of China (English)

    SHAO Jun-jie; Thomas Parker Vail; WANG Qiao-jie; SHEN Hao; CHEN Yun-su; WANG Qi; JIANG Yao

    2013-01-01

    Background: This study was designed to analyze three tibial axis reference lines, the anterior tibial cortex (ATC) line, the fibular line (FL), and the anatomical axis of the tibia (AAT) line, to determine which line most closely parallels the mechanical axis (MA) of the tibia in the sagittal plane. The clinical relevance of the study is that, by finding a reliable landmark on the leg, a surgeon may minimize posterior tibial slope measurement errors and thereby improve the technique for assuring proper alignment in total knee arthroplasty. Methods: The material for this study included CT scans of the tibia from 85 consecutive patients and 168 knees (78 without osteoarthritis (OA) and 90 knees with OA). Measurements of the angles between the tibial mechanical axis and each of the three reference lines in the sagittal plane were carried out using 3D imaging software. Results: The mean angles of the 168 knees were as follows: aMT (3.96±0.85)°, aMF (0.70±0.58)°, and aMA (1.40±0.66)° (aMT: the angle between the MA and the ATC; aMF: the angle between the MA and the FL; aMA: the angle between the MA and the AAT; all angles measured in the sagittal plane of the tibia), and aMF was significantly smaller than the others (P < 0.0001). The mean medial tibial slope angle vs. the MA was (9.19±3.97)°, significantly larger than the mean lateral slope angle of (6.62±4.23)° (P < 0.0001). The difference between aMF without OA and with OA was not statistically significant (P = 0.5015), and the association between aMT and aMA was strong (r = 0.82, P < 0.01). Conclusions: The FL was more closely parallel to the MA of the tibia, and showed less variation between OA and non-OA controls, than the ATC and AAT lines. Furthermore, the posterior slope of the medial plateau was greater than that of the lateral plateau. The findings of this analysis suggest that when using the anterior tibial cortex line, as is commonly done with extramedullary tibial resection guides, the tibial resection should be sloped...

  18. Pattern of cerebral hyperperfusion in Alzheimer's disease and amnestic mild cognitive impairment using voxel-based analysis of 3D arterial spin-labeling imaging: initial experience

    Directory of Open Access Journals (Sweden)

    Ding B

    2014-03-01

    Full Text Available Bei Ding,1 Hua-wei Ling,1 Yong Zhang,2 Juan Huang,1 Huan Zhang,1 Tao Wang,3 Fu Hua Yan1 1Department of Radiology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, 2Applied Science Laboratory, GE Healthcare, 3Department of Gerontology, Shanghai Mental Health Center, Shanghai, People's Republic of China. Purpose: A three-dimensional (3D) continuous pulse arterial spin labeling (ASL) technique was used to investigate cerebral blood flow (CBF) changes in patients with Alzheimer's disease (AD), amnestic mild cognitive impairment (aMCI), and age- and sex-matched healthy controls. Materials and methods: Three groups were recruited for comparison: 24 AD patients, 17 aMCI patients, and 21 age- and sex-matched control subjects. Three-dimensional ASL scans covering the entire brain were acquired with a 3.0 T magnetic resonance scanner. Spatial processing was performed with statistical parametric mapping 8. A second-level one-way analysis of variance (threshold at P<0.05) was performed on the preprocessed ASL data. The average whole-brain CBF for each subject was also included as a group-level covariate for the perfusion data, to control for individual CBF variations. Results: Significantly increased CBF was detected in the bilateral frontal lobes and right temporal subgyral regions in aMCI compared with controls. When comparing AD with aMCI, the major hyperperfusion regions were the right limbic lobe and basal ganglia regions, including the putamen, caudate, lentiform nucleus, and thalamus, and hypoperfusion was found in the left medial frontal lobe, parietal cortex, the right middle temporo-occipital lobe, and particularly, the left anterior cingulate gyrus. We also found decreased CBF in the bilateral temporo-parieto-occipital cortices and left limbic lobe in AD patients relative to the control group. aMCI subjects showed decreased blood flow in the left occipital lobe, bilateral inferior temporal cortex, and right middle temporal cortex

  19. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Directory of Open Access Journals (Sweden)

    Tsap Leonid V

    2006-01-01

    Full Text Available The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  20. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Energy Technology Data Exchange (ETDEWEB)

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
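
    A compressed sketch of the pipeline described above (histogram-based thresholding, per-slice contour extraction, then a 3D surface over the stacked slices) using scikit-image; Otsu's threshold and marching cubes stand in for the polyline-splitting, active-contour and point-cloud-meshing steps, which are not reproduced here, and the random stack is a placeholder for real tomographic slices.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import find_contours, marching_cubes

    # Placeholder stack of tomographic reconstruction slices through the nucleus.
    rng = np.random.default_rng(0)
    stack = rng.random((40, 128, 128))

    t = threshold_otsu(stack)                 # histogram-based global threshold
    binary = stack > t

    # Per-slice contours of candidate chromosome regions.
    contours_per_slice = [find_contours(slice_.astype(float), 0.5) for slice_ in binary]

    # 3D surface of the thresholded volume (vertices + triangular faces).
    verts, faces, normals, values = marching_cubes(binary.astype(np.uint8), level=0.5)
    print(len(verts), "vertices,", len(faces), "triangles")
    ```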

  1. Powder-based 3D printing for bone tissue engineering.

    Science.gov (United States)

    Brunello, G; Sivolella, S; Meneghello, R; Ferroni, L; Gardin, C; Piattelli, A; Zavan, B; Bressan, E

    2016-01-01

    Bone tissue engineered 3-D constructs customized to patient-specific needs are emerging as attractive biomimetic scaffolds to enhance bone cell and tissue growth and differentiation. The article outlines the features of the most common additive manufacturing technologies (3D printing, stereolithography, fused deposition modeling, and selective laser sintering) used to fabricate bone tissue engineering scaffolds. It concentrates, in particular, on the current state of knowledge concerning powder-based 3D printing, including a description of the properties of powders and binder solutions, the critical phases of scaffold manufacturing, and its applications in bone tissue engineering. Clinical aspects and future applications are also discussed.

  2. Color Constancy using 3D Scene Geometry Derived from a Single Image

    NARCIS (Netherlands)

    Elfiky, N.; Gevers, T.; Gijsenij, A.; Gonzàlez, J.

    2014-01-01

    The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions (e.g., gray-world and white patch assumption). In this paper, 3D geome

  3. 3D Prior Image Constrained Projection Completion for X-ray CT Metal Artifact Reduction

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Ay, Mohammad Reza; Rahmim, Arman; Zaidi, Habib

    2013-01-01

    The presence of metallic implants in the body of patients undergoing X-ray computed tomography (CT) examinations often results in severe streaking artifacts that degrade image quality. In this work, we propose a new metal artifact reduction (MAR) algorithm for 2D fan-beam and 3D cone-beam CT based on

  4. 3D nonrigid medical image registration using a new information theoretic measure

    Science.gov (United States)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which would be minimal when two deformed images are perfectly aligned using the limited memory BFGS optimization method, and thus to get the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy.

  5. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    Science.gov (United States)

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper brings out a neoteric frame of reference for visual semantic based 3D video search and retrieval applications. Newfangled 3D retrieval applications spotlight shape analysis tasks such as object matching, classification and retrieval, rather than sticking entirely to video retrieval. In thi