WorldWideScience

Sample records for 3d image based

  1. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
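
    As a rough illustration of the stereo-vision route mentioned above, the following sketch computes a disparity map from a rectified camera pair with OpenCV and converts it to depth; the focal length and baseline values are illustrative assumptions, not parameters from this work.

```python
# Sketch: passive stereo depth from two CCD camera images (no laser source).
# Assumes rectified left/right grayscale images; focal_px and baseline_m are
# illustrative values, not taken from the paper.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # BM returns fixed-point

focal_px = 700.0     # focal length in pixels (assumed)
baseline_m = 0.12    # camera baseline in metres (assumed)

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
```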

  2. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of objects of interest in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  3. Image based 3D city modeling: Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to the urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: the first approach is sketch-based modeling, the second is procedural-grammar-based modeling, the third is close-range-photogrammetry-based modeling, and the fourth is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages follow different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparison is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques, together with practical comments on what can and cannot be done with each software package. Finally, the study concludes that each package has its own advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  4. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Accurately capturing three-dimensional (3D) motion information of an object is an essential task in computer vision, and also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence, and solving for the motion parameters. Finally, experimental results are presented for acquiring the motion parameters of objects moving in a straight line with uniform velocity and with uniform acceleration, based on real binocular image sequences processed with the described method.
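
    The last step described above, resolving the motion parameters from matched 3D feature points, can be sketched with the standard SVD-based (Kabsch) rigid-motion solution; this is a generic illustration under the assumption of known correspondences, not the paper's exact formulation.

```python
# Sketch: estimate rigid motion (R, t) between two sets of matched 3D feature
# points P -> Q using the SVD (Kabsch) method. Generic illustration only.
import numpy as np

def rigid_motion(P, Q):
    """P, Q: (N, 3) arrays of corresponding 3D points; returns R (3x3), t (3,)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```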

  5. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of the expected shapes obtained from each single type of knowledge. Finally, we show the results of implementing a rough-set-based 3D image segmentation and visualization system.

  6. Optical 3D watermark based digital image watermarking for telemedicine

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The proposed algorithm applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. In this paper, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of the embedding process. From the extracted EIA, the 3D watermark can be reconstructed using the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of traditional watermarking methods that have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  7. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present an overall framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with recommendations for 3D medical image interpolation under different situations.
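
    For context, the sketch below shows the classic one-dimensional cubic convolution (Keys) kernel with its sharpness-control parameter a applied along one axis; the six specific parameterizations studied in the paper are not reproduced here, and a = -0.5 is only the common default.

```python
# Sketch: 1D cubic convolution (Keys) kernel with sharpness parameter a,
# applied along one axis (e.g., the slice axis of a 3D volume).
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys cubic convolution kernel evaluated at offsets s."""
    s = np.abs(s)
    w = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    w[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    w[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
    return w

def interp_1d(samples, x, a=-0.5):
    """Interpolate uniformly spaced `samples` at fractional position x."""
    i = int(np.floor(x))
    support = np.arange(i - 1, i + 3)                       # 4-sample support
    idx = np.clip(support, 0, len(samples) - 1)
    w = cubic_kernel(x - support, a)
    return float(np.dot(samples[idx], w))
```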

  8. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
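
    A minimal sketch of the mean-subtraction step described above, assuming the spatially low-pass subband is already available as a 3D array with the spectral axis first; the 3D wavelet decomposition itself and the subsequent sign-magnitude coding are not shown.

```python
# Sketch of mean subtraction: subtract the mean of each spatial plane of a
# spatially low-pass subband before encoding, since zero-mean planes compress
# better. `subband` is assumed to be a (bands, rows, cols) array.
import numpy as np

def subtract_plane_means(subband):
    means = subband.mean(axis=(1, 2))             # one mean per spatial plane
    zero_mean = subband - means[:, None, None]    # zero-mean data for the coder
    return zero_mean, means                       # the means must be stored too
```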

  9. Ultra-realistic 3-D imaging based on colour holography

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends highly on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  10. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by switching among intersection images containing internal structure in the proposed method. Through experiments with CT scan images, the proposed met...

  11. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  12. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by switching among intersection images containing internal structure in the proposed method. Through experiments with CT scan images, the proposed method is validated. One other applicable area of the proposed method, the design of 3D patterns of Large Scale Integrated Circuits (LSI), is also introduced. Layered LSI patterns can be displayed and switched by using the eyes only. It is confirmed that the time required for displaying a layer pattern and switching to another layer using the eyes only is much shorter than when using hands and fingers.

  13. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform a generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with a synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a collection of individual computer vision algorithms. The experimental results show a highly automated modeling process that is sufficiently robust to various imaging conditions. The whole model creation, including all optional manual corrections, takes only 2∼3 minutes.

  14. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  15. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Roč. 32, č. 6 (2008), s. 513-520. ISSN 0895-6111 R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * Gaussian mixture model * 3D image analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.192, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  16. Four-view stereoscopic imaging and display system for web-based 3D image communication

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server computer, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth and number of views. Experimental results show that the proposed system can display 4-view VGA images with full 16-bit color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can easily be controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical web-based 3D image communication applications.

  17. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    S. P. Singh; K. Jain; V. R. Mandla

    2014-01-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to the urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers used sketch-based modeling; the second method is procedural grammar based m...

  18. 3D Image Sensor based on Parallax Motion

    Barna Reskó

    2007-12-01

    For humans and visual animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using an optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.
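
    A minimal sketch of the optical-flow branch, using OpenCV's Farnebäck dense flow between two frames of a laterally moving camera; relating flow magnitude to relative depth assumes a purely translational motion, which is an illustrative simplification rather than the paper's opto-mechanical filter.

```python
# Sketch: dense optical flow between two frames of a laterally moving camera;
# under pure translation, larger flow magnitude corresponds to closer objects
# (motion parallax). Illustrative only.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Parameters: pyr_scale=0.5, levels=3, winsize=15, iterations=3,
# poly_n=5, poly_sigma=1.2, flags=0
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

magnitude = np.linalg.norm(flow, axis=2)        # pixels of parallax per frame
relative_depth = 1.0 / (magnitude + 1e-6)       # closer objects move more
```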

  19. 3D Wavelet-based Fusion Techniques for Biomedical Imaging

    Rubio Guivernau, José Luis

    2012-01-01

    Nowadays, three-dimensional image acquisition techniques are common in several areas, but their relevance in biomedical imaging is worth highlighting; in this field we find a wide range of techniques such as confocal microscopy, two-photon microscopy, light-sheet fluorescence microscopy, nuclear magnetic resonance, positron emission tomography, optical coherence tomography, 3D ultrasound, and a long list of others. A denominator...

  20. Heterodyne 3D ghost imaging

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target with a pulse time-of-flight measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Both the temporal correlation and the spatial correlation of light are utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
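
    For orientation, the sketch below shows the basic intensity-correlation step that underlies ghost-imaging reconstructions in general; the heterodyne detection and range estimation that are the contribution of this work are not modeled.

```python
# Sketch: basic ghost-imaging correlation between a sequence of illumination
# patterns and the corresponding single-pixel (bucket) signals. Transverse image
# formation only; the heterodyne range measurement of the paper is not modeled.
import numpy as np

def ghost_image(patterns, bucket):
    """patterns: (M, H, W) illumination patterns; bucket: (M,) bucket detector values."""
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    # G(x, y) = <B * I(x, y)> - <B> <I(x, y)>
    return (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```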

  1. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  2. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to the urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: the first approach is sketch-based modeling, the second is procedural-grammar-based modeling and the third is close-range-photogrammetry-based modeling. A literature study shows that, to date, no complete solution is available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach to image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three parts: the data acquisition process, 3D data processing, and the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second part, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third part, this 3D model was exported for adding to and merging with other pieces of the larger area, and scaling and alignment of the 3D model were performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created, which can be converted into a walk-through model or a movie. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries

  3. Study of bone implants based on 3D images

    Grau, S; Ayala Vallespí, M. Dolors; Tost Pardell, Daniela; Miño, N.; Muñoz, F.; González, A

    2005-01-01

    New medical input technologies together with computer graphics modelling and visualization software have opened a new track for the biomedical sciences: so-called in-silico experimentation, in which analysis and measurements are done on computer graphics models constructed on the basis of medical images, complementing the traditional in-vivo and in-vitro experimental methods. In this paper, we describe an in-silico experiment to evaluate bio-implants f...

  4. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  5. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    F. Alidoost

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects in complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique, Semi-Global Matching (SGM), is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
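
    A minimal sketch of the 2.5D Delaunay meshing step, assuming the dense point cloud from dense matching is already available as an (N, 3) array; the SGM matching and texturing stages are not shown, and the file name is a placeholder.

```python
# Sketch: 2.5D Delaunay triangulation of a dense point cloud. Triangulation is
# done on the XY (plan) coordinates and the Z values ride along, which is the
# usual 2.5D meshing approach; matching and texturing are not shown.
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("dense_cloud.xyz")      # assumed (N, 3) array of X, Y, Z
tri = Delaunay(points[:, :2])               # triangulate in the XY plane
faces = tri.simplices                       # (F, 3) vertex indices of the mesh
vertices = points                           # Z values give the 2.5D relief
```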

  6. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  7. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. This proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by LC-...

  8. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    X. W. Li

    2013-08-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. This proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA. Using the computational integral imaging reconstruction (CIIR) technique, a 3-D object is subsequently reconstructed on the output plane from the recovered 2-D EIA. Because the 2-D EIA is composed of a number of elemental images having their own perspectives of a 3-D image, even if the encrypted image is seriously harmed, the 3-D image can be successfully reconstructed with only partial data. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the experimental results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.

  9. SEGMENTATION OF UAV-BASED IMAGES INCORPORATING 3D POINT CLOUD INFORMATION

    A. Vetrivel

    2015-03-01

    Numerous applications related to urban scene analysis demand automatic recognition of buildings and their distinct sub-elements. For example, if LiDAR data is available, only 3D information could be leveraged for the segmentation. However, this poses several risks; for instance, in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image-based segmentation is performed, the geometric features (e.g., normal orientation, planarity) are not readily available. This renders the task of detecting the distinct sub-elements of a building with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used for inferring the geometric characteristics of the roofs and facades of the building. However, the image-based 3D points are noisy, error-prone and often contain gaps; hence segmentation in 3D space is not appropriate. Therefore, we propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using a region-growing approach that utilizes various radiometric and 3D point cloud features. The developed method was tested using two data sets obtained with UAV images with a ground resolution of around 1-2 cm. The developed method accurately segmented most of the building elements when compared to plane-based segmentation using the 3D point cloud alone.

  10. Matching Aerial Images to 3D Building Models Based on Context-Based Geometric Hashing

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach, offering an alternative to the labour-intensive manual registration process.

  11. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    J. Jung

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach, offering an alternative to the labour-intensive manual registration process.

  12. Midsagittal plane extraction from brain images based on 3D SIFT

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
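
    A minimal CPU sketch of the KD-tree descriptor-matching idea using SciPy, standing in for the GPU KD-tree described above; the 3D SIFT extraction and the flipped-descriptor symmetry metric are assumed to be provided elsewhere.

```python
# Sketch: nearest-neighbour matching of 3D SIFT descriptors with a KD-tree
# (CPU stand-in for the GPU KD-tree of the paper), followed by a ratio test.
# desc_a / desc_b are assumed (N, D) descriptor arrays from a 3D SIFT extractor.
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)          # two nearest neighbours each
    keep = dists[:, 0] < ratio * dists[:, 1]      # Lowe-style ratio test
    return np.flatnonzero(keep), idx[keep, 0]     # indices into desc_a, desc_b
```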

  13. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
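
    A minimal sketch of how an image is recovered from single-pixel measurements under a complete Hadamard basis; the pulsed time-of-flight depth retrieval of the actual system is not modeled, and the scene here is synthetic.

```python
# Sketch: single-pixel image recovery with a complete Hadamard pattern basis.
# Each measurement is the inner product of the scene with one Hadamard pattern;
# the image is recovered by the transpose (inverse up to scale) transform.
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # image is n x n (N = n*n patterns)
N = n * n
H = hadamard(N)                          # rows are the projected +1/-1 patterns

scene = np.random.rand(N)                # stand-in for the true scene (flattened)
measurements = H @ scene                 # one bucket value per projected pattern

recovered = (H.T @ measurements) / N     # H @ H.T = N * I, so this inverts exactly
image = recovered.reshape(n, n)
```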

  14. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  15. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    The accurate 3D documentation of architecture and heritage is becoming very common and required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experience, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  16. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasonic image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which can extract a connected skeleton to express the figure's features. Feature points of the connected skeleton are extracted automatically by repeatedly locating local curvature extreme points. Initial registration is performed according to the barycenter of the skeleton. Afterwards, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. Experimental results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural differences in shape between different parts of the organ, while simultaneously eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.
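
    A minimal sketch of the RBF-driven elastic step, assuming matched skeleton feature points are already available; SciPy's thin-plate-spline interpolator is used as a stand-in for the paper's particular radial basis function, and the file names and frame size are placeholders.

```python
# Sketch: elastic deformation field from matched skeleton feature points using a
# thin-plate-spline RBF (stand-in for the paper's radial basis function). The
# skeleton extraction and initial barycentre alignment are assumed done already.
import numpy as np
from scipy.interpolate import RBFInterpolator

src_pts = np.load("skeleton_src.npy")    # (K, 2) feature points in moving frame (assumed)
dst_pts = np.load("skeleton_dst.npy")    # (K, 2) corresponding points in fixed frame

# Fit an RBF mapping every pixel coordinate to its displaced position.
warp = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")

h, w = 256, 256                          # frame size (illustrative)
grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
mapped = warp(grid.reshape(-1, 2)).reshape(h, w, 2)   # where each pixel moves to
```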

  17. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    2001-01-01

    A mutual information based 3D non-rigid registration approach was proposed for the registration of deformable CT/MR body abdomen images. The Parzen Window Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MR abdomen images. By maximizing the MI between the CT and MR volume images, the overlap between them is maximized, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdomen CT and MRI data are used as the experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically with this approach, without any prior processing procedures such as segmentation and feature extraction, but it has the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
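
    A minimal sketch of the mutual-information objective computed from a joint intensity histogram; this is a simpler stand-in for the Parzen-window density estimation used in the paper, and the non-rigid transform optimization is not shown.

```python
# Sketch: mutual information between two overlapping image volumes from their
# joint intensity histogram (histogram-based stand-in for Parzen-window MI).
import numpy as np

def mutual_information(a, b, bins=64):
    """a, b: overlapping image arrays of identical shape (e.g. CT and MR)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of image b
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```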

  18. A web-based solution for 3D medical image visualization

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to traditional local visualization solutions, our solution doesn't require the users to install extra software or download the whole volume dataset from the PACS server. With this web-based solution, users can access the 3D medical image visualization service wherever internet access is available.

  19. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time, but caused more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, for large lesions, and for lesions near critical structures. PMID:27126243

  20. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of the 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) (95th percentile) in two datasets was 0.95 (2.2) mm. For digital phantoms, assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
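
    A minimal sketch of the motion-model-building step, PCA applied to precomputed displacement vector fields; the deformable registration that produces the DVFs and the later optimization of coefficients against 2D projections are not shown, and the file name is a placeholder.

```python
# Sketch: build a PCA motion model from displacement vector fields (DVFs), one
# per 4DCBCT phase relative to a reference phase. `dvfs` is assumed to be a
# precomputed (n_phases, n_voxels*3) array.
import numpy as np

dvfs = np.load("dvfs.npy")                       # assumed precomputed DVFs
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)

n_modes = 3                                      # keep the dominant motion modes
modes = Vt[:n_modes]                             # principal components of motion
coeffs = (dvfs - mean_dvf) @ modes.T             # per-phase PCA coefficients

# A new DVF (hence a 3D fluoroscopic image after warping the reference volume)
# is mean_dvf + c @ modes for optimized coefficients c.
```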

  1. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by a technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Given the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff Distance of 20.4064 ± 4.3855, respectively. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM. PMID:27277277

  2. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet ... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse ... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  3. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Liu Li

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
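
    A minimal CPU sketch of block-wise non-local means on a 3D ultrasound volume using scikit-image, standing in for the GPU implementation; the Gamma-distribution Bayesian weighting of the paper is not reproduced, and the file name and filter parameters are illustrative.

```python
# Sketch: non-local means denoising of a 3D ultrasound volume with scikit-image
# (CPU stand-in for the paper's GPU block-wise NLM; Gamma/Bayesian weighting
# is not reproduced).
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

volume = np.load("us_volume.npy").astype(np.float32)   # assumed (Z, Y, X) log-compressed data
sigma = float(np.mean(estimate_sigma(volume)))

denoised = denoise_nl_means(volume,
                            patch_size=5,        # block (patch) edge length
                            patch_distance=6,    # search window radius
                            h=0.8 * sigma,       # filtering strength
                            fast_mode=True)
```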

  4. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  5. 3D structural analysis of proteins using electrostatic surfaces based on image segmentation

    Vlachakis, Dimitrios; Champeris Tsaniras, Spyridon; Tsiliki, Georgia; Megalooikonomou, Vasileios; Kossida, Sophia

    2016-01-01

    Herein, we present a novel strategy to analyse and characterize proteins using protein molecular electrostatic surfaces. Our approach starts by calculating a series of distinct molecular surfaces for each protein that are subsequently flattened out, thus reducing 3D information noise. RGB images are appropriately scaled by means of standard image processing techniques whilst retaining the weight information of each protein's molecular electrostatic surface. Then homogeneous areas of the protein surface are estimated based on unsupervised clustering of the 3D images while performing similarity searches. This is a computationally fast approach, which efficiently highlights interesting structural areas among a group of proteins. Multiple protein electrostatic surfaces can be combined together and, in conjunction with their processed images, they can provide the starting material for protein structural similarity and molecular docking experiments.

  6. Superimposing of virtual graphics and real image based on 3D CAD information

    2000-01-01

    This paper proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a VE built in the computer onto a real image taken by a CCD camera, and presents computer simulation results.

  7. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.

  8. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of an adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction, and an adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM-selected facial features. Finally, geometry reconstruction is achieved by a linear weighted combination of adaptively selected samples. A radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) only a single frontal face image is needed for highly automatic face reconstruction; (2) compared with former works, our reconstruction approach provides higher accuracy; (3) constraint-based RBF texture mapping provides a natural appearance for the reconstructed face.

  9. Image-Based Airborne LiDAR Point Cloud Encoding for 3D Building Model Retrieval

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
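
    A minimal sketch of the first encoding step, generating a top-view depth image from an airborne point cloud by keeping the maximum height per grid cell; the feature histograms and retrieval matching are not shown, and the grid size is an assumption.

```python
# Sketch: top-view depth image of a building from an airborne LiDAR point cloud,
# keeping the highest point per grid cell. Later feature histograms and the
# descriptor matching used for retrieval are not shown.
import numpy as np

def topview_depth(points, cell=0.5):
    """points: (N, 3) array of x, y, z; cell: grid size in metres (assumed)."""
    xy_min = points[:, :2].min(axis=0)
    cols, rows = np.ceil((points[:, :2].max(axis=0) - xy_min) / cell).astype(int) + 1
    img = np.full((rows, cols), -np.inf)
    c, r = ((points[:, :2] - xy_min) / cell).astype(int).T
    np.maximum.at(img, (r, c), points[:, 2])      # max height per cell
    img[np.isinf(img)] = 0.0                      # empty cells get zero height
    return img
```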

  10. Model based 3D segmentation and OCT image undistortion of percutaneous implants.

    Müller, Oliver; Donner, Sabine; Klinder, Tobias; Dragon, Ralf; Bartsch, Ivonne; Witte, Frank; Krüger, Alexander; Heisterkamp, Alexander; Rosenhahn, Bodo

    2011-01-01

    Optical Coherence Tomography (OCT) is a noninvasive imaging technique which is used here for in vivo biocompatibility studies of percutaneous implants. A prerequisite for a morphometric analysis of the OCT images is the correction of optical distortions caused by the index of refraction in the tissue. We propose a fully automatic approach for 3D segmentation of percutaneous implants using Markov random fields. Refraction correction is performed by using the subcutaneous implant base as a prior for model-based estimation of the refractive index via a generalized Hough transform. Experiments show that our algorithm is competitive with manual segmentations done by experts. PMID:22003731

  11. Modifications in SIFT-based 3D reconstruction from image sequence

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved over the years and has been widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images. Hence, we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching stage, we modify the procedure for finding correct correspondences and obtain a satisfying matching result: rejecting the "questioned" points before initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and changes in the environment, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation found in some cases, that all reprojected points must be visible at all times, also does not apply in our situation. Small precision errors can accumulate into large deviations as the number of images increases. The paper contrasts the modified algorithm with the unmodified one. Moreover, we present an approach to evaluate the reconstruction by comparing the reconstructed angle and length ratio with actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value; even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own captures.
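
    The following sketch shows the kind of SIFT correspondence search with a ratio test that the record relies on; it uses OpenCV's SIFT implementation and an illustrative ratio threshold, and is not the authors' modified pipeline.

```python
# Hedged sketch of SIFT matching with Lowe's ratio test
# (cv2.SIFT_create requires OpenCV >= 4.4 or opencv-contrib-python).
import cv2

def match_sift(img1_gray, img2_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:      # discard ambiguous ("questioned") matches
            good.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return good                                   # list of matched point pairs
```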

  12. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for the head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on the features of the similarity surface near the point-of-solution. The training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimizing attempts that differed only by the initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error threshold to determine whether that solution was successful or not. The labeled data were then used to train the RQE in a supervised fashion. The performance of the RQE was evaluated using the test data set, which consisted of registration results that were not used in training. Results: The RQE constructed using mutual information performed very well when tested on the test data sets, yielding sensitivity, specificity, positive predictive value, and negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in the overall processing time. Conclusions: The authors' patient study has demonstrated very good performance of the proposed RQE when used with mutual information to identify unsuccessful 3D/3D registrations for daily patient setup. The classifier had
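
    A minimal sketch of such a feed-forward quality classifier is given below using scikit-learn; the feature vectors, hidden-layer size and labels are placeholders, not the settings reported in the record.

```python
# Illustrative two-layer feed-forward RQE: features describe the similarity
# surface near the solution, labels mark successful (1) vs unsuccessful (0).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

def train_rqe(features, labels, hidden_units=10):
    clf = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=2000, random_state=0)
    clf.fit(features, labels)
    return clf

def evaluate_rqe(clf, features, labels):
    tn, fp, fn, tp = confusion_matrix(labels, clf.predict(features)).ravel()
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}
```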

  13. FUSION OF AIRBORNE AND TERRESTRIAL IMAGE-BASED 3D MODELLING FOR ROAD INFRASTRUCTURE MANAGEMENT – VISION AND FIRST EXPERIMENTS

    S. Nebiker; S. Cavegn; Eugster, H.; Laemmer, K.; J. Markram; Wagner, R.

    2012-01-01

    In this paper we present the vision and proof of concept of a seamless image-based 3d modelling approach fusing airborne and mobile terrestrial imagery. The proposed fusion relies on dense stereo matching for extracting 3d point clouds which – in combination with the original airborne and terrestrial stereo imagery – create a rich 3d geoinformation and 3d measuring space. For the seamless exploitation of this space we propose using a new virtual globe technology integrating the ai...

  14. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
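
    The subject-specific motion model described above can be illustrated with a short PCA sketch: the respiratory-correlated deformation vector fields are reduced to a few components, and the component weights are then fitted to the motion seen in a fast 2D slice. The array shapes, index handling and least-squares fit below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative PCA motion model: fit PCA weights from a 2D slice and
# reconstruct a full 3D deformation vector field (DVF) estimate.
import numpy as np
from sklearn.decomposition import PCA

def build_motion_model(dvfs, n_components=2):
    """dvfs: (n_respiratory_phases, n_voxels * 3) flattened 4D-MRI DVFs."""
    pca = PCA(n_components=n_components)
    pca.fit(dvfs)
    return pca

def estimate_3d_dvf(pca, slice_motion, slice_indices):
    """slice_motion: flattened in-plane motion from one fast 2D cine image;
    slice_indices: positions of those entries within the full flattened DVF."""
    A = pca.components_[:, slice_indices].T          # model restricted to the slice
    b = slice_motion - pca.mean_[slice_indices]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)        # least-squares PCA weights
    return pca.mean_ + w @ pca.components_           # full-FOV 3D DVF estimate
```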

  15. 3D Chaotic Functions for Image Encryption

    Pawan N. Khade

    2012-05-01

    This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold cat maps for color image encryption. Here the 2D Arnold cat map is used for image pixel scrambling and the 3D Arnold cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security by applying shuffling and substitution to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
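
    Two of the building blocks mentioned above can be sketched in a few lines: a 2D Arnold cat-map permutation of pixel positions and a logistic-map keystream for substitution. The map parameters, iteration count and XOR substitution are toy choices for illustration and do not reproduce the paper's key schedule.

```python
# Toy sketch: Arnold cat-map pixel scrambling plus a logistic-map keystream.
import numpy as np

def arnold_scramble(img, iterations=5):
    """img: square (N, N) or (N, N, 3) uint8 array; returns a permuted copy."""
    n = img.shape[0]
    rows, cols = np.indices((n, n))
    out = img.copy()
    for _ in range(iterations):
        out = out[(rows + cols) % n, (rows + 2 * cols) % n]   # unimodular map = bijection
    return out

def logistic_keystream(length, x0=0.3141, r=3.99):
    x, key = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        key[i] = int(x * 255) & 0xFF
    return key

# Example substitution step: XOR the scrambled pixels with the keystream.
# cipher = arnold_scramble(img) ^ logistic_keystream(img.size).reshape(img.shape)
```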

  16. Quantitative analysis of the central-chest lymph nodes based on 3D MDCT image data

    Lu, Kongkuo; Bascom, Rebecca; Mahraj, Rickhesvar P. M.; Higgins, William E.

    2009-02-01

    Lung cancer is the leading cause of cancer death in the United States. In lung-cancer staging, central-chest lymph nodes and associated nodal stations, as observed in three-dimensional (3D) multidetector CT (MDCT) scans, play a vital role. However, little work has been done in relation to lymph nodes, based on MDCT data, due to the complicated phenomena that give rise to them. Using our custom computer-based system for 3D MDCT-based pulmonary lymph-node analysis, we conduct a detailed study of lymph nodes as depicted in 3D MDCT scans. In this work, the Mountain lymph-node stations are automatically defined by the system. These defined stations, in conjunction with our system's image processing and visualization tools, facilitate lymph-node detection, classification, and segmentation. An expert pulmonologist, chest radiologist, and trained technician verified the accuracy of the automatically defined stations and indicated observable lymph nodes. Next, using semi-automatic tools in our system, we defined all indicated nodes. Finally, we performed a global quantitative analysis of the characteristics of the observed nodes and stations. This study drew upon a database of 32 human MDCT chest scans. 320 Mountain-based stations (10 per scan) and 852 pulmonary lymph nodes were defined overall from this database. Based on the numerical results, over 90% of the automatically defined stations were deemed accurate. This paper also presents a detailed summary of central-chest lymph-node characteristics for the first time.

  17. Image-based Virtual Exhibit and Its Extension to 3D

    Ming-Min Zhang; Zhi-Geng Pan; Li-Feng Ren; Peng Wang

    2007-01-01

    In this paper we introduce an image-based virtual exhibition system, especially for clothing products. It provides a powerful material substitution function, which is very useful for clothing customization. A novel color substitution algorithm and two texture morphing methods are designed to ensure realistic substitution results. To extend the system to 3D, we need to reconstruct models from photos. Thus we present an improved method for modeling the human body: it deforms a generic model with shape details extracted from pictures to generate a new model. Our method begins with model image generation, followed by silhouette extraction and segmentation. Then it builds a mapping between pixels inside every pair of silhouette segments in the model image and in the picture. Our mapping algorithm is based on a slice space representation that conforms to the natural features of the human body.

  18. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    HU Hai-bo

    2001-01-01


  19. Combination of intensity-based image registration with 3D simulation in radiation therapy

    Li, Pan; Malsch, Urban; Bendl, Rolf

    2008-09-01

    Modern techniques of radiotherapy like intensity modulated radiation therapy (IMRT) make it possible to deliver high dose to tumors of different irregular shapes while at the same time sparing surrounding healthy tissue. However, internal tumor motion makes precise calculation of the delivered dose distribution challenging, which makes analysis of tumor motion necessary. One way to describe target motion is image registration. Many registration methods have been developed previously; however, most of them belong either to geometric approaches or to intensity approaches. Methods which take account of anatomical information as well as the results of intensity matching can greatly improve the results of image registration. Based on this idea, a combined method of image registration followed by 3D modeling and simulation was introduced in this project. Experiments were carried out on 4DCT lung datasets of five patients. In the 3D simulation, models obtained from images at end-exhalation were deformed to the state of end-inhalation. Diaphragm motions were around -25 mm in the cranial-caudal (CC) direction. To verify the quality of our new method, displacements of landmarks were calculated and compared with measurements in the CT images. Improved accuracy after simulation has been shown compared to the results obtained by intensity-based image registration alone; the average improvement was 0.97 mm. The average Euclidean error of the combined method was around 3.77 mm. Unrealistic motions such as curl-shaped deformations in the results of image registration were corrected. The combined method required less than 30 min. Our method provides information about the deformation of the target volume, which we need for dose optimization and target definition in our planning system.

  20. 3D nanostructure reconstruction based on the SEM imaging principle, and applications

    This paper addresses a novel 3D reconstruction method for nanostructures based on the scanning electron microscopy (SEM) imaging principle. In this method, the shape from shading (SFS) technique is employed, to analyze the gray-scale information of a single top-view SEM image which contains all the visible surface information, and finally to reconstruct the 3D surface morphology. It offers not only unobstructed observation from various angles but also the exact physical dimensions of nanostructures. A convenient and commercially available tool (NanoViewer) is developed based on this method for nanostructure analysis and characterization of properties. The reconstruction result coincides well with the SEM nanostructure image and is verified in different ways. With the extracted structure information, subsequent research of the nanostructure can be carried out, such as roughness analysis, optimizing properties by structure improvement and performance simulation with a reconstruction model. Efficient, practical and non-destructive, the method will become a powerful tool for nanostructure surface observation and characterization. (paper)

  1. 3D Reconstruction of NMR Images

    Peter Izak; Milan Smetana; Libor Hargas; Miroslav Hrianka; Pavol Spanik

    2007-01-01

    This paper presents an experiment on the 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant module, which is part of LabVIEW, was chosen.
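
    An equivalent open-source illustration of the marching-cubes step is shown below using scikit-image (the record itself uses LabVIEW's Vision Assistant); the iso-level heuristic is an assumption for the sketch.

```python
# Minimal marching-cubes surface extraction from a stack of NMR slices.
import numpy as np
from skimage import measure

def reconstruct_surface(volume, level=None):
    """volume: 3D array of stacked slices; returns mesh vertices and triangle faces."""
    if level is None:
        level = 0.5 * (float(volume.min()) + float(volume.max()))   # crude iso-level guess
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces
```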

  2. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  3. Web-based interactive 2D/3D medical image processing and visualization software.

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  4. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao; Jing-jing Hu; Peng Zhang

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed...

  5. 3D Imager and Method for 3D imaging

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the re

  6. Improving low-dose cardiac CT images using 3D sparse representation based processing

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery diseases due to its continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high-pitch scans using dual source CT scanners and step-and-shoot scanning modes for both single source and dual source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images, thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance of a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.
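
    A simplified sketch of sparse-representation denoising is given below; for brevity it learns a dictionary on 2D slice patches with scikit-learn, whereas the record uses 3D spatiotemporal patches, and the patch size, atom count and penalty are illustrative.

```python
# Patch-based dictionary-learning denoising sketch (2D slices for brevity).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise_slice(noisy, patch_size=(8, 8), n_atoms=64):
    # learn a dictionary from a random subset of patches
    train = extract_patches_2d(noisy, patch_size, max_patches=5000, random_state=0)
    X = train.reshape(train.shape[0], -1)
    X = X - X.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    dico.fit(X)
    # sparse-code every patch and rebuild the slice from the coded patches
    patches = extract_patches_2d(noisy, patch_size)
    Y = patches.reshape(patches.shape[0], -1)
    means = Y.mean(axis=1, keepdims=True)
    codes = dico.transform(Y - means)
    recon = (codes @ dico.components_ + means).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)
```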

  7. Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes

    Kohei Arai

    2016-10-01

    A method for 3D image representation that reduces the number of frames based on characteristics of human eyes is proposed, together with representation of 3D depth by changing pixel transparency. Through experiments, it is found that the proposed method allows the number of frames to be reduced to one sixth. It can also represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.

  8. 3D ultrafast ultrasound imaging in vivo

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  9. Micro-lens array based 3-D color image encryption using the combination of gravity model and Arnold transform

    You, Suping; Lu, Yucheng; Zhang, Wei; Yang, Bo; Peng, Runling; Zhuang, Songlin

    2015-11-01

    This paper proposes a 3-D image encryption scheme based on a micro-lens array. The 3-D image can be reconstructed by applying a digital refocusing algorithm to the picked-up light field. To improve the security of the cryptosystem, the Arnold transform and a gravity-model-based image encryption method are employed. Experimental results demonstrate the high security, in terms of key space, of the proposed encryption scheme. The results also indicate that the use of light field imaging significantly strengthens the robustness of the cipher image against some conventional image processing attacks.

  10. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach to three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part of the SVD are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are done: one is the Mean and Maximum curvature pair and the other is the Gaussian and Mean curvature pair. These are used to compare the results for a better recognition rate. The automated 3D face recognition system is evaluated in different settings, such as frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and faces registered from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varied 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
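
    The curvature-map and SVD feature steps described above can be sketched directly from a range image with finite differences; the formulas below are the standard Gaussian and mean curvature expressions for a depth map z(x, y), and the truncation length is an illustrative choice.

```python
# Sketch: Gaussian/mean curvature maps from a range image and SVD-based features.
import numpy as np

def curvature_maps(z):
    """z: 2D range image (depth per pixel); returns Gaussian (K) and mean (H) maps."""
    zy, zx = np.gradient(z)            # first derivatives along rows and columns
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return K, H

def svd_feature(curvature_map, k=20):
    """Ranked non-negative singular values of a curvature map as a feature vector."""
    s = np.linalg.svd(curvature_map, compute_uv=False)   # returned in descending order
    return s[:k]
```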

  11. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. (paper)

  12. Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences

    Robert Niese

    2010-10-01

    In modern human computer interaction systems, emotion recognition from video is becoming an imperative feature. In this work we propose a new method for automatic recognition of facial expressions related to categories of basic emotions from image data. Our method incorporates a series of image processing, low level 3D computer vision and pattern recognition techniques. For image feature extraction, color and gradient information is used. Further, in terms of 3D processing, camera models are applied along with an initial registration step, in which person specific face models are automatically built from stereo. Based on these face models, geometric feature measures are computed and normalized using photogrammetric techniques. For recognition this normalization leads to minimal mixing between different emotion classes, which are determined with an artificial neural network classifier. Our framework achieves robust and superior classification results, also across a variety of head poses with resulting perspective foreshortening and changing face size. Results are presented for domestic and publicly available databases.

  13. EDGE BASED 3D INDOOR CORRIDOR MODELING USING A SINGLE IMAGE

    A. Baligh Jahromi

    2015-08-01

    Reconstruction of the spatial layout of indoor scenes from a single image is an inherently ambiguous problem. However, indoor scenes are usually composed of orthogonal planes. The regularity of the planar configuration (scene layout) is often recognizable, which provides valuable information for understanding indoor scenes. Most current methods define the scene layout as a single cubic primitive. This domain-specific knowledge is often not valid in many indoor environments where multiple corridors are linked to each other. In this paper, we aim to address this problem by hypothesizing and verifying multiple cubic primitives representing the indoor scene layout. The method uses middle-level perceptual organization and relies on finding the ground-wall and ceiling-wall boundaries using detected line segments and the orthogonal vanishing points. A comprehensive interpretation of these edge relations is often hindered by shadows and occlusions. To handle this problem, the proposed method introduces virtual rays which aid in the creation of a physically valid cubic structure by using the orthogonal vanishing points. Straight line segments are extracted from the single image and the orthogonal vanishing points are estimated using the RANSAC approach. Many scene layout hypotheses are created by intersecting random line segments and the virtual rays of the vanishing points. The created hypotheses are evaluated by a geometric-reasoning-based objective function to find the hypothesis that best fits the image. The best model hypothesis, the one with the highest score, is then converted to a 3D model. The proposed method is fully automatic and no human intervention is necessary to obtain an approximate 3D reconstruction.

  14. FEA Based on 3D Micro-CT Images of Mesoporous Engineered Hydrogels

    L. Siad

    2015-12-01

    The objective of this computational study was to propose a rapid procedure for estimating the elastic moduli of the solid phases of porous natural-polymeric biomaterials used for bone tissue engineering. This procedure was based on comparing experimental results with the finite element (FE) responses of parallelepiped representative volume elements (REVs) of the material at hand. To address this issue, a series of quasi-static unconfined compression tests was designed and performed on three prepared cylindrical biopolymer samples. Subsequently, a computed tomography scan was performed on the fabricated specimens and two 3D images were reconstructed. Various parallelepiped REVs of different sizes, located at distinct places within both constructs, were isolated and then analyzed under unconfined compressive loads using FE modelling. In this preliminary study, for the sake of simplicity, the dried biopolymer solid is assumed to be linear elastic.

  15. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Gangyi Jiang

    2014-04-01

    Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study; it defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by multiplicative fitting of the LR-IQA and DP-IQA metrics based on weighting. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score): the correlation coefficient and monotonicity are more than 0.92 under five types of distortion, namely Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  16. Automated 3D-Objectdocumentation on the Base of an Image Set

    Sebastian Vetter

    2011-12-01

    Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers a high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points in the image sets are used to associate pairs of images to stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as defining the relation of neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo model can be computed automatically. With the help of 3D reference points or distances at the object, or a defined distance of the camera base, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, a high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The

  17. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  18. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  19. Highly-Automatic MI Based Multiple 2D/3D Image Registration Using Self-initialized Geodesic Feature Correspondences

    Zheng, Hongwei; Cleju, Ioan; Saupe, Dietmar

    2010-01-01

    Intensity based registration methods, such as mutual information (MI), do not commonly consider spatial geometric information, and the initial correspondences are uncertain. In this paper, we present a novel approach for achieving highly-automatic 2D/3D image registration, integrating the advantages of both entropy-based MI and spatial geometric feature correspondence methods. Inspired by scale space theory, we project the surfaces of a 3D model to 2D normal image spaces provided tha...

  20. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    This paper presents an experiment on the 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant module, which is part of LabVIEW, was chosen.

  1. MRI Sequence Images Compression Method Based on Improved 3D SPIHT

    蒋行国; 李丹; 陈真诚

    2013-01-01

    Objective: To propose an effective MRI sequence image compression method for solving the storage and transmission problem of large amounts of MRI sequence images. Methods: Aimed at reducing the computational complexity of the 3D Set Partitioning in Hierarchical Trees (SPIHT) algorithm and the deficiency that D-type and L-type table entries are judged repeatedly, an improved 3D SPIHT method was presented, and two groups of MRI sequence images with different numbers of slices and slice thicknesses were taken as examples. At the same time, according to the correlation characteristics of MRI sequence images, a method in which images are divided into groups and then coded/decoded was put forward. Combined with the 3D wavelet transform and the improved 3D SPIHT method, MRI sequence image compression was achieved. Results: Compared with the 2D SPIHT and 3D SPIHT methods, grouping combined with the improved 3D SPIHT method obtained better reconstructed images, and the Peak Signal-to-Noise Ratio (PSNR) was improved by about 1-8 dB. Conclusion: At the same bit rate, the PSNR and recovered image quality are improved by the grouping combined with the improved 3D SPIHT method, and the storage and transmission problem of large amounts of MRI sequence images can be better solved.
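
    A minimal sketch of the 3D wavelet stage applied to one group of slices is given below with PyWavelets; a simple coefficient threshold stands in for the improved 3D SPIHT bit-plane coder, and the wavelet, level and retention fraction are illustrative.

```python
# Illustrative 3D wavelet transform of a group of MRI slices with hard thresholding
# as a stand-in for the SPIHT coder.
import numpy as np
import pywt

def compress_group(volume, wavelet='bior4.4', level=2, keep=0.05):
    """volume: (n_slices, H, W) group of MRI images; keep: fraction of coefficients kept."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    arr, coeff_slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0                      # discard small coefficients
    rec = pywt.waverecn(
        pywt.array_to_coeffs(arr, coeff_slices, output_format='wavedecn'), wavelet)
    return rec[tuple(slice(s) for s in volume.shape)]    # crop possible wavelet padding
```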

  2. REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS

    Z. Lari

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface construction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  3. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface construction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.
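
    The dense-matching step mentioned above can be illustrated with OpenCV's semi-global block matcher; the parameter values below are illustrative defaults, not those tuned in the record.

```python
# Hedged sketch of semi-global matching for a rectified stereo pair.
import cv2
import numpy as np

def dense_disparity(left_gray, right_gray, max_disp=128, block=5):
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=max_disp, blockSize=block,
        P1=8 * block * block, P2=32 * block * block,
        uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return disp   # convert to a 3D point cloud with the calibrated camera geometry
```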

  4. Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system

    Yi, Steven; Yang, Arthur; Yin, Gongjie; Wen, James

    2011-03-01

    In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system is designed to perform accurate 3D measurement and modeling of a wound and track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practices, physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal practice since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) non-contact measurement; (b) fast and easy to use; (c) up to 50 micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) a handheld device; and (f) reasonable cost (< $1,000).

  5. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automati...

  6. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for a stereoscopic image quality assessment (IQA) aim to estimate the qualities of 3D images in a manner that agrees with human judgments. The modern stereoscopic IQA algorithms often apply 2D IQA algorithms on stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. The 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms. PMID:26186775

  7. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed in this paper. This method divides both projection data and reconstructed image volume into subsets according to geometric symmetries in the circular cone-beam projection layout, and a fast reconstruction for large data volumes can be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulation data and real micro-CT data. Our results indicate that the GPU implementation can maintain original precision and speed up the reconstruction process by 110–120 times for circular cone-beam scans, as compared to a traditional CPU implementation.

  8. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine.

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world, with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were scanned using a 64-channel multidetector. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group) and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire which was administered after using the simulator. The group that was trained using the AR simulator was more proficient at IV injection technique in real dogs than the control group (P ≤ 0.01). The students agreed that they learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education. PMID:23103217

  9. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    …been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in clinical use as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using… and removes the need to integrate custom-made electronics into the probe. A downside of row-column addressing of 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays… was investigated. An analysis of how the ghost echoes can be attenuated was presented. Attenuating the ghost echoes was shown to be achievable by minimizing the first derivative of the apodization function. In the literature, a circularly symmetric apodization function was proposed. A new apodization layout…

  10. Computer-aided interactive surgical simulation for craniofacial anomalies based on 3-D surface reconstruction CT images

    We developed a computer-aided interactive surgical simulation system for craniofacial anomalies based on three-dimensional (3-D) surface reconstruction CT imaging. This system has four functions: 1) 3-D surface reconstruction display with an accelerated projection method; 2) Surgical simulation to cut, move, rotate, and reverse bone-blocks over the reference 3-D image on the CRT screen; 3) 3-D display of the simulated image in arbitrary views; and 4) Prediction of postoperative skin surface features displayed as 3-D images in arbitrary views. Retrospective surgical simulation has been performed on three patients who underwent the fronto-orbital advancement procedures for brachycephaly and two who underwent the reconstructive procedure for scaphocephaly. The predicted configurations of the cranium and skin surface were well simulated when compared to the postoperative images in 3-D arbitrary views. In practical use, this software might be used for an on-line system connected to a large scale general-purpose computer. (author)

  11. 3D medical image segmentation based on a continuous modelling of the volume

    Several medical imaging techniques, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), provide 3D information about the human body by means of a stack of parallel cross-sectional images. But a more sophisticated edge detection step has to be performed when the object under study is not well defined by its characteristic density or when an analytical knowledge of the surface of the object is useful for later processing. A new method for medical image segmentation has been developed: it uses the stability and differentiability properties of a continuous modelling of the 3D data. The idea is to build a system of Ordinary Differential Equations whose stable manifold is the surface of the object we are looking for. This technique has been applied to classical edge detection operators: threshold following, Laplacian, gradient maximum in its direction. It can be used in 2D as well as in 3D and has been extended to seek particular points of the surface, such as local extrema. The major advantages of this method are as follows: the segmentation and boundary following steps are performed simultaneously, an analytical representation of the surface is obtained straightforwardly, and complex objects in which branching problems may occur can be described automatically. Simulations on noisy synthetic images allowed a quantification step to test the sensitivity of the method to noise with respect to each operator, and to study the influence of all the parameters. Last, this method has been applied to numerous real clinical exams: skull or femur images provided by CT, MR images of a cerebral tumor and of the ventricular system. These results show the reliability and the efficiency of this new method of segmentation.
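
    As an assumption-laden toy illustration (not the authors' formulation), the "stable manifold" idea can be shown in 2D: sample points integrated under dx/dt = -(I(x) - T) ∇I(x) are attracted onto the iso-surface I = T, which mirrors the threshold-following operator mentioned above.

```python
"""Toy 2D illustration of driving points onto an iso-surface I(x) = T with an
ODE whose stable set is that surface.  Explicit Euler integration of
dx/dt = -(I(x) - T) * grad I(x) on a synthetic radial-ramp image."""
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]
img = np.hypot(x - 64, y - 64).astype(float)      # synthetic image: radial ramp
T = 30.0                                          # target iso-level (circle, r = 30)
gy, gx = np.gradient(img)                         # image gradient (row, column)

pts = np.random.uniform(10, 118, size=(200, 2))   # random seed points (x, y)
dt = 0.05
for _ in range(500):
    ix = np.clip(pts[:, 0].round().astype(int), 0, n - 1)
    iy = np.clip(pts[:, 1].round().astype(int), 0, n - 1)
    err = img[iy, ix] - T                         # signed distance to the iso-level
    pts[:, 0] -= dt * err * gx[iy, ix]
    pts[:, 1] -= dt * err * gy[iy, ix]

r = np.hypot(pts[:, 0] - 64, pts[:, 1] - 64)
print("mean |r - 30| after convergence:", np.abs(r - T).mean())
```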

  12. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination, and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which immediately solves the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, plus 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information of point-spread-function at deep tissue, and provides much better resolution enhancement plus greatly improved noise-immune response. This method is ready to be amalgamated with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077
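
    The frequency-selective amplification described above can be illustrated, purely as a hedged stand-in for the published 3D-vSPOM algorithm, by boosting the spatial-frequency band passed by an assumed Gaussian in-focus PSF; the virtual modulation and 3D nonlinear value selection steps are not reproduced here.

```python
"""Conceptual stand-in (not the published algorithm): amplify the
spatial-frequency band associated with an assumed un-scattered, in-focus
Gaussian PSF and transform back."""
import numpy as np

def amplify_psf_band(image, psf_sigma_px=2.0, gain=3.0):
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # OTF of an assumed Gaussian in-focus PSF (sigma in pixels).
    otf = np.exp(-2 * (np.pi * psf_sigma_px) ** 2 * (fx ** 2 + fy ** 2))
    weight = 1.0 + (gain - 1.0) * otf          # amplify where the PSF passes energy
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weight))

img = np.random.rand(128, 128)
print(amplify_psf_band(img).shape)
```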

  13. A joint multi-view plus depth image coding scheme based on 3D-warping

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone;

    2011-01-01

    scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding...

  14. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is the consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate some relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distribution as well as local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.
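
    One of the structural parameters mentioned above, porosity, can be measured directly on a binarized tomographic volume. The snippet below is a minimal example on a synthetic stand-in volume, not the authors' image-analysis pipeline.

```python
"""Minimal example: porosity measured on a binarized 3D tomographic image
(fiber voxels = 1, void voxels = 0)."""
import numpy as np

def porosity(binary_volume):
    # Fraction of void voxels in the segmented volume.
    return 1.0 - binary_volume.mean()

# Synthetic stand-in for a segmented micro-tomography volume (~12% fibers).
vol = (np.random.rand(64, 64, 64) < 0.12).astype(np.uint8)
print(f"porosity = {porosity(vol):.3f}")
```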

  15. MMW and THz images denoising based on adaptive CBM3D

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation have received a lot of interest due to advances in emission and detection technologies, which have allowed the wide application of millimeter wave and terahertz imaging technology. This paper focuses on the problem that such images suffer from stripe noise, block artifacts and other interfering information. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is then used to denoise. Experimental results demonstrate that the method improves the visual effect and removes interference at the same time, making image analysis and target detection easier.

  16. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Highlights: ► We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classified multivariate images. ► The algorithm takes into account the coordinate system of the image data to improve the computational performance. ► The algorithm solved the instability problem in boundary detection of the original DBSCAN. ► The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational issue of the distance table and the non-stability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm was made applicable to large 3D image datasets (often with millions of pixels) by using the coordinate system of the image data. Secondly, the revised algorithm solved the non-stability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can be ordinary 3D images or, in general, it can also be a classification result of another type of image data, e.g. a multivariate image.
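
    The key idea of exploiting the image coordinate system can be sketched as follows (a simplified stand-in, not the authors' full revised algorithm): on a regular voxel grid, epsilon-neighbourhoods can be enumerated by index offsets, here via a convolution, instead of building a pairwise distance table.

```python
"""Sketch: grid-based neighbour counting replaces the DBSCAN distance table,
which is what makes density-based clustering tractable for multi-million-voxel
3D images.  Simplified core/border handling; not the published algorithm."""
import numpy as np
from scipy.ndimage import convolve, label

def grid_dbscan_cores(mask, eps=1, min_pts=10):
    """mask: boolean 3D array of foreground voxels to be clustered."""
    size = 2 * eps + 1
    kernel = np.ones((size, size, size))
    # Number of foreground voxels inside each voxel's (2*eps+1)^3 neighbourhood.
    counts = convolve(mask.astype(np.int32), kernel, mode="constant")
    core = mask & (counts >= min_pts)
    # Connected core voxels form the clusters.
    labels, n_clusters = label(core)
    return labels, n_clusters

mask = np.random.rand(40, 40, 40) > 0.7
labels, n = grid_dbscan_cores(mask)
print("clusters found:", n)
```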

  17. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Tran, Thanh N., E-mail: thanh.tran@merck.com [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Nguyen, Thanh T.; Willemsz, Tofan A. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Pharmaceutical Sciences and Clinical Supplies, Merck MSD, PO Box 20, 5340 BH Oss (Netherlands); Kessel, Gijs van [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Frijlink, Henderik W. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Voort Maarschalk, Kees van der [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Competence Center Process Technology, Purac Biochem, Gorinchem (Netherlands)

    2012-05-06

    Highlights: ► We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classified multivariate images. ► The algorithm takes into account the coordinate system of the image data to improve the computational performance. ► The algorithm solved the instability problem in boundary detection of the original DBSCAN. ► The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational issue of the distance table and the non-stability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm was made applicable to large 3D image datasets (often with millions of pixels) by using the coordinate system of the image data. Secondly, the revised algorithm solved the non-stability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can be ordinary 3D images or, in general, it can also be a classification result of another type of image data, e.g. a multivariate image.

  18. Crowdsourcing Based 3d Modeling

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used a laser scanner as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived by applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  19. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate th...

  20. Study of CT-based positron range correction in high resolution 3D PET imaging

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in the forward and backward projection. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.

  1. Study of CT-based positron range correction in high resolution 3D PET imaging

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in the forward and backward projection. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
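
    A hedged sketch of the material-dependent blurring step is shown below. The Gaussian kernels and the two-class (soft tissue / lung) segmentation are illustrative assumptions, not the PeneloPET-derived range profiles used in the paper.

```python
"""Sketch: material-dependent positron-range blurring applied to the activity
image before forward projection, using a CT-derived material map."""
import numpy as np
from scipy.ndimage import gaussian_filter

def range_blur(activity, material_map, sigma_mm={0: 0.5, 1: 2.5}, voxel_mm=1.0):
    """material_map: integer image from a CT segmentation (0 = tissue, 1 = lung)."""
    blurred = np.zeros_like(activity)
    for mat, sigma in sigma_mm.items():
        # Blur the whole image with this material's kernel, then keep it
        # only where that material is present (a common approximation).
        blurred_mat = gaussian_filter(activity, sigma / voxel_mm)
        blurred[material_map == mat] = blurred_mat[material_map == mat]
    return blurred

act = np.zeros((64, 64, 64)); act[32, 32, 32] = 1.0          # point source
mats = np.zeros_like(act, dtype=int); mats[:, :, 32:] = 1    # half the FOV is "lung"
print(range_blur(act, mats).max())
```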

  2. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    Objective: To investigate a 2-dimensional (2D) registration approach based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cube-oriented “9-point and 3-plane” scheme for co-registration was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (using the radiotracer 18F-FDG, etc.), and through 3D reconstruction and virtual dissection, internal anatomical feature points were sorted and combined with preselected external feature points for the matching process. By following the procedure of feature extraction and image mapping, “picking points to form planes” and “picking planes for segmentation” were executed. Eventually, image fusion was implemented on the real-time workstation Mimics, based on the auto-fusion techniques known as “information exchange” and “signal overlay”. Result: The 2D and 3D images fused across modalities of [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data of patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, with no statistically significant difference among them. Conclusion: Given that no fully integrated triple-modality [PET + CT + MR] hybrid detector is currently available internationally, this sort of multimodality fusion is doubtlessly an essential complement to existing single-modality imaging.
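
    Turning matched feature points from two modalities into a rigid transform can be done with the standard Kabsch / least-squares SVD solution; the sketch below uses nine synthetic landmarks and is one common approach, not necessarily the authors' exact procedure.

```python
"""Rigid (rotation + translation) alignment from corresponding landmarks via SVD."""
import numpy as np

def rigid_from_landmarks(src, dst):
    """src, dst: (N, 3) arrays of corresponding landmark coordinates."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Nine synthetic landmarks, rotated and translated.
rng = np.random.default_rng(0)
src = rng.random((9, 3)) * 100
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([5.0, -3.0, 12.0])
R, t = rigid_from_landmarks(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```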

  3. Image encryption schemes for JPEG and GIF formats based on 3D baker with compound chaotic sequence generator

    Ji, Shiyu; Tong, Xiaojun; Zhang, Miao

    2012-01-01

    This paper proposes several methods to transplant the compound chaotic image encryption scheme with permutation based on the 3D baker map into image formats such as the Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). The new method averts the lossy Discrete Cosine Transform and quantization and can encrypt and decrypt JPEG images losslessly. Our proposed method for GIF keeps the property of animation successfully. The security test results indicate the proposed methods have high s...
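
    Only the lossless "permute, then invert" structure of such schemes is illustrated below; a keyed pseudorandom shuffle stands in for the 3D baker map and compound chaotic sequence generator, and the format-aware handling of JPEG/GIF streams in the paper is not reproduced.

```python
"""Minimal illustration of lossless permutation-based encryption: a keyed
pseudorandom byte shuffle (stand-in for the 3D baker map) and its exact inverse."""
import numpy as np

def permute_bytes(data: bytes, key: int, decrypt: bool = False) -> bytes:
    idx = np.random.default_rng(key).permutation(len(data))
    arr = np.frombuffer(data, dtype=np.uint8)
    out = np.empty_like(arr)
    if decrypt:
        out[idx] = arr            # undo the permutation
    else:
        out = arr[idx]            # apply the permutation
    return out.tobytes()

payload = b"GIF89a...frame data (any losslessly coded image bytes)"
cipher = permute_bytes(payload, key=1234)
plain = permute_bytes(cipher, key=1234, decrypt=True)
print(plain == payload)
```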

  4. Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests

    Golparvar-Fard, Mani; Balali, Vahid; de la Garza, Jesus M.

    2013-01-01

    This dataset was collected as part of research work on segmentation and recognition of highway assets in images and video. The research is described in detail in the Journal of Computing in Civil Engineering (ASCE) paper "Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests". The dataset includes 12 highway asset categories and 3 different datasets divided into three groups: (a) Ground Truth images with #_#_s_GT.jpg filename...

  5. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. In this paper they describe this algorithm and present scatter correction results from human and chest phantom studies

  6. Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    Niclass, Cristiano; Rochas, Alexis; Besse, Pierre-André; Charbon, Edoardo

    2005-01-01

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment are needed. Millimetric depth accuracies can b...

  7. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, at the boundary of the field-of-view (FOV) of the PET tomograph they measure a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As a typical PET tomograph contains a multitude of detectors, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
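
    The MLEM update itself is compact and can be shown on a toy system with an explicit system matrix; real 3D PET implementations use on-the-fly projectors instead, and the matrix and counts below are synthetic.

```python
"""One MLEM loop for a toy linear model y = A @ x
(rows = lines of response, columns = image voxels)."""
import numpy as np

def mlem(A, y, n_iter=50):
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12) # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

rng = np.random.default_rng(1)
A = rng.random((200, 64))                   # toy system matrix
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0     # noisy "sinogram"
print(np.round(mlem(A, y)[:5], 2))
```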

  8. A LabVIEW based user-friendly nano-CT image alignment and 3D reconstruction platform

    Wang, Shenghao; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray nanometer computed tomography (nano-CT) offers applications and opportunities in many scientific research and industrial areas. Here we present a user-friendly and fast LabVIEW-based package that runs, after acquisition of the raw projection images, a procedure to obtain the inner structure of the sample under analysis. First, a reliable image alignment procedure fixes possible misalignments among the image series due to mechanical errors, thermal expansion and other external contributions; then a novel fast parallel-beam 3D reconstruction performs the tomographic reconstruction. The remarkably improved reconstruction after image calibration confirms the fundamental role of the image alignment procedure. It minimizes blurring and additional streaking artifacts present in a reconstructed slice that cause loss of information and spurious structures in the observed material. The nano-CT image alignment and 3D reconstruction LabVIEW package, by significantly reducing data processing, makes faster and easier th...

  9. Development and Implementation of a Web-Enabled 3D Consultation Tool for Breast Augmentation Surgery Based on 3D-Image Reconstruction of 2D Pictures

    de Heras Ciechomski, Pablo; Constantinescu, Mihai; Garcia, Jaime; Olariu, Radu; Dindoyal, Irving; Le Huu, Serge; Reyes, Mauricio

    2012-01-01

    Background: Producing a rich, personalized Web-based consultation tool for plastic surgeons and patients is challenging. Objective: (1) To develop a computer tool that allows individual reconstruction and simulation of 3-dimensional (3D) soft tissue from ordinary digital photos of breasts, (2) to implement a Web-based, worldwide-accessible preoperative surgical planning platform for plastic surgeons, and (3) to validate this tool through a quality control analysis by comparing 3D laser scans of...

  10. Reconstruction of lava fields based on 3D and conventional images. Arenal volcano, Costa Rica.

    Horvath, S.; Duarte, E.; Fernandez, E.

    2007-05-01

    ..., chemical composition, type of lava, velocity, etc. With all this information and photographs, real, visual and topographic images of the position and character of the 1990s and 2000s lava flows were obtained. An illustrative poster will be presented along with this abstract to show the construction process of such a tool. Moreover, 3D animations will be presented in the mentioned poster.

  11. Web-based interactive visualization of 3D video mosaics using X3D standard

    CHON Jaechoon; LEE Yang-Won; SHIBASAKI Ryosuke

    2006-01-01

    We present a method of 3D image mosaicing for real 3D representation of roadside buildings, and implement a Web-based interactive visualization environment for the 3D video mosaics created by 3D image mosaicing. The 3D image mosaicing technique developed in our previous work is a very powerful method for creating textured 3D-GIS data without excessive data processing like the laser or stereo system. For the Web-based open access to the 3D video mosaics, we build an interactive visualization environment using X3D, the emerging standard of Web 3D. We conduct the data preprocessing for 3D video mosaics and the X3D modeling for textured 3D data. The data preprocessing includes the conversion of each frame of 3D video mosaics into concatenated image files that can be hyperlinked on the Web. The X3D modeling handles the representation of concatenated images using necessary X3D nodes. By employing X3D as the data format for 3D image mosaics, the real 3D representation of roadside buildings is extended to the Web and mobile service systems.

  12. Ball-scale based hierarchical multi-object recognition in 3D medical images

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  13. Acquisition and applications of 3D images

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, which is a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images using a computer, we can use the data for creating fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  14. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space, that is, not single image pixels get classified, but voxels which carry geometric, textural and color information collected from the airborne oblique images and derived products like point clouds from dense image matching. One method is supervised, i.e. relies on training data provided by an operator. We use Random Trees for the actual training and prediction tasks. The second method is unsupervised, thus does not ask for any user interaction. We formulate this classification task as a Markov-Random-Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is also reflected in the overall classification accuracy; it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined more unambiguously like in the Enschede area, results are much better (85% vs. 78%). In conclusion the results are acceptable, also taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.

  15. ICER-3D Hyperspectral Image Compression Software

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
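
    ICER-3D's front end is a 3D wavelet decomposition of the hyperspectral cube. The snippet below shows only that transform step on a synthetic cube using PyWavelets; the specific decomposition structure, context modeler, entropy coder and error-containment partitioning of ICER-3D are not reproduced, and the wavelet and level choices are illustrative.

```python
"""3D multilevel wavelet decomposition of a synthetic hyperspectral cube
(bands x rows x cols) with PyWavelets."""
import numpy as np
import pywt

cube = np.random.rand(16, 64, 64).astype(np.float32)   # bands x rows x cols
coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)    # 3D multilevel DWT
approx = coeffs[0]                                      # coarse approximation band
print(approx.shape, len(coeffs) - 1, "detail levels")

# The transform is invertible: the inverse reconstructs the cube (up to float error).
recon = pywt.waverecn(coeffs, wavelet="db2")
print(np.allclose(recon, cube, atol=1e-5))
```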

  16. Precise Depth Image Based Real-Time 3D Difference Detection

    Kahn, Svenja

    2014-01-01

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the ...

  17. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
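
    A hedged sketch of the first step is shown below: a standard wavelet MAD noise estimate for a depth image, optionally modulated per pixel by luminance (higher luminance, stronger reflected signal, lower expected depth noise). The inverse-square-root luminance mapping is an assumption for illustration, not the mapping or fuzzy membership functions used in the paper.

```python
"""Wavelet MAD noise estimate for a depth image, scaled by luminance."""
import numpy as np
import pywt

def mad_noise_sigma(depth):
    # Donoho's robust estimator from the finest diagonal detail subband.
    _, (_, _, hh) = pywt.dwt2(depth, "db1")
    return np.median(np.abs(hh)) / 0.6745

def per_pixel_sigma(depth, luminance, eps=1e-6):
    sigma0 = mad_noise_sigma(depth)
    lum = luminance / (luminance.mean() + eps)
    return sigma0 / np.sqrt(lum + eps)      # assumed inverse-sqrt dependence

depth = np.random.normal(2.0, 0.05, (128, 128))   # synthetic noisy depth map (m)
lum = np.random.uniform(0.2, 1.0, (128, 128))     # synthetic luminance image
print(round(mad_noise_sigma(depth), 4), per_pixel_sigma(depth, lum).shape)
```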

  18. Real-time 3D millimeter wave imaging based FMCW using GGD focal plane array as detectors

    Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-03-01

    Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was previously studied using a single-detector scanning imaging system. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors. Each point on the object corresponds to a point in the image and includes the distance information. This will enable 3D MMW imaging. The radar system requires that the millimeter wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency modulated continuous wave (FMCW), the detected signal resulting from heterodyne detection gives the object's depth information according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
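
    The FMCW ranging relation behind the depth information is the standard linear mapping from beat (difference) frequency to distance, R = c · f_beat · T_chirp / (2B). The bandwidth and chirp duration below are illustrative values only, not the system parameters of the paper.

```python
"""FMCW chirp-radar ranging: beat frequency -> target distance."""
C = 3.0e8          # speed of light, m/s
B = 6.0e9          # chirp bandwidth, Hz (illustrative)
T_CHIRP = 1.0e-3   # chirp duration, s (illustrative)

def beat_to_range(f_beat_hz):
    return C * f_beat_hz * T_CHIRP / (2.0 * B)

for f_b in (100e3, 400e3, 800e3):
    print(f"f_beat = {f_b/1e3:.0f} kHz -> R = {beat_to_range(f_b):.2f} m")
```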

  19. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    This paper presents a new approach capable of 3D image segmentation and objects' surface reconstruction. The main advantages of the method are: large capture range; quick segmentation of a 3D scene/image into regions; multiple 3D objects reconstruction. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. Then the surface is evolved by the geometric heat differential equation toward the force's direction. The penalty function is defined to stop the evolvement of those surface patches whose normal vectors encountered the object's surface. On the basis of the theoretical model, a forward-difference algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and calculation complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.

  20. Cloud-Based Geospatial 3D Image Spaces—A Powerful Urban Model for the Smart City

    Stephan Nebiker

    2015-10-01

    In this paper, we introduce the concept and an implementation of geospatial 3D image spaces as a new type of native urban model. 3D image spaces are based on collections of georeferenced RGB-D imagery. This imagery is typically acquired using multi-view stereo mobile mapping systems capturing dense sequences of street level imagery. Ideally, image depth information is derived using dense image matching. This delivers a very dense depth representation and ensures the spatial and temporal coherence of radiometric and depth data. This results in a high-definition WYSIWYG (“what you see is what you get”) urban model, which is intuitive to interpret and easy to interact with, and which provides powerful augmentation and 3D measuring capabilities. Furthermore, we present a scalable cloud-based framework for generating 3D image spaces of entire cities or states and a client architecture for their web-based exploitation. The model and the framework strongly support the smart city notion of efficiently connecting the urban environment and its processes with experts and citizens alike. In the paper we particularly investigate quality aspects of the urban model, namely the obtainable georeferencing accuracy and the quality of the depth map extraction. We show that our image-based georeferencing approach is capable of improving the original direct georeferencing accuracy by an order of magnitude and that the presented new multi-image matching approach is capable of providing high accuracies along with a significantly improved completeness of the depth maps.

  1. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis due to refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcification of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed by the use of an algorithm newly developed by our group. This shows a distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of the continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and the skin radiation dose.

  2. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    Ando, Masami [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan)], E-mail: msm-ando@rs.noda.tus.ac.jp; Bando, Hiroko [Tsukuba University (Japan); Tokiko, Endo; Ichihara, Shu [Nagoya Medical Center (Japan); Hashimoto, Eiko [GUAS (Japan); Hyodo, Kazuyuki [KEK (Japan); Kunisada, Toshiyuki [Okayama University (Japan); Li Gang [BSRF (China); Maksimenko, Anton [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan); KEK (Japan); Mori, Kensaku [Nagoya University (Japan); Shimao, Daisuke [IPU (Japan); Sugiyama, Hiroshi [KEK (Japan); Yuasa, Tetsuya [Yamagata University (Japan); Ueno, Ei [Tsukuba University (Japan)

    2008-12-15

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis due to refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcification of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed by the use of an algorithm newly developed by our group. This shows a distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of the continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and the skin radiation dose.

  3. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but they have not been applied on plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold-standard on 1205 cross-sectional slices of 5 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
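
    The validation metrics quoted above can be computed for binary 3D segmentations as sketched below. This is a simple NumPy/SciPy illustration with synthetic masks and isotropic 1 mm voxels, not the authors' evaluation code.

```python
"""Dice similarity coefficient and symmetric Hausdorff distance for 3D masks."""
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    pa = np.argwhere(a) * np.asarray(spacing)   # voxel coordinates in mm
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a);             b[9:21, 8:20, 8:20] = True   # shifted by 1 voxel
print(f"DSC = {dice(a, b):.3f}, HD = {hausdorff(a, b):.1f} mm")
```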

  4. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Po-Chia Yeh

    2012-08-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  5. Dixon imaging-based partial volume correction improves quantification of choline detected by breast 3D-MRSI

    Minarikova, Lenka; Gruber, Stephan; Bogner, Wolfgang; Trattnig, Siegfried; Chmelik, Marek [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, MR Center of Excellence, Vienna (Austria); Pinker-Domenig, Katja; Baltzer, Pascal A.T.; Helbich, Thomas H. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria)

    2014-09-14

    Our aim was to develop a partial volume (PV) correction method of choline (Cho) signals detected by breast 3D-magnetic resonance spectroscopic imaging (3D-MRSI), using information from water/fat-Dixon MRI. Following institutional review board approval, five breast cancer patients were measured at 3 T. 3D-MRSI (1 cm³ resolution, duration ≈11 min) and Dixon MRI (1 mm³, ≈2 min) were measured in vivo and in phantoms. Glandular/lesion tissue was segmented from water/fat-Dixon MRI and transformed to match the resolution of 3D-MRSI. The resulting PV values were used to correct Cho signals. Our method was validated on a two-compartment phantom (choline/water and oil). PV values were correlated with the spectroscopic water signal. Cho signal variability, caused by partial-water/fat content, was tested in 3D-MRSI voxels located in/near malignant lesions. Phantom measurements showed good correlation (r = 0.99) with quantified 3D-MRSI water signals, and better homogeneity after correction. The dependence of the quantified Cho signal on the water/fat voxel composition was significantly (p < 0.05) reduced using Dixon MRI-based PV correction, compared to the original uncorrected data (1.60-fold to 3.12-fold) in patients. The proposed method allows quantification of the Cho signal in glandular/lesion tissue independent of water/fat composition in breast 3D-MRSI. This can improve the reproducibility of breast 3D-MRSI, particularly important for therapy monitoring. (orig.)
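
    A minimal sketch of the partial-volume idea follows: scale each MRSI voxel's choline signal by the fraction of glandular/lesion tissue inside it, with that fraction obtained by averaging a high-resolution Dixon-based segmentation over the coarse MRSI grid. The function names, downsampling factor and threshold are illustrative assumptions, not the authors' implementation.

```python
"""Partial-volume correction of a coarse MRSI grid using a fine Dixon mask."""
import numpy as np

def pv_correct(cho, glandular_mask_highres, factor=10, min_pv=0.05):
    # Average the 1-mm binary mask over each 1-cm MRSI voxel -> PV fraction.
    z, y, x = glandular_mask_highres.shape
    pv = glandular_mask_highres.reshape(
        z // factor, factor, y // factor, factor, x // factor, factor
    ).mean(axis=(1, 3, 5))
    corrected = np.where(pv > min_pv, cho / np.maximum(pv, min_pv), 0.0)
    return corrected, pv

cho = np.random.rand(4, 8, 8)                     # synthetic MRSI grid
mask = np.random.rand(40, 80, 80) > 0.6           # synthetic Dixon-based segmentation
cho_corr, pv = pv_correct(cho, mask)
print(pv.shape, cho_corr.shape)
```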

  6. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization, and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization error and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15 respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic images estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
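
    The PCA motion-model step can be sketched as follows: flatten the displacement vector fields (DVFs) obtained by registering each 4DCBCT phase to a reference phase, run PCA, and represent any respiratory state as the mean DVF plus a weighted sum of eigenvectors. The array sizes are toy values, and the iterative optimization of the coefficients against cone-beam projections is not shown.

```python
"""PCA motion model over per-phase displacement vector fields."""
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """dvfs: (n_phases, nz, ny, nx, 3) displacement fields."""
    n_phases = dvfs.shape[0]
    X = dvfs.reshape(n_phases, -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # mean DVF and principal motion modes

def synthesize_dvf(mean, modes, coeffs, shape):
    return (mean + coeffs @ modes).reshape(shape)

dvfs = np.random.randn(10, 16, 32, 32, 3) * 2.0          # toy 10-phase DVFs
mean, modes = build_pca_motion_model(dvfs)
dvf_new = synthesize_dvf(mean, modes, np.array([1.5, -0.5]), dvfs.shape[1:])
print(dvf_new.shape)
```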

  7. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Xiaozhen Ren

    2014-01-01

    Three-dimensional (3D) imaging technology based on an antenna array is one of the most important 3D synthetic aperture radar (SAR) high-resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles at the same range resolution cell, which is different from the conventional squint SAR. However, after theoretical analysis and formula derivation, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transforms and multiplications and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  8. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  9. Automated segmentation method for the 3D ultrasound carotid image based on geometrically deformable model with automatic merge function

    Li, Xiang; Wang, Zigang; Lu, Hongbing; Liang, Zhengrong

    2002-05-01

    Stenosis of the carotid is the most common cause of stroke. Accurate measurement of the volume of the carotid and visualization of its shape are helpful in improving diagnosis and minimizing the variability of assessment of carotid disease. Due to the complex anatomic structure of the carotid, it is mandatory to define the initial contours in every slice, which is very difficult and usually requires tedious manual operations. The purpose of this paper is to propose an automatic segmentation method, which automatically provides the contour of the carotid from the 3-D ultrasound image and requires minimal user interaction. In this paper, we developed a Geometrically Deformable Model (GDM) with an automatic merge function. In our algorithm, only two initial contours in the topmost slice and four parameters are needed in advance. A simulated 3-D ultrasound image was used to test our algorithm. The 3-D display of the carotid obtained by our algorithm showed an almost identical shape to the true 3-D carotid image. In addition, experimental results also demonstrated that the error of the carotid volume measurement based on the three different initial contours is less than 1%, and the method is very fast.

  10. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtraction angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. Contrast medium-enhanced 3D DSA of the target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients, and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lowering procedure risks and increasing treatment safety. PMID:27512846

  11. Metrological characterization of 3D imaging devices

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standards for the metrological parameters that identify its capability to capture a real scene. For this reason, several national and international organizations have been developing protocols over the last ten years for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also works on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper reviews the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or some derived parameters can be obtained (e.g. angles between planes, distances between the barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
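
    As a small, generic illustration of this kind of object-based verification (not any particular protocol's procedure), the sketch below fits a sphere to measured points by linear least squares and reports the radial deviations from the certified radius.

    ```python
    import numpy as np

    def fit_sphere(points):
        """Linear least-squares sphere fit.

        Uses |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in the centre c
        and in the auxiliary unknown k = r^2 - |c|^2.
        points : (N, 3) measured coordinates
        returns: centre (3,), radius (float)
        """
        A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, k = sol[:3], sol[3]
        radius = np.sqrt(k + centre.dot(centre))
        return centre, radius

    def sphere_deviations(points, centre, radius):
        """Signed radial deviation of each measured point from the fitted sphere."""
        return np.linalg.norm(points - centre, axis=1) - radius
    ```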

  12. Design and Characterization of a Current Assisted Photo Mixing Demodulator for Tof Based 3d Cmos Image Sensor

    Hossain, Quazi Delwar

    2010-01-01

    Due to the increasing demand for 3D vision systems, many recent efforts have concentrated on achieving complete 3D information analogous to human vision. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian safety, biomedical devices, robotics and industrial control. To this end, several approaches to producing 3D images have been reported, including stereovision, object distance...

  13. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-01

    Diffuse optical tomography (DOT) is a relatively low cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality image a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.
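
    The published DRO-DOT code is not reproduced here; the sketch below only illustrates the generic two-step idea under a linear model y = Ax: estimate an image support on a coarse voxel basis, then solve a sparsity-regularized problem (plain ISTA) restricted to that support. The matrix names, the support threshold and the ISTA solver are all illustrative assumptions.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def two_step_reconstruction(A, y, coarsen, support_frac=0.1,
                                lam=1e-3, n_iter=200):
        """Generic two-step sparse reconstruction (illustrative only).

        A       : (M, N) sensitivity (forward) matrix
        y       : (M,)   measurements
        coarsen : (N, P) matrix mapping P coarse voxels to N fine voxels
        """
        # Step 1: coarse reconstruction to locate the support
        A_coarse = A @ coarsen                        # (M, P)
        x_coarse, *_ = np.linalg.lstsq(A_coarse, y, rcond=None)
        fine = coarsen @ np.abs(x_coarse)             # spread back to the fine grid
        support = fine >= support_frac * fine.max()   # keep the strongest voxels

        # Step 2: ISTA restricted to the estimated support
        As = A[:, support]
        x_s = np.zeros(As.shape[1])
        step = 1.0 / np.linalg.norm(As, 2) ** 2       # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = As.T @ (As @ x_s - y)
            x_s = soft_threshold(x_s - step * grad, step * lam)

        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x
    ```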

  14. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Stella Macheridis

    2015-01-01

    During the last twenty years archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods from this digital change have also strongly impacted archaeology. Image-based 3D modeling is becoming more common when documenting archaeological features but is still not implemented as standard in field excavation projects. When it comes to integrating zooarchaeological perspectives in the interpretational process in the field...

  15. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  16. A density-based segmentation for 3D images, an application for X-ray micro-tomography.

    Tran, Thanh N; Nguyen, Thanh T; Willemsz, Tofan A; van Kessel, Gijs; Frijlink, Henderik W; van der Voort Maarschalk, Kees

    2012-05-01

    Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes in the presence of noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) because it uses the coordinate system of the image data. Secondly, the revised algorithm solves the instability of boundary detection present in the original DBSCAN. For broader applications, the image dataset can be an ordinary 3D image or, in general, the classification result of another type of image data, e.g. a multivariate image. PMID:22502607
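
    As a minimal illustration of how the image lattice can replace a pairwise distance table, the sketch below grows density-connected regions over a 3D binary volume, treating the 26-neighbourhood as the epsilon-neighbourhood. It is a generic sketch, not the authors' revised algorithm, and the default min_pts value is an arbitrary choice.

    ```python
    import numpy as np
    from collections import deque

    def grid_dbscan(volume, min_pts=9):
        """DBSCAN-style clustering on a 3D binary volume.

        The voxel lattice plays the role of the spatial index: the epsilon
        neighbourhood is simply the 26-connected neighbourhood, so no pairwise
        distance table is ever built.
        volume  : (Z, Y, X) boolean array of foreground voxels
        min_pts : minimum number of neighbours for a voxel to be a core point
        returns : (Z, Y, X) integer label map, 0 = noise/background
        """
        offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                   for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
        labels = np.zeros(volume.shape, dtype=int)
        current = 0

        def neighbours(p):
            for dz, dy, dx in offsets:
                q = (p[0] + dz, p[1] + dy, p[2] + dx)
                if all(0 <= q[i] < volume.shape[i] for i in range(3)) and volume[q]:
                    yield q

        for p in zip(*np.nonzero(volume)):
            if labels[p] != 0:
                continue
            nbrs = list(neighbours(p))
            if len(nbrs) < min_pts:
                continue                      # not a core point (left as noise for now)
            current += 1
            labels[p] = current
            queue = deque(nbrs)
            while queue:                      # expand the density-connected set
                q = queue.popleft()
                if labels[q] == 0:
                    labels[q] = current
                    q_nbrs = list(neighbours(q))
                    if len(q_nbrs) >= min_pts:
                        queue.extend(q_nbrs)
        return labels
    ```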

  17. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    Mata Pavia, J.; Charbon, E.; Wolf, M.

    2011-01-01

    An imager for optical tomography was designed based on a detector with 128x128 single-photon pixels that included a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time resolved measurements, a new contact-less setup has been conceived in w

  18. A parallelized surface extraction algorithm for large binary image data sets based on an adaptive 3D delaunay subdivision strategy.

    Ma, Yingliang; Saetzler, Kurt

    2008-01-01

    In this paper we describe a novel 3D subdivision strategy to extract the surface of binary image data. This iterative approach generates a series of surface meshes that capture different levels of detail of the underlying structure. At the highest level of detail, the resulting surface mesh generated by our approach uses only about 10% of the triangles required by the marching cubes algorithm (MC), even in settings where almost no image noise is present. Our approach also eliminates the so-called "staircase effect", which voxel-based algorithms like MC are likely to show, particularly if non-uniformly sampled images are processed. Finally, we show how the presented algorithm can be parallelized by subdividing 3D image space into rectilinear blocks of subimages. As the algorithm scales very well with an increasing number of processors in a multi-threaded setting, this approach is suited to processing large image data sets of several gigabytes. Although the presented work is still computationally more expensive than simple voxel-based algorithms, it produces fewer surface triangles while capturing the same level of detail, is more robust to image noise and eliminates the above-mentioned "staircase" effect in anisotropic settings. These properties make it particularly useful for biomedical applications, where these conditions are often encountered. PMID:17993710
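
    The adaptive Delaunay subdivision itself is not reproduced; the sketch below only shows one way to organise the rectilinear block decomposition used for parallelisation, with a toy per-block step (surface-voxel detection) standing in for the real extraction routine. Block size, worker count and the absence of ghost-voxel overlap are simplifying assumptions.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def surface_voxels(block, origin):
        """Toy per-block step: return global coordinates of foreground voxels
        with at least one background 6-neighbour.  A real implementation would
        run the adaptive Delaunay subdivision on the block here instead."""
        block = block.astype(bool)
        padded = np.pad(block, 1, mode="constant", constant_values=False)
        interior = (padded[2:, 1:-1, 1:-1] & padded[:-2, 1:-1, 1:-1] &
                    padded[1:-1, 2:, 1:-1] & padded[1:-1, :-2, 1:-1] &
                    padded[1:-1, 1:-1, 2:] & padded[1:-1, 1:-1, :-2])
        surface = block & ~interior
        return np.argwhere(surface) + np.asarray(origin)

    def parallel_extract(volume, block=(64, 64, 64), workers=4):
        """Split the volume into rectilinear sub-images and process them in
        parallel.  In practice blocks would overlap by one voxel so that the
        per-block meshes can be stitched, and on some platforms this call must
        sit under an `if __name__ == "__main__":` guard."""
        jobs = []
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for z in range(0, volume.shape[0], block[0]):
                for y in range(0, volume.shape[1], block[1]):
                    for x in range(0, volume.shape[2], block[2]):
                        sub = volume[z:z + block[0], y:y + block[1], x:x + block[2]]
                        jobs.append(pool.submit(surface_voxels, sub, (z, y, x)))
            patches = [j.result() for j in jobs]
        return np.vstack(patches) if patches else np.empty((0, 3), dtype=int)
    ```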

  19. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of the raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process addressing misalignment problems among image series, due to mechanical manufacturing errors, thermal expansion, and other external factors, has been implemented, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. We obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after the image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and the additional streaking artifacts that would otherwise be present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the users' activity faster and easier during experimental runs.

  20. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine

    The aim of this work was to use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification
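
    The GPU implementation is of course not shown here; the plain-NumPy sketch below illustrates a global 3D gamma evaluation on a regular grid by searching a local neighbourhood. The 3%/3 mm criteria are illustrative defaults, and the wrap-around of np.roll at the volume borders is ignored in this sketch.

    ```python
    import numpy as np

    def gamma_index(reference, evaluated, spacing, dose_tol=0.03,
                    dist_tol=3.0, search_radius=3.0):
        """Global 3D gamma index on a regular grid (CPU sketch).

        reference, evaluated : 3D dose arrays on the same grid (Gy)
        spacing      : voxel size (mm) along each axis, e.g. (1.0, 1.0, 1.0)
        dose_tol     : dose-difference criterion as a fraction of the max dose
        dist_tol     : distance-to-agreement criterion (mm)
        search_radius: how far (mm) to search for agreement around each voxel
        """
        dd = dose_tol * reference.max()
        rng = [np.arange(-int(search_radius // s), int(search_radius // s) + 1)
               for s in spacing]
        gamma_sq = np.full(reference.shape, np.inf)
        for dz in rng[0]:
            for dy in rng[1]:
                for dx in rng[2]:
                    dist2 = ((dz * spacing[0]) ** 2 + (dy * spacing[1]) ** 2 +
                             (dx * spacing[2]) ** 2)
                    if dist2 > search_radius ** 2:
                        continue
                    shifted = np.roll(evaluated, (dz, dy, dx), axis=(0, 1, 2))
                    g2 = (shifted - reference) ** 2 / dd ** 2 + dist2 / dist_tol ** 2
                    gamma_sq = np.minimum(gamma_sq, g2)
        return np.sqrt(gamma_sq)

    # Pass rate: fraction of voxels (usually above a low-dose threshold)
    # with gamma <= 1.
    ```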

  1. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) based on total variation regularization and the piecewise-constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into the CT reconstruction problem and split into three independent subproblems of the optimization by introducing auxiliary variables. The new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computational cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. In application-oriented research, both the computational efficiency and the achievable resolution of the algorithm should still be investigated and optimized carefully. PMID:26756406
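
    For reference, the second-order TGV-regularized reconstruction problem referred to above is commonly written as follows (in the form of Bredies et al.; the exact weights and splitting used by the authors may differ):

    ```latex
    \min_{u}\ \tfrac{1}{2}\,\lVert A u - b \rVert_2^2 \;+\; \mathrm{TGV}_{\alpha}^{2}(u),
    \qquad
    \mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w}\ \alpha_1 \int_{\Omega} \lvert \nabla u - w \rvert \, dx
    \;+\; \alpha_0 \int_{\Omega} \lvert \mathcal{E}(w) \rvert \, dx ,
    ```

    where A is the projection operator, b the measured projection data and \mathcal{E}(w) = \tfrac{1}{2}(\nabla w + \nabla w^{T}) the symmetrized derivative; an alternating direction scheme then splits the problem into subproblems in u, w and the auxiliary variables, each of which can be solved efficiently (e.g. via FFTs or shrinkage).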

  2. A new navigation approach of terrain contour matching based on 3-D terrain reconstruction from onboard image sequence

    2010-01-01

    This article presents a passive navigation method of terrain contour matching that reconstructs the 3-D terrain from the image sequence acquired by the onboard camera. To achieve automated and simultaneous processing of the image sequence for navigation, a correspondence registration method based on control-point tracking is proposed, which tracks sparse control points through the whole image sequence and uses them as correspondences in the relative geometry solution. In addition, a key-frame selection method based on the image overlap ratio and intersection angles is explored, and the resulting requirements for the camera system configuration are provided. The proposed method also includes an optimal local homography estimation algorithm based on the control points, which helps correctly predict the points to be matched and their corresponding velocities. The real-time 3-D terrain of the trajectory thus reconstructed is matched with the reference terrain map, and the result provides the navigation information. A digital simulation experiment and a real-image experiment have verified the proposed method.

  3. 2D-3D registration for prostate radiation therapy based on a statistical model of transmission images

    Purpose: In external beam radiation therapy of pelvic sites, patient setup errors can be quantified by registering 2D projection radiographs acquired during treatment to a 3D planning computed tomograph (CT). We present a 2D-3D registration framework based on a statistical model of the intensity values in the two imaging modalities. Methods: The model assumes that intensity values in projection radiographs are independently but not identically distributed due to the nonstationary nature of photon counting noise. Two probability distributions are considered for the intensity values: Poisson and Gaussian. Using maximum likelihood estimation, two similarity measures, maximum likelihood with a Poisson distribution (MLP) and maximum likelihood with a Gaussian distribution (MLG), are derived. Further, we investigate the merit of the model-based registration approach for data obtained with current imaging equipment and doses by comparing the performance of the derived similarity measures to that of the Pearson correlation coefficient (ICC) on accurately collected data of an anthropomorphic phantom of the pelvis and on patient data. Results: Registration accuracy was similar for all three similarity measures and surpassed current clinical requirements of 3 mm for pelvic sites. For pose determination experiments with a kilovoltage (kV) cone-beam CT (CBCT) and kV projection radiographs of the phantom in the anterior-posterior (AP) view, registration accuracies were 0.42 mm (MLP), 0.29 mm (MLG), and 0.29 mm (ICC). For kV CBCT and megavoltage (MV) AP portal images of the same phantom, registration accuracies were 1.15 mm (MLP), 0.90 mm (MLG), and 0.69 mm (ICC). Registration of a kV CT and MV AP portal images of a patient was successful in all instances. Conclusions: The results indicate that high registration accuracy is achievable with multiple methods, including methods that are based on a statistical model of a 3D CT and 2D projection images.
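
    Up to terms that do not depend on the pose, the two likelihood-based similarity measures take the familiar forms below (the exact parameterization used by the authors is not reproduced here):

    ```latex
    \mathrm{MLP}(\theta) \;=\; \sum_{i} \bigl[\, y_i \ln \lambda_i(\theta) \;-\; \lambda_i(\theta) \,\bigr],
    \qquad
    \mathrm{MLG}(\theta) \;=\; -\sum_{i} \Bigl[ \frac{\bigl(y_i - \mu_i(\theta)\bigr)^{2}}{2\,\sigma_i^{2}(\theta)}
    \;+\; \tfrac{1}{2} \ln \sigma_i^{2}(\theta) \Bigr],
    ```

    where y_i is the measured intensity in pixel i and \lambda_i(\theta) (respectively \mu_i(\theta), \sigma_i^2(\theta)) is the mean (and variance) predicted from the planning CT at pose \theta; the pose estimate is the maximizer of the chosen measure.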

  4. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
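
    The quantitative SPECT reconstruction step is not shown here; as a small sketch of the convolution step only, the snippet below convolves a cumulated-activity map with a dose point kernel in the Fourier domain. The kernel grid and units are assumed to match the activity map, and scipy.signal.fftconvolve handles the zero padding so there is no circular wrap-around.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def dose_from_activity(cumulated_activity, dose_kernel):
        """Voxel dose by 3D convolution of a cumulated-activity map with a
        dose point kernel, evaluated in the Fourier domain.

        cumulated_activity : 3D array of decays per voxel (e.g. Bq*s)
        dose_kernel        : 3D array, absorbed dose per decay at each spatial
                             offset, centred in the middle of the array
        """
        # mode="same" keeps the output on the activity grid
        return fftconvolve(cumulated_activity, dose_kernel, mode="same")
    ```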

  5. Hyper-hemispheric lens distortion model for 3D-imaging SPAD-array-based applications

    Pernechele, Claudio; Villa, Federica A.

    2015-09-01

    Panoramic omnidirectional lenses have the typical drawback of obscuring the frontal view, producing the classic "donut-shaped" image in the focal plane. We realized a panoramic lens in which the frontal field is made available for imaging in the focal plane together with the panoramic field, producing a FoV of 360° in azimuth and 270° in elevation; it therefore has the capabilities of a fish-eye lens plus those of a panoramic lens, and we call it a hyper-hemispheric lens. We built and tested an all-spherical hyper-hemispheric lens. The all-spherical configuration suffers from the typical issues of all ultra-wide-angle lenses: there is a large distortion at high view angles. The fundamental origin of the optical problems resides in the fact that the chief ray angles on the object side are not preserved when passing through the optics preceding the aperture stop (fore-optics). This effect produces an image distortion in the focal plane, with the focal length changing along the elevation angles. Moreover, the entrance pupil shifts at large angles, where the paraxial approximation is no longer valid, and tracing the rays appropriately requires some effort from the optical designer. It has to be noted that the distortion is not a source-point aberration: it is present also in well-corrected optical lenses. Image distortion may be partially corrected using an aspheric surface. We describe here how we correct it for our original hyper-hemispheric lens by designing an aspheric surface within the optical train, optimized for Single Photon Avalanche Diode (SPAD) array-based imaging applications.

  6. 3D Assessment of Mandibular Growth Based on Image Registration: A Feasibility Study in a Rabbit Model

    I. Kim

    2014-01-01

    Background. Our knowledge of mandibular growth mostly derives from cephalometric radiography, which has inherent limitations due to the two-dimensional (2D) nature of the measurement. Objective. To assess 3D morphological changes occurring during growth in a rabbit mandible. Methods. Serial cone-beam computerised tomographic (CBCT) images were made of two New Zealand white rabbits, at baseline and eight weeks after surgical implantation of 1 mm diameter metallic spheres as fiducial markers. A third animal acted as an unoperated (no implant) control. CBCT images were segmented and registered in 3D (Implant Superimposition and Procrustes Method), and the remodelling pattern was described using color maps. Registration accuracy was quantified by the maximum of the mean minimum distances and by the Hausdorff distance. Results. The mean error for image registration was 0.37 mm and never exceeded 1 mm. The implant-based superimposition showed that most remodelling occurred at the mandibular ramus, with bone apposition posteriorly and vertical growth at the condyle. Conclusion. We propose a method to quantitatively describe bone remodelling in three dimensions, based on the use of bone implants as fiducial markers and CBCT as the imaging modality. The method is feasible and represents a promising approach for experimental studies, by comparing baseline growth patterns and testing the effects of growth-modification treatments.

  7. 3D Image Synthesis for B—Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-rep objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm based on the AFD-matrix concept is presented for converting an object in 3D space into a 3D image in 3D discrete space.

  8. 3D Model Assisted Image Segmentation

    Jayawardena, Srimal; Hutter, Marcus

    2012-01-01

    The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation...

  9. Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation

    The main thrust for this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are the lower stopping power and photo-fraction which affect both sensitivity and spatial resolution. However, in 3D PET imaging where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels

  10. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    A 3D roadmap provided by pre-operative volumetric data aligned with fluoroscopy helps visualization and navigation in Interventional Cardiology (IC), especially when the contrast agent injection used to highlight the coronary vessels cannot be applied systematically during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for the specific vessel(s) of interest during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid-body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and the corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guide wire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that the distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses against a ground-truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for the difficult case of an occluded vessel without injection of contrast agent.
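
    As a sketch of the cost such a curve-based alignment minimises, the snippet below precomputes a distance transform of the 2D vessel centreline and averages its values at the projections of the 3D points. The `project` camera model is a hypothetical placeholder, and ECG gating, the full ICP loop and any robustness scheme are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def centerline_distance_map(vessel_mask):
        """Distance transform that is zero on the 2D centreline pixels and
        grows away from them; precomputed once per fluoroscopic frame."""
        return distance_transform_edt(~vessel_mask.astype(bool))

    def reprojection_cost(points_3d, pose, project, dist_map):
        """Mean distance-map value at the projections of the 3D vessel points.

        project(points_3d, pose) is a hypothetical camera model returning
        (N, 2) pixel coordinates (row, col); minimising this cost over `pose`
        aligns the 3D centreline with the 2D vessel identification.
        """
        uv = np.round(project(points_3d, pose)).astype(int)
        h, w = dist_map.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < h) & (uv[:, 1] >= 0) & (uv[:, 1] < w)
        if not np.any(inside):
            return np.inf
        return dist_map[uv[inside, 0], uv[inside, 1]].mean()
    ```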

  11. Automatic histogram-based segmentation of white matter hyperintensities using 3D FLAIR images

    Simões, Rita; Slump, Cornelis; Moenninghoff, Christoph; Wanke, Isabel; Dlugaj, Martha; Weimar, Christian

    2012-03-01

    White matter hyperintensities are known to play a role in the cognitive decline experienced by patients suffering from neurological diseases. Therefore, accurately detecting and monitoring these lesions is of importance. Automatic methods for segmenting white matter lesions typically use multimodal MRI data. Furthermore, many methods use a training set to perform a classification task or to determine necessary parameters. In this work, we describe and evaluate an unsupervised segmentation method that is based solely on the histogram of FLAIR images. It approximates the histogram by a mixture of three Gaussians in order to find an appropriate threshold for white matter hyperintensities. We use a context-sensitive Expectation-Maximization method to determine the Gaussian mixture parameters. The segmentation is subsequently corrected for false positives using the knowledge of the location of typical FLAIR artifacts. A preliminary validation with the ground truth on 6 patients revealed a Similarity Index of 0.73 +/- 0.10, indicating that the method is comparable to others in the literature which require multimodal MRI and/or a preliminary training step.
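
    A minimal version of the histogram model is sketched below: a plain 1D EM fit of three Gaussians followed by a threshold derived from the brightest component. The context-sensitive EM and the artifact-based false-positive correction described in the abstract are not reproduced, and the factor k is an illustrative choice rather than the value used in the paper.

    ```python
    import numpy as np

    def fit_gmm_1d(x, n_components=3, n_iter=100):
        """Plain EM for a 1D Gaussian mixture (no spatial context)."""
        x = np.asarray(x, dtype=float)
        mu = np.quantile(x, np.linspace(0.2, 0.8, n_components))  # spread initial means
        var = np.full(n_components, x.var())
        w = np.full(n_components, 1.0 / n_components)
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each sample
            dens = (w / np.sqrt(2 * np.pi * var) *
                    np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: update weights, means and variances
            nk = resp.sum(axis=0)
            w = nk / x.size
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            var = np.maximum(var, 1e-6 * x.var())   # guard against collapse
        return w, mu, var

    def wmh_threshold(flair_intensities, k=2.0):
        """Hyperintensity threshold: mean + k*std of the brightest Gaussian."""
        _, mu, var = fit_gmm_1d(flair_intensities)
        i = np.argmax(mu)
        return mu[i] + k * np.sqrt(var[i])
    ```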

  12. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...

  13. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe pattern and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcam respectively, and then were used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were further merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach. PMID:26560789
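
    For the phase-retrieval step, an N-step phase-shifting sequence with shifts 2*pi*n/N (N = 9 in the setup above) yields the wrapped phase as in the sketch below; phase unwrapping, the projector/camera calibration and the merging of the two views are not shown.

    ```python
    import numpy as np

    def wrapped_phase(images):
        """Wrapped phase from an N-step phase-shifting sequence.

        images : (N, H, W) array, frame n captured with fringe phase shift
                 delta_n = 2*pi*n/N
        returns: (H, W) wrapped phase in (-pi, pi]
        """
        n = images.shape[0]
        delta = 2.0 * np.pi * np.arange(n) / n
        num = np.tensordot(np.sin(delta), images, axes=1)   # sum_n I_n sin(delta_n)
        den = np.tensordot(np.cos(delta), images, axes=1)   # sum_n I_n cos(delta_n)
        return np.arctan2(-num, den)

    # The wrapped phase is subsequently unwrapped and, via the projector/camera
    # calibration, converted into a 3D point cloud for each view.
    ```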

  14. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-01-01

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library were within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying the active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and the 3D image-based planning process were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for the Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against the conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR

  15. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr;

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these approaches depend on a 3D volume reconstruction step during the slice alignment. This is both computationally expensive and makes the convergence of the registration process poorly defined. In this paper our key contribution is a new approach which considers the collective alignment of all slices directly, via shared structure in their intersections, rather than to an estimated 3D volume. We derive an analytical expression for the gradient of the collective similarity of the slices along their intersections, with respect to the 3D location and orientation of each 2D slice. We include examples of the...

  16. Novel methodology for 3D reconstruction of carotid arteries and plaque characterization based upon magnetic resonance imaging carotid angiography data.

    Sakellarios, Antonis I; Stefanou, Kostas; Siogkas, Panagiotis; Tsakanikas, Vasilis D; Bourantas, Christos V; Athanasiou, Lambros; Exarchos, Themis P; Fotiou, Evangelos; Naka, Katerina K; Papafaklis, Michail I; Patterson, Andrew J; Young, Victoria E L; Gillard, Jonathan H; Michalis, Lampros K; Fotiadis, Dimitrios I

    2012-10-01

    In this study, we present a novel methodology that allows reliable segmentation of the magnetic resonance images (MRIs) for accurate fully automated three-dimensional (3D) reconstruction of the carotid arteries and semiautomated characterization of plaque type. Our approach uses active contours to detect the luminal borders in the time-of-flight images and the outer vessel wall borders in the T(1)-weighted images. The methodology incorporates the connecting components theory for the automated identification of the bifurcation region and a knowledge-based algorithm for the accurate characterization of the plaque components. The proposed segmentation method was validated in randomly selected MRI frames analyzed offline by two expert observers. The interobserver variability of the method for the lumen and outer vessel wall was -1.60%±6.70% and 0.56%±6.28%, respectively, while the Williams Index for all metrics was close to unity. The methodology implemented to identify the composition of the plaque was also validated in 591 images acquired from 24 patients. The obtained Cohen's k was 0.68 (0.60-0.76) for lipid plaques, while the time needed to process an MRI sequence for 3D reconstruction was only 30 s. The obtained results indicate that the proposed methodology allows reliable and automated detection of the luminal and vessel wall borders and fast and accurate characterization of plaque type in carotid MRI sequences. These features render the currently presented methodology a useful tool in the clinical and research arena. PMID:22617149

  17. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David;

    2007-01-01

    and finds the optimal hardware and light source setup before investing in expensive equipment and field experiments. It was expected to be a valuable tool to structure the otherwise incomprehensibly large information space and to see relationships between parameter configurations and crop features. Images...... for in the simulation. However, there were exceptions where there were structural differences between the virtual plant and the real plant that were unaccounted for by its category. The test framework was evaluated to be a valuable tool to uncover information from complex data structures....

  18. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr; Rousseau, F.; Glen, O. A.; Barkovich, A. J.; Studholme, Colin

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these...

  19. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables the spectral information and the 3D spatial information of an incoherently illuminated or self-luminous object to be obtained simultaneously. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  20. Miniaturized 3D microscope imaging system

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35x35x105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. With the light-field raw data and the reconstruction program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data-analysis algorithm is needed to precisely distinguish its depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance the pixel utilization efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light-field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600 µm range, show its focal stacks, and determine their 3-D positions.

  1. 3D Buildings Extraction from Aerial Images

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach is that each building can be processed individually, so the parameters of the feature extraction can be tuned more precisely for each area. In the early stage, the presented technique extracts line segments only inside areas specified manually. The rooftop hypothesis is then used to determine a subset of quadrangles that could form building roofs from the set of lines and corners extracted in the previous stage. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This allows an accurate selection of building roofs, removing false-positive ones, and identifies their global 3D coordinates given the camera's internal parameters and coordinates. The last step of the image matching is based on geometrical constraints rather than traditional correlation. Correlation is applied only in some highly restricted areas in order to find coordinates more precisely, significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of Milan aerial images and shows highly accurate results.

  2. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan. Advances and obstacles

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. (author)

  3. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan: advances and obstacles.

    Ohno, Tatsuya; Toita, Takafumi; Tsujino, Kayoko; Uchida, Nobue; Hatano, Kazuo; Nishimura, Tetsuo; Ishikura, Satoshi

    2015-11-01

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. PMID:26265660

  4. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    Li, Lei-Gang; Liang, Jin; Guo, Xiang; Guo, Cheng; Hu, Hao; Tang, Zheng-Zong

    2014-06-01

    In this paper, a new non-contact scheme, based on 3D digital image correlation technology, is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, which are rigidly connected and each calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern are designed to suit the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft's vertical fin all the time, a camera position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orient all the cameras' exterior parameters to a unified coordinate system in real time. In addition, because of the highly inclined camera axes and the vertical camera arrangement, the correlation between the upper-position and lower-position images is weak. In this paper, a new efficient dual-temporal matching method, combining the principle of seed-point spreading, is proposed to achieve the matching of weakly correlated images. A novel system was developed and a simulation test in the laboratory was carried out to verify the proposed scheme.

  5. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    In this paper, a new non-contact scheme, based on 3D digital image correlation technology, is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, which are rigidly connected and each calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern are designed to suit the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft's vertical fin all the time, a camera position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orient all the cameras' exterior parameters to a unified coordinate system in real time. In addition, because of the highly inclined camera axes and the vertical camera arrangement, the correlation between the upper-position and lower-position images is weak. In this paper, a new efficient dual-temporal matching method, combining the principle of seed-point spreading, is proposed to achieve the matching of weakly correlated images. A novel system was developed and a simulation test in the laboratory was carried out to verify the proposed scheme. (paper)

  6. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Stella Macheridis

    2015-12-01

    During the last twenty years archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods from this digital change have also strongly impacted archaeology. Image-based 3D modeling is becoming more common when documenting archaeological features but is still not implemented as standard in field excavation projects. When it comes to integrating zooarchaeological perspectives in the interpretational process in the field, this type of documentation is a powerful tool, especially regarding visualization related to reconstruction and resolution. Also, with the implementation of image-based 3D modeling, the use of digital documentation in the field has been proven to be time- and cost-effective (e.g., De Reu et al. 2014; De Reu et al. 2013; Dellepiane et al. 2013; Verhoeven et al. 2012). Few studies have been published on the digital documentation of faunal remains in archaeological contexts. As a case study, the excavation of the infill of a clay bin from building 102 in the Neolithic settlement of Çatalhöyük is presented. Alongside traditional documentation, the infill was photographed in sequence at every second centimeter of soil removal. The photographs were processed with Agisoft Photoscan. Seven models were made, enabling reconstruction of the excavation of this context. This technique can be a powerful documentation tool, including for recording notes of zooarchaeological significance, such as markers of taphonomic processes. An important methodological advantage in this regard is the potential to measure bones in situ for analysis after excavation.

  7. 3D nanoscale imaging of biological samples with laboratory-based soft X-ray sources

    Dehlinger, Aurélie; Blechschmidt, Anne; Grötzsch, Daniel; Jung, Robert; Kanngießer, Birgit; Seim, Christian; Stiel, Holger

    2015-09-01

    In microscopy, where the theoretical resolution limit depends on the wavelength of the probing light, radiation in the soft X-ray regime can be used to analyze samples that cannot be resolved with visible-light microscopes. In the case of soft X-ray microscopy in the water window, the energy range of the radiation lies between the absorption edges of carbon (at 284 eV, 4.36 nm) and oxygen (543 eV, 2.34 nm). As a result, carbon-based structures, such as biological samples, possess a strong absorption, whereas e.g. water is more transparent to this radiation. Microscopy in the water window therefore allows the structural investigation of aqueous samples with resolutions of a few tens of nanometers and a penetration depth of up to 10 μm. The development of highly brilliant laser-produced plasma sources has enabled the transfer of X-ray microscopy, formerly bound to synchrotron sources, to the laboratory, which opens access to this method for a broader scientific community. The Laboratory Transmission X-ray Microscope at the Berlin Laboratory for innovative X-ray technologies (BLiX) runs with a laser-produced nitrogen plasma that emits radiation in the soft X-ray regime. This high penetration depth can be exploited to analyze biological samples in their natural state and from several projection angles. The obtained tomogram is the key to a more precise and global analysis of samples originating from various fields of life science.

  8. An object-based approach to image/video-based synthesis and processing for 3-D and multiview televisions

    Chan, SC; Ng, KT; Ho, KL; Gan, ZF; Shum, HY

    2009-01-01

    This paper proposes an object-based approach to a class of dynamic image-based representations called "plenoptic videos," where the plenoptic video sequences are segmented into image-based rendering (IBR) objects each with its image sequence, depth map, and other relevant information such as shape and alpha information. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. Moreover, the rendering...

  9. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  10. Significant acceleration of 2D-3D registration-based fusion of ultrasound and x-ray images by mesh-based DRR rendering

    Kaiser, Markus; John, Matthias; Borsdorf, Anja; Mountney, Peter; Ionasec, Razvan; Nöttling, Alois; Kiefer, Philipp; Seeburger, Jörg; Neumuth, Thomas

    2013-03-01

    For transcatheter-based minimally invasive procedures in structural heart disease, ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and the catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRR) of the 3D object not via volume ray-casting but instead via a fast rendering of triangular meshes. This is possible because, in the setting of TEE/X-ray fusion, the 3D geometry of the ultrasound probe is known in advance and its main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor of up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on the results, a TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag, towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registration, e.g. the registration of implant models with X-ray images.

  11. Use of 3D imaging in CT of the acute trauma patient: impact of a PACS-based software package.

    Soto, Jorge A; Lucey, Brian C; Stuhlfaut, Joshua W; Varghese, Jose C

    2005-04-01

    To evaluate the impact of a picture archiving and communication system (PACS)-based software package on the requests for 3D reconstructions of multidetector CT (MDCT) data sets in the emergency radiology of a level 1 trauma center, we reviewed the number and type of physician requests for 3D reconstructions of MDCT data sets for patients admitted after sustaining multiple trauma during a 12-month period (January 2003-December 2003). During the first 5 months of the study, 3D reconstructions were performed on dedicated workstations located separately from the emergency radiology CT interpretation area. During the last 7 months of the study, reconstructions were performed online by the attending radiologist or resident on duty, using a software package directly incorporated into the PACS workstations. The mean monthly number of 3D reconstructions requested during the two time periods was compared using Student's t test. The monthly mean ± SD of 3D reconstructions performed before and after 3D software incorporation into the PACS was 34 ± 7 (95% CI, 10-58) and 132 ± 31 (95% CI, 111-153), respectively. This difference was statistically significant (p<0.0001). In the multiple trauma patient, implementation of PACS-integrated software increases the utilization of 3D reconstructions of MDCT data sets. PMID:16028324

  12. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method; in conjunction with the matched features, we obtain disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices, which are computed from the fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from the essential matrix is only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between the lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and fine surveillance systems that need not only depth information but also camera motion parameters in real time.
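
    A minimal OpenCV sketch of the pose-from-epipolar-geometry step described above (RANSAC-based fundamental matrix, essential matrix from the known intrinsics, relative pose up to scale) is given below. The tracked point arrays and the intrinsic matrix K are assumed inputs, and the disparity-based scale recovery ("d-motion") is omitted.

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """pts1, pts2: Nx2 float arrays of tracked feature points in two views;
            K: 3x3 camera intrinsic matrix. Returns R and t (t only up to scale)."""
            # Fundamental matrix with RANSAC (1 px reprojection threshold, 99% confidence).
            F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
            E = K.T @ F @ K                      # essential matrix from F and K
            pts1_in = pts1[inliers.ravel() == 1]
            pts2_in = pts2[inliers.ravel() == 1]
            _, R, t, _ = cv2.recoverPose(E, pts1_in, pts2_in, K)
            # The scale of t is not observable here; the paper resolves it from disparity.
            return R, t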

  13. Volumetric label-free imaging and 3D reconstruction of mammalian cochlea based on two-photon excitation fluorescence microscopy

    The visualization of the delicate structure and spatial relationship of intracochlear sensory cells has relied on the laborious procedures of tissue excision, fixation, sectioning and staining for light and electron microscopy. Confocal microscopy is advantageous for its high resolution and deep penetration depth, yet disadvantageous due to the necessity of exogenous labeling. In this study, we present the volumetric imaging of rat cochlea without exogenous dyes, using a near-infrared femtosecond laser as the excitation mechanism and endogenous two-photon excitation fluorescence (TPEF) as the contrast mechanism. We find that TPEF exhibits strong contrast, allowing imaging of the cochlea at cellular and even subcellular resolution, differentiation of cell types, and visualization of delicate structures and the radial nerve fiber. Our results further demonstrate that 3D reconstruction rendered from z-stacks of optical sections reveals fine structures and spatial relationships better and allows morphometric analysis to be performed easily. The TPEF-based optical biopsy technique provides great potential for new and sensitive diagnostic tools for hearing loss or hearing disorders, especially when combined with fiber-based microendoscopy. (paper)

  14. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    Li, Ruijiang; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-01-01

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied to the reference image, lead to new volumetric images. We can then reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. Our algorithm was implemented on graphics processing units...
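
    The PCA step and the coefficient optimization can be sketched schematically as below. The warping and projection operators are application-specific placeholders, and the optimizer choice is an assumption; the actual method runs on GPUs with a full deformable-registration pipeline.

        import numpy as np
        from scipy.optimize import minimize

        def fit_pca(dvfs, n_modes=3):
            """dvfs: (N-1, M) matrix, each row a flattened deformation vector field.
            Returns the mean DVF and the leading principal modes."""
            mean = dvfs.mean(axis=0)
            _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
            return mean, vt[:n_modes]

        def reconstruct_from_projection(measured_proj, mean, modes, warp, project):
            """warp(dvf) -> volumetric image; project(volume) -> 2D projection.
            Both are placeholders for the application-specific operators."""
            def cost(coeffs):
                dvf = mean + coeffs @ modes
                return np.sum((project(warp(dvf)) - measured_proj) ** 2)
            res = minimize(cost, np.zeros(len(modes)), method="Powell")
            return mean + res.x @ modes          # optimized DVF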

  15. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    Cambridge : The Electromagnetics Academy, 2010, s. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  16. Feasibility of 3D harmonic contrast imaging

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities, making it suitable for 3D harmonic contrast imaging.

  17. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  18. The role of 3-D imaging and computer-based postprocessing for surgery of the liver and pancreas

    Navigation and virtual-reality planning tools based on cross-sectional imaging are well established in the surgical routine of orthopedic surgery and neurosurgery. In various procedures, they have achieved significant clinical relevance and efficacy and have enhanced the discipline's resection capabilities. In abdominal surgery, however, these tools have gained little traction so far. Even with the advantage of fast and high-resolution cross-sectional liver and pancreas imaging, it remains unclear whether 3D planning and interactive planning tools might increase the precision and safety of liver and pancreas surgery. The inability to simply transfer the methodology from orthopedic or neurosurgery is mainly a result of intraoperative organ movement and shifting, and the corresponding technical difficulties in the online application of presurgical cross-sectional imaging data. For the interactive planning of liver surgery, three systems are partly in daily routine use: HepaVision2 (MeVis GmbH, Bremen), LiverLive (Navidez Ltd., Slovenia) and OrgaNicer (German Cancer Research Center, Heidelberg). All these systems implement a semi- or fully automatic liver-segmentation procedure to visualize liver segments, vessel trees, resected volumes or critical residual organ volumes, either for preoperative planning or intraoperative visualization. Acquisition of data is mainly based on computed tomography. Three-dimensional navigation for intraoperative surgical guidance with ultrasound is under clinical testing. There are only a few reports about transferring this visualization to the pancreas, probably because of difficulties with the segmentation routine due to inflammation or tumor growth extending beyond the organ. With this paper, we would like to evaluate and demonstrate the present status of software planning tools and pathways for future pre- and intraoperative resection planning in liver and pancreas surgery. (orig.)

  19. Distributed microscopy: toward a 3D computer-graphic-based multiuser microscopic manipulation, imaging, and measurement system

    Sulzmann, Armin; Carlier, Jerome; Jacot, Jacques

    1996-10-01

    The aim of this project is to telecontrol the movements of a microscope in 3D space in order to manipulate and measure microsystems or micro parts, aided by multi-user virtual reality (VR) environments. Microsystems are presently gaining interest. They are small, independent modules incorporating various functions, such as electronic, micromechanical, data processing, optical, chemical, medical and biological functions. Even as manufacturing technologies improve, measuring the small structures to ensure the quality of the process remains key information for development. So far, powerful microscopes are needed to measure the micro structures, and the use of highly magnifying computerized microscopes is expensive. To ensure high-quality measurements and distribute the acquired information to multiple users, our proposed system is divided into three parts. The first is the virtual reality microscopic environment (VRME)-based user interface on an SGI workstation, used to prepare the manipulations and measurements. The second is the computerized light microscope with a vision system that inspects the scene and acquires images of the specimen; newly developed vision algorithms analyze micro structures in the scene corresponding to a known a priori model, and the extracted position and shape of the objects are transmitted as feedback to the user of the VRME system to update the virtual environment. The third part is the internet daemon, which distributes the information about the position of the micro structures, their shape and the images to the connected users, who may themselves interact with the microscope (turning and displacing the specimen on a moving platform, or adding their own structures to the scene for comparison). The key idea behind our project VRME is to use the intuitiveness and 3D visualization of VR environments, coupled with a vision system, to perform measurements of micro structures at high accuracy. The direct

  20. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc, offering a complete view of the transport paths in the membrane.
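
    For a binarized reconstruction, the layer-wise porosity evaluation reduces to counting pore voxels per slice. A small numpy sketch is given below; the binary volume (True marking pore voxels) is an assumed input, not data from this record.

        import numpy as np

        def porosity_profile(pores, axis=0):
            """Fraction of pore voxels in each slice orthogonal to `axis`
            of a binary 3D volume (True = pore)."""
            other_axes = tuple(a for a in range(pores.ndim) if a != axis)
            return pores.mean(axis=other_axes)

        # Example on a synthetic 100x100x100 volume with ~30% porosity.
        vol = np.random.rand(100, 100, 100) < 0.3
        print(porosity_profile(vol)[:5])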

  1. Neural Network Based 3D Surface Reconstruction

    Vincy Joseph

    2009-11-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  2. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of ... ... treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems; this subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed.

  3. Backhoe 3D "gold standard" image

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  4. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    Reischig, Péter; Helfen, Lukas; Wallert, Arie; Baumbach, Tilo; Dik, Joris

    2013-06-01

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art.

  5. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano-focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, the heat-affected zone, and dendrites in a laser-assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during an experiment is crucial (for instance, time-resolved non-reversible experiments).

  6. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    Reischig, Peter [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); Delft University of Technology, Department of Materials Science and Engineering, Delft (Netherlands); Helfen, Lukas [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); European Synchrotron Radiation Facility, BP 220, Grenoble Cedex (France); Wallert, Arie [Rijksmuseum, Postbus 74888, Amsterdam (Netherlands); Baumbach, Tilo [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); Dik, Joris [Delft University of Technology, Department of Materials Science and Engineering, Delft (Netherlands)

    2013-06-15

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art. (orig.)

  7. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art. (orig.)

  8. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    Ando, Masami; Bando, Hiroko; Chen, Zhihua; Chikaura, Yoshinori; Choi, Chang-Hyuk; Endo, Tokiko; Esumi, Hiroyasu; Gang, Li; Hashimoto, Eiko; Hirano, Keiichi; Hyodo, Kazuyuki; Ichihara, Shu; Jheon, SangHoon; Kim, HongTae; Kim, JongKi; Kimura, Tatsuro; Lee, ChangHyun; Maksimenko, Anton; Ohbayashi, Chiho; Park, SungHwan; Shimao, Daisuke; Sugiyama, Hiroshi; Tang, Jintian; Ueno, Ei; Yamasaki, Katsuhito; Yuasa, Tetsuya

    2007-01-01

    The first observation of micro papillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm × 22 mm × 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that passes only the refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based algorithm for x-ray CT was created. The angular information was acquired with the x-ray optics of diffraction-enhanced imaging (DEI). For each reconstruction, 900 data sets were acquired. A reconstructed CT image may include the ductus lactiferi, micro calcifications and the breast gland. This modality has the potential to open up new clinical and pathological diagnoses using x-rays, offering more precise inspection and detection of early signs of breast cancer.

  9. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    The first observation of micro papillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm × 22 mm × 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that passes only the refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based algorithm for x-ray CT was created. The angular information was acquired with the x-ray optics of diffraction-enhanced imaging (DEI). For each reconstruction, 900 data sets were acquired. A reconstructed CT image may include the ductus lactiferi, micro calcifications and the breast gland. This modality has the potential to open up new clinical and pathological diagnoses using x-rays, offering more precise inspection and detection of early signs of breast cancer.

  10. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-aided design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data were acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  11. A 3D image analysis tool for SPECT imaging

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
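
    The intensity-thresholding and volume-measurement idea can be sketched as follows; the threshold fraction, the connected-component step, and the voxel size are illustrative assumptions rather than parameters of the described tool.

        import numpy as np
        from scipy import ndimage

        def gastric_volume(spect, voxel_volume_ml, threshold_fraction=0.4):
            """Segment a 3D SPECT volume by a fractional intensity threshold,
            keep the largest connected component, and return its volume in mL."""
            mask = spect >= threshold_fraction * spect.max()
            labels, n = ndimage.label(mask)
            if n == 0:
                return 0.0
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            largest = labels == (np.argmax(sizes) + 1)
            return largest.sum() * voxel_volume_ml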

  12. 3D-LSI technology for image sensor

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated, advancing from the research or limited-production level to investigations that might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through-silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking, and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip-size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using current LSI technologies, CSPs for 1.3 M-, 2 M-, and 5 M-pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can potentially be employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  13. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Peter Cloetens

    2011-05-01

    Three-dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic-beam micro-tomography provides several new features. Along with an enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample-environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  14. A generic synthetic image generator package for the evaluation of 3D Digital Image Correlation and other computer vision-based measurement techniques

    Garcia, Dorian; Orteu, Jean-José; Robert, Laurent; Wattrisse, Bertrand; Bugarin, Florian

    2013-01-01

    Stereo digital image correlation (also called 3D DIC) is a common measurement technique in experimental mechanics for measuring 3D shapes or 3D displacement/strain fields, in research laboratories as well as in industry. Nevertheless, like most of the optical full-field measurement techniques, 3D DIC suffers from a lack of information about its metrological performances. For the 3D DIC technique to be fully accepted as a standard measurement technique it is of key importance to assess its mea...

  15. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging

    Tseng, Hubert; Gage, Jacob A.; Shen, Tsaiwei; Haisler, William L.; Neeley, Shane K.; Shiao, Sue; Chen, Jianbo; Desai, Pujan K.; Liao, Angela; Hebel, Chris; Raphael, Robert M.; Becker, Jeanne L.; Souza, Glauco R.

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5′-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (−control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z’ = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200
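
    The screening-quality statistic quoted above (Z' = 0.88) is the standard Z'-factor. For reference, it can be computed from positive- and negative-control measurements as in the sketch below; the control values are hypothetical, not data from this study.

        import numpy as np

        def z_prime(positive, negative):
            """Z'-factor = 1 - 3*(sigma_p + sigma_n) / |mu_p - mu_n|."""
            positive = np.asarray(positive, float)
            negative = np.asarray(negative, float)
            separation = abs(positive.mean() - negative.mean())
            return 1.0 - 3.0 * (positive.std(ddof=1) + negative.std(ddof=1)) / separation

        # Hypothetical spheroid diameters (um) for SDS (+) and penicillin-G (-) controls.
        print(z_prime([510, 520, 505, 515], [310, 305, 300, 320]))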

  16. Low cost image-based modeling techniques for archaeological heritage digitalization: more than just a good tool for 3d visualization?

    Mariateresa Galizia; Cettina Santagati

    2013-01-01

    This study shows the first results of research aimed at assessing the potential of a series of low-cost, free and open-source tools (such as ARC3D, 123D Catch, Hypr3D). These tools are founded on Structure from Motion (SfM) techniques and are able to automatically produce image-based models starting from simple sequences of pictures. Initially born as simple tools for touristic 3D visualization (e.g. Photosynth) of archaeological and/or architectural sites or cultural assets (e.g...

  17. Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel.

    Dong, Xiao-Bin; Ai, Ling-Yu; Kim, Eun-Soo

    2016-02-22

    We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data based on direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for that scene are generated by interfering each color DOI with a reference beam, and summing them all based on Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed, and converted into EIs using a ray mapping-based direct pickup process. These EIs are then optically reconstructed to be a full-color 3-D scene with perspectives on the depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system in the practical application fields of large-scale holographic 3-D displays. PMID:26907021
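
    The hologram-generation step (Fresnel propagation of each depth-sliced object image followed by interference with a reference beam) can be sketched per color channel as below. The wavelength, pixel pitch, slice depths, and the on-axis plane reference wave are illustrative assumptions, not the parameters used in the paper.

        import numpy as np

        def fresnel_propagate(field, wavelength, pitch, z):
            """Propagate a complex field by distance z using the Fresnel
            transfer function (FFT-based convolution)."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def hologram_from_dois(dois, depths, wavelength=633e-9, pitch=8e-6):
            """Sum the propagated depth-sliced object images (DOIs) at the hologram
            plane and interfere the result with a unit-amplitude plane reference."""
            obj = sum(fresnel_propagate(d.astype(complex), wavelength, pitch, z)
                      for d, z in zip(dois, depths))
            ref = np.ones_like(obj)
            return np.abs(obj + ref) ** 2        # recorded intensity pattern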

  18. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Stankiewicz Agnieszka

    2016-06-01

    This paper presents signal processing aspects of the automatic segmentation of retinal layers of the human eye. The paper draws attention to the problems that occur during the computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality, and some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using the so-called B-scans obtained with the OCT Copernicus HR device.

  19. View-based 3-D object retrieval

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  20. BM3D Frames and Variational Image Deblurring

    Danielyan, Aram; Egiazarian, Karen

    2011-01-01

    A family of Block Matching 3-D (BM3D) algorithms for various imaging problems has recently been proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling, and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of a single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming the valuable potential of BM3D frames as an advanced image modeling tool.

  1. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that allows the management of contours with complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model, allowing the automatic correction of the occlusal setting of the mandible and maxilla, and a teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are made available via a client/server application with the aim of allowing telediagnosis and treatment.

  2. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.

  3. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultrasound imaging result in expensive systems, which limits the more widespread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost ... capable of producing 62+62-element row-column addressed CMUT arrays with negligible charging issues. The arrays include an integrated apodization, which reduces the ghost echoes produced by the edge waves in such arrays by 15.8 dB. The acoustical cross-talk is measured on fabricated arrays, showing a 24 d...

  4. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr [Department of Electrical Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Lee, Jae Young [Department of Radiology, Seoul National University Hospital, Seoul 110-744 (Korea, Republic of)

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  5. Image-Based and Range-Based 3d Modelling of Archaeological Cultural Heritage: the Telamon of the Temple of Olympian ZEUS in Agrigento (italy)

    Lo Brutto, M.; Spera, M. G.

    2011-09-01

    The Temple of Olympian Zeus in Agrigento (Italy) was one of the largest temples and, at the same time, one of the most original in all of Greek architecture. We do not know exactly how it looked, because the temple is now almost completely destroyed, but it is very well known for the presence of the Telamons. The Telamons were giant statues (about 8 meters high) probably located outside the temple to fill the intervals between the columns. According to the theory most accredited by archaeologists, the Telamons were a decorative element and also a support for the structure. However, this hypothesis has never been scientifically proven. One Telamon has been reassembled and is shown at the Archaeological Museum of Agrigento. In 2009 a group of researchers at the University of Palermo began a study to test the hypothesis that the Telamons supported the weight of the upper part of the temple. The study consists of a 3D survey of the Telamon, to reconstruct a detailed 3D digital model, and of a structural analysis with the Finite Element Method (FEM) to test the possibility that the Telamon could support the weight of the upper portion of the temple. In this work the authors describe the 3D survey of the Telamon carried out with Range-Based Modelling (RBM) and Image-Based Modelling (IBM). The RBM was performed with a TOF laser scanner, while the IBM was performed with the ZScan system of Menci Software and Image Master of Topcon. Several tests were conducted to analyze the accuracy of the different 3D models and to evaluate the differences between the laser scanning and photogrammetric data. Moreover, an appropriate data reduction to generate a 3D model suitable for FEM analysis was tested.

  6. An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components

    Divya Udayan J; HyungSeok KIM; Jee-In KIM

    2015-01-01

    The objective of this research is the rapid reconstruction of ancient buildings of historical importance using a single image. The key idea of our approach is to reduce the infinite solutions that might otherwise arise when recovering a 3D geometry from 2D photographs. The main outcome of our research shows that the proposed methodology can be used to reconstruct ancient monuments for use as proxies for digital effects in applications such as tourism, games, and entertainment, which do not require very accurate modeling. In this article, we consider the reconstruction of ancient Mughal architecture including the Taj Mahal. We propose a modeling pipeline that makes an easy reconstruction possible using a single photograph taken from a single view, without the need to create complex point clouds from multiple images or the use of laser scanners. First, an initial model is automatically reconstructed using locally fitted planar primitives along with their boundary polygons and the adjacency relation among parts of the polygons. This approach is faster and more accurate than creating a model from scratch because the initial reconstruction phase provides a set of structural information together with the adjacency relation, which makes it possible to estimate the approximate depth of the entire structural monument. Next, we use manual extrapolation and editing techniques with modeling software to assemble and adjust different 3D components of the model. Thus, this research opens up the opportunity for the present generation to experience remote sites of architectural and cultural importance through virtual worlds and real-time mobile applications. Variations of a recreated 3D monument to represent an amalgam of various cultures are targeted for future work.

  7. Low cost image-based modeling techniques for archaeological heritage digitalization: more than just a good tool for 3d visualization?

    Mariateresa Galizia

    2013-11-01

    This study shows the first results of research aimed at assessing the potential of a series of low-cost, free and open-source tools (such as ARC3D, 123D Catch, Hypr3D). These tools are founded on Structure from Motion (SfM) techniques and are able to automatically produce image-based models starting from simple sequences of pictures. Initially born as simple tools for touristic 3D visualization (e.g. Photosynth) of archaeological and/or architectural sites or cultural assets (e.g. statues, fountains and so on), they nowadays allow impressive photorealistic 3D models to be reconstructed in a short time and at very low cost. Therefore, we have chosen different case studies with various levels of complexity (from statues to architectures) in order to start a first test of the modeling potential of these tools.

  8. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.

  9. Accuracy and inter-observer variability of 3D versus 4D cone-beam CT based image-guidance in SBRT for lung tumors

    Sweeney Reinhart A

    2012-06-01

    Background To analyze the accuracy and inter-observer variability of image-guidance (IG) using 3D or 4D cone-beam CT (CBCT) technology in stereotactic body radiotherapy (SBRT) for lung tumors. Materials and methods Twenty-one consecutive patients treated with image-guided SBRT for primary and secondary lung tumors were the basis for this study. A respiration-correlated 4D-CT and planning contours served as the reference for all IG techniques. Three IG techniques were performed independently by three radiation oncologists (ROs) and three radiotherapy technicians (RTTs). Image-guidance using respiration-correlated 4D-CBCT (IG-4D) with automatic registration of the planning 4D-CT and the verification 4D-CBCT was considered the gold standard. Results were compared with two IG techniques using 3D-CBCT: 1) manual registration of the planning internal target volume (ITV) contour and the motion-blurred tumor in the 3D-CBCT (IG-ITV); 2) automatic registration of the planning reference CT image and the verification 3D-CBCT (IG-3D). Image quality of 3D-CBCT and 4D-CBCT images was scored on a scale of 1–3, with 1 being the best and 3 being the worst quality for visual verification of the IGRT results. Results Image quality was scored significantly worse for 3D-CBCT compared to 4D-CBCT: the worst score of 3 was given in 19% and 7.1% of observations, respectively. Significant differences in target localization were observed between 4D-CBCT and 3D-CBCT based IG: compared to the reference of IG-4D, tumor positions differed by 1.9 mm ± 0.9 mm (3D vector) on average using IG-ITV and by 3.6 mm ± 3.2 mm using IG-3D; results of IG-ITV were significantly closer to the reference IG-4D compared to IG-3D. Differences between the 4D-CBCT and 3D-CBCT techniques increased significantly with larger motion amplitude of the tumor; analogously, differences increased with worse 3D-CBCT image quality scores. Inter-observer variability was largest in the SI direction and was

  10. 3D Wavelet-Based Filter and Method

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
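
    The patented wavelet filter itself is not detailed in this record. As a loose, assumption-laden stand-in, a size-tuned band-pass response can be obtained with a difference of Gaussians whose scales bracket the characteristic size, as sketched below for 2D or 3D arrays.

        import numpy as np
        from scipy import ndimage

        def size_tuned_filter(volume, feature_size_vox):
            """Difference-of-Gaussians band-pass tuned to a characteristic
            linear size given in voxels (works on 2D or 3D arrays)."""
            sigma = feature_size_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
            fine = ndimage.gaussian_filter(volume.astype(float), sigma)
            coarse = ndimage.gaussian_filter(volume.astype(float), 1.6 * sigma)
            return np.clip(fine - coarse, 0, None)   # keep positive responses only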

  11. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
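
    A compact sketch of rigid GMM point-set registration (isotropic Gaussians centred on the fixed model points, negative log-likelihood minimized over a rotation vector and translation) is given below. The bifurcation-point weighting and the orientation term of the extended method are omitted, and the optimizer and kernel width are assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.distance import cdist
        from scipy.spatial.transform import Rotation

        def register_gmm(moving, fixed, sigma=2.0):
            """Rigidly align `moving` (Nx3) to `fixed` (Mx3) by minimizing the
            negative log-likelihood of a GMM centred on the fixed points."""
            def nll(params):
                R = Rotation.from_rotvec(params[:3]).as_matrix()
                t = params[3:]
                d2 = cdist(moving @ R.T + t, fixed, "sqeuclidean")
                logp = -d2 / (2.0 * sigma ** 2)
                # log-sum-exp over the mixture components for each moving point
                m = logp.max(axis=1, keepdims=True)
                return -np.sum(m.ravel() + np.log(np.exp(logp - m).sum(axis=1)))
            res = minimize(nll, np.zeros(6), method="Powell")
            return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]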

  12. Recovering 3D human pose from monocular images

    Agarwal, Ankur; Triggs, Bill

    2006-01-01

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We eva...

  13. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    Harder, Stine; Paulsen, Rasmus Reinhold

    2015-01-01

    The goal of this thesis is to improve intelligibility for hearing-aid users by individualizing the directional microphone in a hearing aid. The general idea is a three-step pipeline for easy acquisition of individually optimized directional filters. The first step is to estimate an individual 3D head model based on 2D images; the second step is to simulate individual head-related transfer functions (HRTFs) based on the estimated 3D head model; and the final step is to calculate optimal directi...

  14. Hybrid segmentation framework for 3D medical image analysis

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved with relatively low time cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  15. Morphological image processing operators. Reduction of partial volume effects to improve 3D visualization based on CT data

    Aim: The quality of segmentation and three-dimensional reconstruction of anatomical structures in tomographic slices is often impaired by disturbances due to partial volume effects (PVE). The potential for artefact reduction by use of the morphological image processing operators (MO) erosion and dilation is investigated. Results: For all patients under review, the artefacts caused by PVE were significantly reduced by erosion (lung: Mean SBRpre=1.67, SBRpost=4.83; brain: SBRpre=1.06, SBRpost=1.29) even with only a small number of iterations. Region dilation was applied to integrate further structures (e.g. at tumor borders) into a configurable neighbourhood for segmentation and quantitative analysis. Conclusions: The MO represent an efficient approach for the reduction of PVE artefacts in 3D-CT reconstructions and allow optimised visualization of individual objects. (orig./AJ)

  16. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To detect over- and underbreak of roadways and to address the difficulty of roadway data collection, this paper presents a new method for continuous section extraction and over-/underbreak detection based on 3D laser scanning technology and image processing. The method is divided into three steps: Canny edge detection, local axis fitting, and continuous section extraction with over-/underbreak detection. First, after Canny edge detection, a least-squares curve fit is used to obtain a local fit of the roadway axis. Then the attitude of the local roadway is adjusted so that its axis is aligned with the extraction reference direction, and cross-sections are extracted along that direction. Finally, each extracted cross-section is compared with the designed cross-section to complete the over-/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computational cost and ensures that cross-sections are intercepted orthogonally.
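
    A minimal sketch of the first two steps, Canny edge detection followed by a least-squares fit of the edge points as a local axis estimate; the synthetic image, thresholds and polynomial degree are assumptions, not values from the paper:

      import numpy as np
      import cv2  # OpenCV

      # Hypothetical grey-level image derived from one stretch of the 3D scan
      img = np.zeros((200, 400), dtype=np.uint8)
      cv2.ellipse(img, (200, 100), (150, 80), 0, 0, 360, 255, -1)

      # Step 1: Canny edge detection
      edges = cv2.Canny(img, 50, 150)

      # Step 2: least-squares polynomial fit through the edge points (local axis)
      ys, xs = np.nonzero(edges)
      coeffs = np.polyfit(xs, ys, deg=2)
      axis_y = np.polyval(coeffs, np.arange(img.shape[1], dtype=float))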

  17. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the pelvic autonomous innervation and can offer a preoperative nerve cartography. (orig.)

  18. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the pelvic autonomous innervation and can offer a preoperative nerve cartography. (orig.)

  19. A frequency-based approach to locate common structure for 2D-3D intensity-based registration of setup images in prostate radiotherapy

    In many radiotherapy clinics, geometric uncertainties in the delivery of 3D conformal radiation therapy and intensity modulated radiation therapy of the prostate are reduced by aligning the patient's bony anatomy in the planning 3D CT to corresponding bony anatomy in 2D portal images acquired before every treatment fraction. In this paper, we seek to determine if there is a frequency band within the portal images and the digitally reconstructed radiographs (DRRs) of the planning CT in which bony anatomy predominates over non-bony anatomy such that portal images and DRRs can be suitably filtered to achieve high registration accuracy in an automated 2D-3D single portal intensity-based registration framework. Two similarity measures, mutual information and the Pearson correlation coefficient were tested on carefully collected gold-standard data consisting of a kilovoltage cone-beam CT (CBCT) and megavoltage portal images in the anterior-posterior (AP) view of an anthropomorphic phantom acquired under clinical conditions at known poses, and on patient data. It was found that filtering the portal images and DRRs during the registration considerably improved registration performance. Without filtering, the registration did not always converge while with filtering it always converged to an accurate solution. For the pose-determination experiments conducted on the anthropomorphic phantom with the correlation coefficient, the mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters were θx:0.18(0.19) deg., θy:0.04(0.04) deg., θz:0.04(0.02) deg., tx:0.14(0.15) mm, ty:0.09(0.05) mm, and tz:0.49(0.40) mm. The mutual information-based registration with filtered images also resulted in similarly small errors. For the patient data, visual inspection of the superimposed registered images showed that they were correctly aligned in all instances. The results presented in this paper suggest that robust and accurate registration
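
    The core idea, filtering both images to a band where bony anatomy dominates and then scoring similarity, can be sketched roughly as below; the difference-of-Gaussians filter, its sigmas and the random test images are assumptions rather than the authors' exact filter:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def bandpass(img, low_sigma=1.0, high_sigma=4.0):
          # Difference-of-Gaussians band-pass emphasising edge-like (bony) detail
          return gaussian_filter(img, low_sigma) - gaussian_filter(img, high_sigma)

      def correlation_similarity(drr, portal):
          # Pearson correlation coefficient between the two filtered images
          a, b = bandpass(drr).ravel(), bandpass(portal).ravel()
          return np.corrcoef(a, b)[0, 1]

      rng = np.random.default_rng(0)
      drr = rng.normal(size=(256, 256))                 # stand-in DRR
      portal = drr + 0.5 * rng.normal(size=(256, 256))  # stand-in portal image
      print(correlation_similarity(drr, portal))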

  20. Performance evaluation of CCD- and mobile-phone-based near-infrared fluorescence imaging systems with molded and 3D-printed phantoms

    Wang, Bohan; Ghassemi, Pejhman; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2016-03-01

    Increasing numbers of devices are emerging which involve biophotonic imaging on a mobile platform. Therefore, effective test methods are needed to ensure that these devices provide a high level of image quality. We have developed novel phantoms for performance assessment of near infrared fluorescence (NIRF) imaging devices. Resin molding and 3D printing techniques were applied for phantom fabrication. Comparisons between two imaging approaches - a CCD-based scientific camera and an NIR-enabled mobile phone - were made based on evaluation of the contrast transfer function and penetration depth. Optical properties of the phantoms were evaluated, including absorption and scattering spectra and fluorescence excitation-emission matrices. The potential viability of contrast-enhanced biological NIRF imaging with a mobile phone is demonstrated, and color-channel-specific variations in image quality are documented. Our results provide evidence of the utility of novel phantom-based test methods for quantifying image quality in emerging NIRF devices.

  1. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  2. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. Octree-based Robust Watermarking for 3D Model

    Su Cai

    2011-02-01

    Three robust blind watermarking methods of 3D models based on Octree are proposed in this paper: OTC-W, OTP-W and Zero-W. Principal Component Analysis and Octree partition are used on 3D meshes. A scrambled binary image for OTC-W and a scrambled RGB image for OTP-W are separately embedded adaptively into the single child nodes at the bottom level of the Octree structure. The watermark can be extracted without the original image and 3D model. These two methods have a high embedding capacity for 3D meshes. Meanwhile, they are robust against geometric transformations (such as translation, rotation and uniform scaling) and vertex reordering attacks. For Zero-W, higher nodes of the Octree are used to construct a 'zero-watermark', which can resist simplification, noise and remeshing attacks. All three methods are suitable for 3D point cloud data and arbitrary 3D meshes.
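
    For orientation only, a toy recursive octree partition of a vertex set (unrelated to the watermark embedding itself; the cube size, depth and point cloud are hypothetical) could look like this:

      import numpy as np

      def octree_partition(points, centre, half_size, depth):
          # Recursively split a point set into the 8 octants of a cube
          if depth == 0 or len(points) <= 1:
              return {"centre": centre, "points": points, "children": []}
          children = []
          for dx in (-0.5, 0.5):
              for dy in (-0.5, 0.5):
                  for dz in (-0.5, 0.5):
                      c = centre + half_size * np.array([dx, dy, dz])
                      inside = np.all(np.abs(points - c) <= half_size / 2, axis=1)
                      if inside.any():
                          children.append(octree_partition(points[inside], c,
                                                           half_size / 2, depth - 1))
          return {"centre": centre, "points": points, "children": children}

      verts = np.random.default_rng(1).uniform(-1, 1, size=(500, 3))
      tree = octree_partition(verts, centre=np.zeros(3), half_size=1.0, depth=3)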

  4. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Wang, Shenghao; Zhang, Kai; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly and fast software package based on LabVIEW that may allow to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process to address misalignment problems among image series due to mechanical manufacturing errors...

  5. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2
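
    A stripped-down channelized Hotelling observer computation in the spirit of the assessment described above is sketched below; the Gaussian channel set, region size and simulated images are placeholders, not the study's actual channels or data:

      import numpy as np

      def gaussian_channels(size, widths):
          # Radially symmetric Gaussian channels (a simplified channel set)
          y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
          U = np.stack([np.exp(-(x**2 + y**2) / (2 * w**2)).ravel() for w in widths], axis=1)
          return U / np.linalg.norm(U, axis=0)

      def cho_detectability(signal_imgs, noise_imgs, U):
          # Hotelling detectability index d' computed in channel space
          vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
          vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
          dv = vs.mean(0) - vn.mean(0)
          S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
          return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

      rng = np.random.default_rng(0)
      size, n = 32, 200
      y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
      disc = (x**2 + y**2 < 6**2) * 2.0                    # faint low-contrast signal
      noise = rng.normal(0, 10, size=(2 * n, size, size))  # stand-in background ROIs
      U = gaussian_channels(size, [1, 2, 4, 8])
      print(cho_detectability(noise[:n] + disc, noise[n:], U))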

  6. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models were done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in the experiment are kept, the process of developing the stock-taking documentation for a historical building moves from the standards of analogue to digital technology with considerably reduced cost.

  7. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Shuo-Tsung Chen

    2015-01-01

    Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with the ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. The results indicate that the proposed method performs better in terms of the analysed efficiency. Conclusion. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  8. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis for each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations were performed on a head-and-neck patient case. FDK reconstruction with the full dataset of 364 projections was used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, were also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It was also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the robustness of the algorithm. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential
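
    The per-patch sparse coding step can be illustrated with a small orthogonal matching pursuit example; the random 3x3x3 dictionary, the patch and the sparsity level here are hypothetical, and scikit-learn's OMP stands in for the Cholesky-based GPU implementation:

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      D = rng.normal(size=(27, 256))               # 256 atoms, each a flattened 3x3x3 volume
      D /= np.linalg.norm(D, axis=0)

      patch = rng.normal(size=(3, 3, 3)).ravel()   # patch from the current reconstruction

      # Find a sparse representation of the patch on the dictionary; the sparse
      # approximation D @ coef_ is what regularises the iterative reconstruction
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
      omp.fit(D, patch)
      sparse_patch = D @ omp.coef_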

  9. Review: Polymeric-Based 3D Printing for Tissue Engineering

    Wu, Geng-Hsi; Hsu, Shan-hui

    2015-01-01

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue ...

  10. Automated curved planar reformation of 3D spine images

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
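
    A crude curved planar reformation can be obtained by resampling the volume along a polynomial curve, roughly as below; the random volume, the particular polynomial curve model and the sampling width are assumptions for illustration only:

      import numpy as np
      from scipy.ndimage import map_coordinates

      vol = np.random.default_rng(0).normal(size=(120, 256, 256))   # stand-in CT volume (z, y, x)
      z = np.arange(vol.shape[0], dtype=float)
      x_curve = 128 + 0.002 * (z - 60) ** 2     # assumed polynomial model of the spine curve
      y_curve = 128 + 0.3 * (z - 60)

      # For each z, sample a short line of voxels centred on the curve along x
      offsets = np.arange(-40, 41, dtype=float)
      zz, oo = np.meshgrid(z, offsets, indexing="ij")
      coords = np.stack([zz,
                         np.broadcast_to(y_curve[:, None], zz.shape),
                         x_curve[:, None] + oo])
      cpr = map_coordinates(vol, coords, order=1)   # 2D reformatted image, shape (len(z), 81)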

  11. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. Imaging- and Flow Cytometry-based Analysis of Cell Position and the Cell Cycle in 3D Melanoma Spheroids.

    Beaumont, Kimberley A; Anfosso, Andrea; Ahmed, Farzana; Weninger, Wolfgang; Haass, Nikolas K

    2015-01-01

    Three-dimensional (3D) tumor spheroids are utilized in cancer research as a more accurate model of the in vivo tumor microenvironment, compared to traditional two-dimensional (2D) cell culture. The spheroid model is able to mimic the effects of cell-cell interaction, hypoxia and nutrient deprivation, and drug penetration. One characteristic of this model is the development of a necrotic core, surrounded by a ring of G1 arrested cells, with proliferating cells on the outer layers of the spheroid. Of interest in the cancer field is how different regions of the spheroid respond to drug therapies as well as genetic or environmental manipulation. We describe here the use of the fluorescence ubiquitination cell cycle indicator (FUCCI) system along with cytometry and image analysis using commercial software to characterize the cell cycle status of cells with respect to their position inside melanoma spheroids. These methods may be used to track changes in cell cycle status, gene/protein expression or cell viability in different sub-regions of tumor spheroids over time and under different conditions. PMID:26779761

  13. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  14. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular and other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the reconstructed 3D positions using the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
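
    The two-view localization itself comes down to triangulating a point from two projection matrices; a linear (DLT) sketch under assumed, idealized camera geometry (the intrinsics, poses and test point below are made up) is:

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          # Linear (DLT) triangulation of one 3D point from two views
          A = np.vstack([x1[0] * P1[2] - P1[0],
                         x1[1] * P1[2] - P1[1],
                         x2[0] * P2[2] - P2[0],
                         x2[1] * P2[2] - P2[1]])
          X = np.linalg.svd(A)[2][-1]
          return X[:3] / X[3]

      def project(P, X):
          x = P @ np.append(X, 1.0)
          return x[:2] / x[2]

      K = np.array([[1200.0, 0, 512], [0, 1200.0, 512], [0, 0, 1]])  # assumed intrinsics
      P1 = K @ np.hstack([np.eye(3), [[0], [0], [1000.0]]])
      R = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])             # views 90 degrees apart
      P2 = K @ np.hstack([R, [[0], [0], [1000.0]]])

      X_true = np.array([10.0, -5.0, 20.0])
      print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~[10, -5, 20]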

  15. Perception of detail in 3D images

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  16. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction.

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-03-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density is low in each frame, emitters can be located with nanometer resolution. However, when the emitter density rises, causing significant overlap, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they significantly overlap in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image from 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
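
    The final 3D weighted-centroid step can be written compactly; the candidate grid positions and recovered weights below are invented purely to show the computation:

      import numpy as np

      def weighted_centroid_3d(grid_coords, weights):
          # Intensity-weighted centroid of a cluster of candidate emitter voxels
          w = np.clip(weights, 0, None)
          return (grid_coords * w[:, None]).sum(axis=0) / w.sum()

      # Hypothetical output of the sparse-recovery/debiasing step (positions in nm)
      coords = np.array([[100., 57., 210.], [102., 58., 210.],
                         [101., 58., 209.], [101., 58., 211.]])
      weights = np.array([0.8, 1.0, 0.6, 0.5])
      print(weighted_centroid_3d(coords, weights))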

  17. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in conditions that are good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time consumed in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been decided to apply it to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  18. A neural network-based 2D/3D image registration quality evaluator for pediatric patient setup in external beam radiotherapy.

    Wu, Jian; Su, Zhong; Li, Zuofeng

    2016-01-01

    Our purpose was to develop a neural network-based registration quality evaluator (RQE) that can improve the 2D/3D image registration robustness for pediatric patient setup in external beam radiotherapy. Orthogonal daily setup X-ray images of six pediatric patients with brain tumors receiving proton therapy treatments were retrospectively registered with their treatment planning computed tomography (CT) images. A neural network-based pattern classifier was used to determine whether a registration solution was successful based on geometric features of the similarity measure values near the point of solution. Supervised training and test datasets were generated by rigidly registering a pair of orthogonal daily setup X-ray images to the treatment planning CT. The best solution for each registration task was selected from 50 optimization attempts that differed only by the randomly generated initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error tolerance to determine whether that solution was acceptable. Supervised training was then used to train the RQE. Performance of the RQE was evaluated using a test dataset consisting of registration results that were not used in training. The RQE was integrated with our in-house 2D/3D registration system and its performance was evaluated using the same patient dataset. With an optimized sampling step size (i.e., 5 mm) in the feature space, the RQE has sensitivity and specificity in the ranges of 0.865-0.964 and 0.797-0.990, respectively, when used to detect registration errors with mean voxel displacement (MVD) greater than 1 mm. The trial-to-acceptance ratio of the integrated 2D/3D registration system, for all patients, is 1.48. The final acceptance ratio is 92.4%. The proposed RQE can potentially be used in a 2D/3D rigid image registration system to improve the overall robustness by rejecting

  19. An automated 3D reconstruction method of UAV images

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large scale mapping, especially for rapid response and precise modelling in disaster emergencies.

  20. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it indicates how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  1. A new image reconstruction method for 3-D PET based upon pairs of near-missing lines of response

    We formerly introduced a new image reconstruction method for three-dimensional positron emission tomography, which is based upon pairs of near-missing lines of response. This method uses an elementary geometric property of lines of response, namely that two lines of response which originate from radioactive isotopes located within a sufficiently small voxel, will lie within a few millimeters of each other. The effectiveness of this method was verified by performing a simulation using GATE software and a digital Hoffman phantom
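
    The geometric test at the heart of the method, whether two lines of response pass within a few millimetres of each other, reduces to the minimum distance between two 3D lines; a small sketch with made-up LOR points and directions:

      import numpy as np

      def lor_distance(p1, d1, p2, d2):
          # Minimum distance between two lines given a point and direction for each
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          n = np.cross(d1, d2)
          if np.linalg.norm(n) < 1e-12:                       # parallel lines
              return np.linalg.norm(np.cross(p2 - p1, d1))
          return abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)

      p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.2, 0.0])
      p2, d2 = np.array([1.0, 0.0, 2.5]), np.array([0.0, 1.0, 0.1])
      print(lor_distance(p1, d1, p2, d2))   # in the same (e.g. mm) units as the inputs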

  2. 3D wavefront image formation for NIITEK GPR

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  3. Practical pseudo-3D registration for large tomographic images

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
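
    One of the per-view 2D registrations (SSD similarity, Powell search over two shifts and one rotation) can be sketched as follows; the smoothed random slices and the known displacement are synthetic test data, not the microCT examples from the paper:

      import numpy as np
      from scipy.ndimage import gaussian_filter, shift as nd_shift, rotate as nd_rotate
      from scipy.optimize import minimize

      def ssd(params, fixed, moving):
          # Sum of squared differences after applying (tx, ty, angle) to the moving slice
          tx, ty, angle = params
          warped = nd_shift(nd_rotate(moving, angle, reshape=False, order=1), (ty, tx), order=1)
          return float(np.sum((fixed - warped) ** 2))

      rng = np.random.default_rng(0)
      fixed = gaussian_filter(rng.normal(size=(128, 128)), sigma=5)
      moving = nd_shift(fixed, (-3.0, 4.0), order=1)          # known displacement

      res = minimize(ssd, x0=[0.0, 0.0, 0.0], args=(fixed, moving), method="Powell")
      print(res.x)   # approximately tx=-4, ty=3, angle=0 for this synthetic pair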

  4. Thermomechanical behaviour of two heterogeneous tungsten materials via 2D and 3D image-based FEM

    An advanced numerical procedure based on imaging of the material microstructure (Image-Based Finite Element Method, or Image-Based FEM) was extended and applied to model the thermomechanical behaviour of novel materials for fusion applications. Two tungsten based heterogeneous materials with different random morphologies have been chosen as challenging case studies: (1) a two-phase mixed ductile-brittle W/CuCr1Zr composite and (2) vacuum plasma-sprayed tungsten (VPS-W 75 vol.%), a porous coating system with complex dual-scale microstructure. Both materials are designed for the future fusion reactor DEMO: W/CuCr1Zr as main constituent of a layered functionally graded joint between plasma-facing armor and heat sink whereas VPS-W for covering the first wall of the reactor vessel in direct contact with the plasma. The primary focus of this work was to investigate the mesoscopic material behaviour and the linkage to the macroscopic response in modeling failure and heat-transfer. Particular care was taken in validating and integrating simulation findings with experimental inputs. The solution of the local thermomechanical behaviour directly on the real material microstructure enabled meaningful insights into the complex failure mechanism of both materials. For W/CuCr1Zr full macroscopic stress-strain curves including the softening and failure part could be simulated and compared with experimental ones at different temperatures, finding an overall good agreement. The comparison of simulated and experimental macroscopic behaviour of plastic deformation and rupture also showed the possibility to indirectly estimate micro- and mesoscale material parameters. Both heat conduction and elastic behaviour of VPS-W have been extensively investigated. New capabilities of the Image-Based FEM could be shown: decomposition of the heat transfer reduction as due to the individual morphological phases and back-fitting of the reduced stiffness at interlamellar boundaries. The

  5. Gothic Churches in Paris St Gervais et St Protais: Image Matching 3D Reconstruction to Understand the Vaults System Geometry

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow:
    - theoretical study about geometrical configuration of rib vault systems;
    - 3D model based on theoretical hypothesis about geometric definition of the vaults' form;
    - 3D model based on image matching 3D reconstruction methods;
    - comparison between 3D theoretical model and 3D model based on image matching;

  6. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D

  7. A Legendre orthogonal moment based 3D edge operator

    ZHANG Hui; SHU Huazhong; LUO Limin; J. L. Dillenseger

    2005-01-01

    This paper presents a new 3D edge operator based on Legendre orthogonal moments. This operator can be used to extract the edges of a 3D object with any window size, with more accurate surface orientation and more precise surface location. It also has a full geometric meaning. The process of calculation is considered in the moment-based method. We can greatly speed up the computation by calculating the masks in advance. We integrate this operator into our rendering of medical image data based on the ray casting algorithm. Experimental results show that it is an effective 3D edge operator that is more accurate in position and orientation.
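
    The idea of precomputing moment masks and applying them by convolution can be shown in a few lines; the window size, the particular moment orders and the synthetic edge volume are illustrative assumptions:

      import numpy as np
      from scipy.special import eval_legendre
      from scipy.ndimage import convolve

      def legendre_mask_3d(p, q, r, size):
          # Separable mask for the 3D Legendre moment of order (p, q, r)
          x = np.linspace(-1.0, 1.0, size)
          return np.einsum("i,j,k->ijk",
                           eval_legendre(p, x), eval_legendre(q, x), eval_legendre(r, x))

      vol = np.zeros((40, 40, 40))
      vol[:, :, 20:] = 1.0                                    # planar edge along z
      m000 = convolve(vol, legendre_mask_3d(0, 0, 0, size=5), mode="nearest")
      m001 = convolve(vol, legendre_mask_3d(0, 0, 1, size=5), mode="nearest")
      # Ratios of first- to zeroth-order moments carry the edge location/orientation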

  8. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  9. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  10. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the I-F frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
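
    In a chirp (FMCW-style) setup, each pixel's intermediate frequency maps to range through the chirp slope, R = c·f_IF·T / (2·B); the bandwidth, chirp duration and beat frequency below are illustrative numbers, not the parameters of the described system:

      C = 3.0e8             # speed of light, m/s
      BANDWIDTH = 6.0e9     # chirp bandwidth B in Hz (assumed)
      DURATION = 1.0e-3     # chirp duration T in s (assumed)

      def range_from_if(f_if_hz):
          # R = c * f_IF * T / (2 * B)
          return C * f_if_hz * DURATION / (2.0 * BANDWIDTH)

      print(range_from_if(400e3))   # a 400 kHz beat corresponds to 10 m with these values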

  11. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-14C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in any

  12. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Lee, Jae Sung; Park, Kwang Suk [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea); Seoul National University College of Medicine, Department of Biomedical Engineering, Seoul (Korea); Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul [Seoul National University College of Medicine, Department of Otolaryngology, Head and Neck Surgery, Seoul (Korea); Lee, Dong Soo; Jeong, Jae Min [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea)

    2005-06-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-¹⁴C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in
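
    The final voxel-wise statistics step amounts to a two-sample t-test at every voxel of the spatially normalized volumes; a sketch on simulated data (the group sizes match the study, but the volumes, effect location and effect size are invented):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      deaf = rng.normal(1.0, 0.1, size=(6, 64, 80, 64))       # 6 normalized volumes
      control = rng.normal(1.0, 0.1, size=(7, 64, 80, 64))    # 7 normalized volumes
      control[:, 30:40, 20:30, 10:20] += 0.3                  # simulated higher uptake

      # Voxel-wise two-sample t-test over the subject axis
      t_map, p_map = stats.ttest_ind(deaf, control, axis=0)
      print(int((p_map < 0.001).sum()), "voxels below P<0.001 (uncorrected)")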

  13. A Primitive-Based 3D Object Recognition System

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected to begin with are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based subsystems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  14. 3D imaging of semiconductor components by discrete laminography

    Batenburg, Joost; Palenstijn, W.J.; Sijbers, J.

    2014-01-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the ...
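
    As a toy illustration of the discrete-tomography idea referred to above - reconstructing an image known to take only a few grey values from very few projections - the sketch below rebuilds a small binary image from just its row and column sums using a Gale-Ryser style greedy construction. This is only a didactic stand-in; the paper works with laminographic projection data over many angles.

```python
import numpy as np

def binary_from_two_projections(row_sums, col_sums):
    """Construct one binary matrix consistent with the given row/column sums
    (Gale-Ryser style greedy construction); returns None if none exists."""
    rows, cols = len(row_sums), len(col_sums)
    remaining = np.array(col_sums, dtype=int)
    img = np.zeros((rows, cols), dtype=int)
    # Place each row's 1s into the columns that still need the most 1s.
    for i in np.argsort(row_sums)[::-1]:          # rows by decreasing sum
        r = int(row_sums[i])
        order = np.argsort(remaining)[::-1][:r]   # the r most "hungry" columns
        if r > 0 and remaining[order[-1]] <= 0:
            return None                           # sums are inconsistent
        img[i, order] = 1
        remaining[order] -= 1
    if remaining.any():
        return None
    return img

# Demo: project a small binary phantom and reconstruct it from the two sums.
phantom = np.array([[0, 1, 1, 0],
                    [1, 1, 1, 1],
                    [0, 1, 1, 0]])
rec = binary_from_two_projections(phantom.sum(axis=1), phantom.sum(axis=0))
print(rec)
print("projections match:",
      np.array_equal(rec.sum(axis=1), phantom.sum(axis=1)) and
      np.array_equal(rec.sum(axis=0), phantom.sum(axis=0)))
```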

  15. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro, with a focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal that apoptosis is necessary and sufficient for initiating lumen formation, but that cell polarization is the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast cell line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non-growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicate that these acini grew faster than the cells comprising them. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  16. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.
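
    The core geometric step behind this kind of constraint-based single-view reconstruction is intersecting each pixel's viewing ray with a plane that the user constraints have pinned down. A minimal sketch of that step follows, with hypothetical intrinsics and plane parameters; the calibration of the image from the constraints, which the paper also performs, is not shown:

```python
import numpy as np

# Hypothetical camera intrinsics (focal length in pixels, principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A plane n . X = d in the camera frame, e.g. a ground-like plane 1.5 units
# below the camera with the image y-axis pointing down (pure assumptions).
n = np.array([0.0, 1.0, 0.0])
d = 1.5

def backproject_to_plane(u, v):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X = d."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera at origin
    t = d / (n @ ray)                                # scale so that n.(t*ray) = d
    return t * ray                                   # 3D point on the plane

pts = np.array([backproject_to_plane(u, v) for u, v in [(100, 400), (500, 400), (320, 300)]])
print(pts)                       # coplanar 3D points reconstructed up to scale
print(np.allclose(pts @ n, d))   # all satisfy the plane equation
```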

  17. Contour-based 3d motion recovery while zooming

    Martínez Marroquín, Elisa; Torras Genís, Carme

    2003-01-01

    This paper considers the problem of 3D motion recovery from a sequence of monocular images while zooming. Unlike the common trend based on point matches, the proposed method relies on the deformation of an active contour fitted to a reference object. We derive the relation between the contour deformation and the 3D motion components, assuming time-varying focal length and principal point. This relation allows us to present a method to extract the rotation matrix and the scaled translation alo...

  18. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method being developed by the ARESlab of Rome's La Sapienza University for the documentation and representation of archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in the Campania region along the Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from the local necropolis, which contained more than 100,000 artifacts now collected in the "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project concerns the archaeological area of the Insula Volusiana, located in the Forum Boarium close to the Sant'Omobono sacred area. During the studies, photographs were taken with Canon EOS 5D Mark II and Canon EOS 600D cameras, and 3D models and meshes were created in Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner was used for dense data collection in the archaeological area of Rome and to make a metric comparison between range-based and image-based techniques. In these projects, image-based modeling (IBM), as a low-cost technique, proved to deliver high accuracy when planned correctly, and it also showed how it helps to obtain a survey of complex strata and architectures compared to traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially given the "destructive" character of an excavation. The presented methodology is suitable for 3D registration, and its accuracy also improves the scientific value of the documentation.

  19. Combining different modalities for 3D imaging of biological objects

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper.

  20. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  1. Morphometrics, 3D Imaging, and Craniofacial Development.

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  2. MR-based tridirectional flow imaging. Acquisition and 3D analysis of flows in the thoracic aorta; MRT-basierte tridirektionale Flussbildgebung. Aufnahme und 3D-Analyse von Stroemungen in der thorakalen Aorta

    Unterhinninghofen, R. [Universitaet Karlsruhe, Institut fuer Technische Informatik, Karlsruhe (Germany); Deutsches Krebsforschungszentrum Heidelberg, Abteilung Radiologie (E010), Heidelberg (Germany); Ley, S. [Deutsches Krebsforschungszentrum Heidelberg, Abteilung Radiologie (E010), Heidelberg (Germany); Universitaetskinderklinik Heidelberg, Paediatrische Radiologie, Heidelberg (Germany); Frydrychowicz, A.; Markl, M. [Universitaetsklinikum Freiburg, Abteilung Roentgendiagnostik, Medizin Physik, Freiburg (Germany)

    2007-11-15

    Tridirectional MR flow imaging is a novel method that extends the well-established technique of phase-contrast flow measurement by vectorial velocity encoding, i.e., by encoding in all three spatial directions. Modern sequence protocols allow the acquisition of velocity vector fields with high spatial resolutions of 1-3 mm and temporal resolutions of 20-50 ms over the heart cycle. Using navigator techniques, data on the entire thoracic aorta can be acquired within about 20 min in free breathing. The subsequent computer-based data processing includes automatic correction of aliasing effects, eddy currents, gradient field inhomogeneities, and Maxwell terms. The data can be visualized in three dimensions using vector arrows, streamlines, or particle traces. The parallel visualization of morphological slices and of the surface of the vascular lumen in 3D enhances spatial and anatomical orientation. Furthermore, quantitative values such as blood flow velocity and volume, vorticity, and vessel wall shear stress can be determined. Modern software systems support the integrated flow-based analysis of typical aortic pathologies such as aneurysms and aortic insufficiency. To what extent this additional information will help us in making better therapeutic decisions needs to be studied in clinical trials. (orig.)
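
    One of the quantitative values mentioned above, vorticity, is simply the curl of the measured velocity vector field and can be estimated from the voxelized phase-contrast data by finite differences. A minimal sketch with a hypothetical velocity array and an assumed isotropic voxel spacing (real data would also carry a time dimension and the stated 1-3 mm resolution):

```python
import numpy as np

# Hypothetical velocity field on a regular grid: shape (3, nz, ny, nx) for
# (vx, vy, vz), with an assumed isotropic voxel spacing of 2 mm.
dx = 2e-3
rng = np.random.default_rng(1)
v = rng.normal(0.0, 0.1, size=(3, 32, 32, 32))

def vorticity(v, dx):
    """Curl of the velocity field, estimated with central differences."""
    vx, vy, vz = v
    # np.gradient returns derivatives along the (z, y, x) axes in that order.
    dvx_dz, dvx_dy, dvx_dx = np.gradient(vx, dx)
    dvy_dz, dvy_dy, dvy_dx = np.gradient(vy, dx)
    dvz_dz, dvz_dy, dvz_dx = np.gradient(vz, dx)
    return np.stack([dvz_dy - dvy_dz,     # curl_x
                     dvx_dz - dvz_dx,     # curl_y
                     dvy_dx - dvx_dy])    # curl_z

w = vorticity(v, dx)
print(w.shape, float(np.abs(w).mean()))   # (3, 32, 32, 32), mean |vorticity| in 1/s
```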

  3. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    Sun Yongle

    2015-01-01

    Full Text Available Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials, because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams bring particular difficulty to accurately evaluate the strain-rate sensitivity in experiments; while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) image is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of compressive strength (collapse stress) and tensile strength (0.2% offset yield point) are evaluated when considering different cell-wall material properties. The numerical results show that the rate dependence of cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with single fracture process zone occurring in tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation

  4. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    Sun, Yongle; Li, Q. M.; Withers, P. J.

    2015-09-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials, because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams bring particular difficulty to accurately evaluate the strain-rate sensitivity in experiments; while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) image is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of compressive strength (collapse stress) and tensile strength (0.2% offset yield point) are evaluated when considering different cell-wall material properties. The numerical results show that the rate dependence of cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam due to the softening associated with single fracture process zone occurring in tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to
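
    A common first step in this kind of image-based FE modelling is converting the segmented CT voxels directly into hexahedral elements, one element per solid voxel. The sketch below shows only that conversion for a hypothetical binary cell-wall mask; material models, boundary conditions and any mesh smoothing used in the actual study are not reproduced:

```python
import numpy as np

def voxels_to_hex_mesh(mask, voxel_size=1.0):
    """Turn a binary voxel mask into node coordinates and 8-node hex connectivity."""
    nz, ny, nx = mask.shape
    # The node grid has one more point than voxels along each axis.
    node_id = np.arange((nz + 1) * (ny + 1) * (nx + 1)).reshape(nz + 1, ny + 1, nx + 1)
    k, j, i = np.nonzero(mask)                      # indices of solid voxels
    # Eight corner nodes of each solid voxel (node ordering is an arbitrary convention).
    elems = np.stack([node_id[k,     j,     i    ], node_id[k,     j,     i + 1],
                      node_id[k,     j + 1, i + 1], node_id[k,     j + 1, i    ],
                      node_id[k + 1, j,     i    ], node_id[k + 1, j,     i + 1],
                      node_id[k + 1, j + 1, i + 1], node_id[k + 1, j + 1, i    ]], axis=1)
    zz, yy, xx = np.meshgrid(np.arange(nz + 1), np.arange(ny + 1), np.arange(nx + 1),
                             indexing="ij")
    nodes = voxel_size * np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)
    return nodes, elems

# Hypothetical segmented cell-wall mask (e.g. a CT threshold already applied).
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
nodes, elems = voxels_to_hex_mesh(mask, voxel_size=0.03)   # 30 um voxels (assumed)
print(nodes.shape, elems.shape)    # (125, 3) nodes, (8, 8) hex elements
```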

  5. Light field display and 3D image reconstruction

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described in a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
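
    The "refocusing" operation described here is, in its simplest form, a shift-and-add over the sub-aperture images of the 4D light field: each view is translated in proportion to its (u, v) offset from the central view and the results are averaged. A minimal sketch on a synthetic light field follows (the array layout L[u, v, y, x] and all values are assumptions, and the author's real-domain processing pipeline is far more elaborate):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, y, x].

    alpha sets the synthetic focal plane: each sub-aperture image is translated
    by alpha * (offset from the central view) and all views are averaged."""
    nu, nv, ny, nx = lightfield.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    acc = np.zeros((ny, nx))
    for u in range(nu):
        for v in range(nv):
            acc += shift(lightfield[u, v], (alpha * (u - cu), alpha * (v - cv)), order=1)
    return acc / (nu * nv)

# Tiny synthetic light field: a bright square whose position drifts with the view,
# mimicking an object off the focal plane (all values are placeholders).
nu = nv = 5
L = np.zeros((nu, nv, 64, 64))
for u in range(nu):
    for v in range(nv):
        L[u, v, 20 + u:30 + u, 20 + v:30 + v] = 1.0

sharp = refocus(L, alpha=-1.0)   # alpha chosen to cancel the per-view drift
blurred = refocus(L, alpha=0.0)  # plain average: the square smears across views
# In-focus square keeps its full 10x10 extent; the unfocused average only agrees
# on the small region common to all views.
print(int((sharp > 0.99).sum()), int((blurred > 0.99).sum()))
```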

  6. 3D Imaging with Structured Illumination for Advanced Security Applications

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  7. A novel method for identifying a graph-based representation of 3-D microvascular networks from fluorescence microscopy image stacks.

    Almasi, Sepideh; Xu, Xiaoyin; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L

    2015-02-01

    A novel approach to determine the global topological structure of a microvasculature network from noisy and low-resolution fluorescence microscopy data that does not require the detailed segmentation of the vessel structure is proposed here. The method is most appropriate for problems where the tortuosity of the network is relatively low and proceeds by directly computing a piecewise linear approximation to the vasculature skeleton through the construction of a graph in three dimensions whose edges represent the skeletal approximation and vertices are located at Critical Points (CPs) on the microvasculature. The CPs are defined as vessel junctions or locations of relatively large curvature along the centerline of a vessel. Our method consists of two phases. First, we provide a CP detection technique that, for junctions in particular, does not require any a priori geometric information such as direction or degree. Second, connectivity between detected nodes is determined via the solution of a Binary Integer Program (BIP) whose variables determine whether a potential edge between nodes is or is not included in the final graph. The utility function in this problem reflects both intensity-based and structural information along the path connecting the two nodes. Qualitative and quantitative results confirm the usefulness and accuracy of this method. This approach provides a means of correctly capturing the connectivity patterns in vessels that are missed by more traditional segmentation and binarization schemes because of imperfections in the images which manifest as dim or broken vessels. PMID:25515433

  8. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR), from computed tomography volume data and planar reference X-rays. Currently, this rendering process is very time-consuming, and real-time registration, which should at a minimum provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels from a CT dataset of 53 MB at a rate of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)
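
    At its core, a DRR is a set of line integrals of attenuation through the CT volume followed by exponential attenuation of the beam. The sketch below generates a crude parallel-beam DRR along one axis of a hypothetical CT volume; the GPU renderers discussed above (wobbled splatting, specialized raycasting) do the same job perspectively and orders of magnitude faster:

```python
import numpy as np

def parallel_drr(ct_hu, axis=0, mu_water=0.02, voxel_size_mm=1.0):
    """Crude parallel-beam DRR: convert HU to linear attenuation, integrate along
    one axis, and apply Beer-Lambert attenuation to a unit-intensity beam."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)          # HU -> linear attenuation (1/mm)
    mu = np.clip(mu, 0.0, None)                     # air and below attenuate nothing
    path_integral = mu.sum(axis=axis) * voxel_size_mm
    return np.exp(-path_integral)                   # transmitted fraction per detector pixel

# Hypothetical CT volume: a water cylinder with a denser rod inside (values in HU).
z, y, x = np.mgrid[0:64, 0:128, 0:128]
ct = np.full((64, 128, 128), -1000.0)                # air
ct[(y - 64) ** 2 + (x - 64) ** 2 < 40 ** 2] = 0.0    # water cylinder
ct[(y - 64) ** 2 + (x - 90) ** 2 < 8 ** 2] = 800.0   # bone-like rod

drr = parallel_drr(ct, axis=1)                       # project along the y axis
print(drr.shape, float(drr.min()), float(drr.max())) # (64, 128) image in [0, 1]
```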

  9. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas; Bai, Li

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized...
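
    The intensity standardization step mentioned above maps each slice's intensity scale onto a common standard scale so that similar intensities correspond to similar tissues before registration. A minimal percentile-landmark version of that idea is sketched below with hypothetical slices; the paper's actual standardization and feature-space registration are more involved:

```python
import numpy as np

def standardize(slice_img, ref_img, percentiles=(1, 10, 50, 90, 99)):
    """Map the intensity scale of slice_img onto that of ref_img by matching
    a few percentile landmarks with piecewise-linear interpolation."""
    p = np.asarray(percentiles, dtype=float)
    src = np.percentile(slice_img, p)
    dst = np.percentile(ref_img, p)
    return np.interp(slice_img, src, dst)   # values outside the range are clamped

# Hypothetical histological slices: same tissue, different staining/exposure.
rng = np.random.default_rng(2)
ref = rng.gamma(2.0, 30.0, size=(256, 256))
raw = 0.6 * ref + 40.0 + rng.normal(0, 2.0, size=ref.shape)   # shifted/scaled copy

std = standardize(raw, ref)
print(np.percentile(raw, 50).round(1), np.percentile(std, 50).round(1),
      np.percentile(ref, 50).round(1))   # the median of std is pulled onto the reference
```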

  10. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method.

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm(2)). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060-0.671) resulted in better accuracy than that for mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996-0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  11. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  12. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition

    Abdulla Al-Rawabdeh

    2016-01-01

    Full Text Available Landslides often cause economic losses, property damage, and loss of lives. Monitoring landslides using high spatial and temporal resolution imagery and the ability to quickly identify landslide regions are the basis for emergency disaster management. This study presents a comprehensive system that uses unmanned aerial vehicles (UAVs) and Semi-Global dense Matching (SGM) techniques to identify and extract landslide scarp data. The selected study area is located along a major highway in a mountainous region in Jordan, and contains creeping landslides induced by heavy rainfall. Field observations across the slope body and a deformation analysis along the highway and existing gabions indicate that the slope is active and that scarp features across the slope will continue to open and develop new tension crack features, leading to the downward movement of rocks. The identification of landslide scarps in this study was performed via a dense 3D point cloud of topographic information generated from high-resolution images captured using a low-cost UAV and a target-based camera calibration procedure for a low-cost large-field-of-view camera. An automated approach was used to accurately detect and extract the landslide head scarps based on geomorphological factors: the ratio of normalized eigenvalues (i.e., λ1/λ2 ≥ λ3) derived using principal component analysis, topographic surface roughness index values, and local-neighborhood slope measurements from the 3D image-based point cloud. Validation of the results was performed using root mean square error analysis and a confusion (error) matrix between manually digitized landslide scarps and the automated approaches. The experimental results using the fully automated 3D point-based analysis algorithms show that these approaches can effectively distinguish landslide scarps. The proposed algorithms can accurately identify and extract landslide scarps with centimeter-scale accuracy. In addition, the combination
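
    The geomorphological attributes listed above (normalized eigenvalue ratios, surface roughness, local slope) are all derived from a principal component analysis of each point's local neighbourhood in the dense point cloud. The sketch below computes such per-point PCA features on a hypothetical point cloud; the neighbourhood radius, noise levels and thresholds are placeholders rather than the paper's calibrated values:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_pca_features(points, radius=0.5):
    """Per-point normalized PCA eigenvalues and a simple roughness measure."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 4))              # l1n, l2n, l3n, roughness
    for i, p in enumerate(points):
        nbrs = points[tree.query_ball_point(p, radius)]
        if len(nbrs) < 4:
            continue
        cov = np.cov(nbrs.T)
        vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
        w = vals[::-1]                              # largest first
        wn = w / w.sum()                            # normalized eigenvalues
        # Roughness: RMS distance to the local best-fit plane (normal = smallest eigvec).
        normal = vecs[:, 0]
        d = (nbrs - nbrs.mean(axis=0)) @ normal
        feats[i] = [wn[0], wn[1], wn[2], np.sqrt(np.mean(d ** 2))]
    return feats

# Hypothetical cloud: a gently sloping surface with a rougher, step-like scarp band.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(4000, 2))
z = 0.1 * xy[:, 0] + 0.01 * rng.normal(size=len(xy))
scarp = (xy[:, 0] > 4) & (xy[:, 0] < 5)
z[scarp] += 0.5 * (xy[scarp, 0] - 4) + 0.1 * rng.normal(size=scarp.sum())
pts = np.column_stack([xy, z])

f = local_pca_features(pts)
print("mean roughness  scarp:", f[scarp, 3].mean().round(3),
      " flat:", f[~scarp, 3].mean().round(3))
```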

  13. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  14. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J. [Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Bioinformatics, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Computer Science, Rutgers, State University of New Jersey, Piscataway, New Jersey 08854 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States)

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
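
    The global registration stage described above drives the rigid alignment by maximizing mutual information between pCT and CBCT intensities. Below is a minimal joint-histogram estimate of that similarity measure for two hypothetical images; the optimizer and the subsequent deformable mesh-based local stage are not reproduced:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped images, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical 'pCT' slice and a noisy, intensity-remapped 'CBCT' of the same content.
rng = np.random.default_rng(4)
pct = rng.normal(0, 1, size=(128, 128))
cbct_aligned = 0.7 * pct + 0.3 * rng.normal(0, 1, size=pct.shape)
cbct_shifted = np.roll(cbct_aligned, 15, axis=1)     # simulate a misregistration

print("MI aligned :", round(mutual_information(pct, cbct_aligned), 3))
print("MI shifted :", round(mutual_information(pct, cbct_shifted), 3))  # lower
```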

  15. Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays

    Wetzstein, Gordon; Lanman, Douglas R.; Heidrich, Wolfgang; Raskar, Ramesh

    2011-01-01

    We develop tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary oblique views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. As multi-layer gen...

  16. Automatic structural matching of 3D image data

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  17. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T

    2014-12-01

    Full Text Available Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and in medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal, which includes MRI, CT scan images, PET and SPECT images. Of these, MRI and CT scan images of an organ, when taken, are available as a stack of 2D images acquired at different angles. This 2D stack of images is used to get a 3D view of the organ of interest, to aid doctors in easier diagnosis. Existing 3D reconstruction techniques are voxel-based techniques, which try to reconstruct the 3D view based on the intensity value stored at each voxel location. These techniques don't make use of the shape/depth information available in the 2D image stack. In this work, a 3D reconstruction technique for an MRI/CT 2D image stack, based on Shapelets, has been proposed. Here, the shape/depth information available in each 2D image in the image stack is manipulated to get a 3D reconstruction, which gives a more accurate 3D view of the organ of interest. Experimental results exhibit the efficiency of this proposed technique.

  18. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes. (paper)
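
    The patient-specific correspondence model described here relates internal tumour position to the external surrogate trajectory learned from the pre-treatment 4D CT. A minimal linear version of such a model is sketched below, fitted by least squares to hypothetical training pairs (the actual model, its validation phantoms and the volumetric image generation are not reproduced):

```python
import numpy as np

# Hypothetical training data from 4D CT: surrogate amplitude (and its temporal
# derivative, to separate inhale from exhale) versus tumour centroid position (mm).
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)
surrogate = np.sin(2 * np.pi * t / 4.0)                       # external marker signal
surr_rate = np.gradient(surrogate, t)
tumour = np.column_stack([
    6.0 * surrogate + 0.5 * surr_rate,                         # SI motion (mm)
    1.5 * surrogate,                                           # AP motion (mm)
    0.3 * surrogate,                                           # LR motion (mm)
]) + rng.normal(0, 0.2, size=(len(t), 3))

# Fit tumour position = A @ [surrogate, surrogate_rate, 1] by least squares.
X = np.column_stack([surrogate, surr_rate, np.ones_like(surrogate)])
A, *_ = np.linalg.lstsq(X, tumour, rcond=None)

# Predict the tumour centroid from surrogate samples during 'treatment'.
pred = X @ A
rmse = np.sqrt(np.mean((pred - tumour) ** 2, axis=0))
print("per-axis RMSE (mm):", rmse.round(2))                    # ~0.2 mm, the noise floor
```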

  19. Image-Based 3D Treatment Planning for Vaginal Cylinder Brachytherapy: Dosimetric Effects of Bladder Filling on Organs at Risk

    Purpose: To investigate the dosimetric effects of bladder filling on organs at risk (OARs) using three-dimensional image-based treatment planning for vaginal cylinder brachytherapy. Methods and Materials: Twelve patients with endometrial or cervical cancer underwent postoperative high–dose rate vaginal cylinder brachytherapy. For three-dimensional planning, patients were simulated by computed tomography with an indwelling catheter in place (empty bladder) and with 180 mL of sterile water instilled into the bladder (full bladder). The bladder, rectum, sigmoid, and small bowel (OARs) were contoured, and a prescription dose was generated for 10 to 35 Gy in 2 to 5 fractions at the surface or at 5 mm depth. For each OAR, the volume dose was defined by use of two different criteria: the minimum dose value in a 2.0-cc volume receiving the highest dose (D2cc) and the dose received by 50% of the OAR volume (D50%). International Commission on Radiation Units and Measurements (ICRU) bladder and rectum point doses were calculated for comparison. The cylinder-to-bowel distance was measured using the shortest distance from the cylinder apex to the contoured sigmoid or small bowel. Statistical analyses were performed with paired t tests. Results: Mean bladder and rectum D2cc values were lower than their respective ICRU doses. However, differences between D2cc and ICRU doses were small. Empty vs. full bladder did not significantly affect the mean cylinder-to-bowel distance (0.72 vs. 0.92 cm, p = 0.08). In contrast, bladder distention had appreciable effects on bladder and small bowel volume dosimetry. With a full bladder, the mean small bowel D2cc significantly decreased from 677 to 408 cGy (p = 0.004); the mean bladder D2cc did not increase significantly (1,179 cGy vs. 1,246 cGy, p = 0.11). Bladder distention decreased the mean D50% for both the bladder (441 vs. 279 cGy, p = 0.001) and the small bowel (168 vs. 132 cGy, p = 0.001). Rectum and sigmoid volume doses were not

  20. Image-Based 3D Treatment Planning for Vaginal Cylinder Brachytherapy: Dosimetric Effects of Bladder Filling on Organs at Risk

    Hung, Jennifer; Shen Sui; De Los Santos, Jennifer F. [Department of Radiation Oncology, University of Alabama Medical Center, Birmingham, AL (United States); Kim, Robert Y., E-mail: rkim@uabmc.edu [Department of Radiation Oncology, University of Alabama Medical Center, Birmingham, AL (United States)

    2012-07-01

    Purpose: To investigate the dosimetric effects of bladder filling on organs at risk (OARs) using three-dimensional image-based treatment planning for vaginal cylinder brachytherapy. Methods and Materials: Twelve patients with endometrial or cervical cancer underwent postoperative high-dose rate vaginal cylinder brachytherapy. For three-dimensional planning, patients were simulated by computed tomography with an indwelling catheter in place (empty bladder) and with 180 mL of sterile water instilled into the bladder (full bladder). The bladder, rectum, sigmoid, and small bowel (OARs) were contoured, and a prescription dose was generated for 10 to 35 Gy in 2 to 5 fractions at the surface or at 5 mm depth. For each OAR, the volume dose was defined by use of two different criteria: the minimum dose value in a 2.0-cc volume receiving the highest dose (D{sub 2cc}) and the dose received by 50% of the OAR volume (D{sub 50%}). International Commission on Radiation Units and Measurements (ICRU) bladder and rectum point doses were calculated for comparison. The cylinder-to-bowel distance was measured using the shortest distance from the cylinder apex to the contoured sigmoid or small bowel. Statistical analyses were performed with paired t tests. Results: Mean bladder and rectum D{sub 2cc} values were lower than their respective ICRU doses. However, differences between D{sub 2cc} and ICRU doses were small. Empty vs. full bladder did not significantly affect the mean cylinder-to-bowel distance (0.72 vs. 0.92 cm, p = 0.08). In contrast, bladder distention had appreciable effects on bladder and small bowel volume dosimetry. With a full bladder, the mean small bowel D{sub 2cc} significantly decreased from 677 to 408 cGy (p = 0.004); the mean bladder D{sub 2cc} did not increase significantly (1,179 cGy vs. 1,246 cGy, p = 0.11). Bladder distention decreased the mean D{sub 50%} for both the bladder (441 vs. 279 cGy, p = 0.001) and the small bowel (168 vs. 132 cGy, p = 0.001). Rectum
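
    The two volume-dose criteria used in the study, D2cc and D50%, can be read directly off the sorted dose values of the voxels inside each contoured OAR. A minimal sketch with a hypothetical dose grid, organ mask and voxel size follows (all numbers are placeholders, not the study's data):

```python
import numpy as np

def dvh_metrics(dose_cgy, organ_mask, voxel_volume_cc):
    """D2cc (minimum dose in the hottest 2 cc) and D50% (dose received by 50% of the organ)."""
    d = np.sort(dose_cgy[organ_mask])[::-1]            # organ voxel doses, hottest first
    n_2cc = max(1, int(round(2.0 / voxel_volume_cc)))  # number of voxels making up 2 cc
    d2cc = d[min(n_2cc, len(d)) - 1]                   # coldest voxel within the hottest 2 cc
    d50 = d[int(0.5 * len(d)) - 1]                     # dose covering 50% of the volume
    return float(d2cc), float(d50)

# Hypothetical dose grid (cGy) falling off with distance from a cylinder axis,
# and a small-bowel-like mask sitting a few centimetres away (pure illustration).
z, y, x = np.mgrid[0:60, 0:80, 0:80].astype(float)
r = np.sqrt((y - 40) ** 2 + (x - 40) ** 2) + 1.0
dose = 3.5e5 / r ** 2                                  # inverse-square-like falloff (cGy)
bowel = (z > 40) & ((y - 40) ** 2 + (x - 65) ** 2 < 10 ** 2)

voxel_volume_cc = 0.25 ** 3                            # 2.5 mm isotropic voxels (assumed)
d2cc, d50 = dvh_metrics(dose, bowel, voxel_volume_cc)
print(f"small bowel D2cc = {d2cc:.0f} cGy, D50% = {d50:.0f} cGy")
```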

  1. 3D Interpolation Method for CT Images of the Lung

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung exhibits a repeating transformation synchronized to the beating of the heart as an elastic body. There are discontinuities among neighboring CT images due to the beating of the heart if no special techniques are used in taking the CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are captured. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of the 3-D unnatural heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.
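
    Reconstructing a lung volume from a stack of CT slices ultimately requires interpolating new slices between the acquired ones. The sketch below shows plain linear interpolation between adjacent slices as a baseline on a hypothetical stack; the area-oriented interpolation method proposed in the record above is designed to improve on exactly this and is not reproduced here:

```python
import numpy as np

def interpolate_stack(slices, factor=4):
    """Insert (factor - 1) linearly interpolated slices between each pair of CT slices."""
    slices = np.asarray(slices, dtype=float)         # shape (n_slices, ny, nx)
    n, ny, nx = slices.shape
    out = np.empty(((n - 1) * factor + 1, ny, nx))
    for i in range(n - 1):
        for k in range(factor):
            w = k / factor
            out[i * factor + k] = (1 - w) * slices[i] + w * slices[i + 1]
    out[-1] = slices[-1]
    return out

# Hypothetical CT stack: a bright disc whose radius changes from slice to slice.
yy, xx = np.mgrid[0:128, 0:128]
stack = np.stack([((yy - 64) ** 2 + (xx - 64) ** 2 < (20 + 4 * i) ** 2).astype(float)
                  for i in range(8)])
volume = interpolate_stack(stack, factor=4)
print(stack.shape, "->", volume.shape)               # (8, 128, 128) -> (29, 128, 128)
```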

  2. Development and evaluation of a LOR-based image reconstruction with 3D system response modeling for a PET insert with dual-layer offset crystal design

    In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer-edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either single-layer (SL) or multi-layer array design with or without offset crystals. For effective data organization, a direct lines of response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometry and detector designs. The full-width-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs. By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal
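
    The detector-block lookup described above reduces to intersecting each line of response with virtual cylinders bounding the crystal layers. A minimal sketch of that ray/cylinder intersection follows, for an infinite cylinder about the scanner axis with placeholder radii; the rotation into block coordinates and the per-crystal boundary tests of the full ray tracer are omitted:

```python
import numpy as np

def ray_cylinder_intersection(p, d, radius):
    """Parameters t where the line p + t*d crosses an infinite cylinder of the given
    radius about the z axis; returns None if the line misses the cylinder."""
    # Only the transaxial (x, y) components matter for a cylinder along z.
    a = d[0] ** 2 + d[1] ** 2
    b = 2.0 * (p[0] * d[0] + p[1] * d[1])
    c = p[0] ** 2 + p[1] ** 2 - radius ** 2
    disc = b ** 2 - 4 * a * c
    if a == 0 or disc < 0:
        return None
    sq = np.sqrt(disc)
    return (-b - sq) / (2 * a), (-b + sq) / (2 * a)

# Hypothetical LOR between two crystals of a ring, tested against the inner and
# outer radii of a crystal layer (radii in mm are placeholders).
p = np.array([-60.0, 25.0, 3.0])                    # a point on the LOR
d = np.array([1.0, -0.3, 0.05]); d /= np.linalg.norm(d)
for r in (40.0, 50.0):                              # inner/outer layer radii
    hits = ray_cylinder_intersection(p, d, r)
    if hits:
        entry, exit_ = (p + t * d for t in hits)
        print(f"r = {r:.0f} mm: chord length = {np.linalg.norm(exit_ - entry):.1f} mm")
```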

  3. Detailed Primitive-Based 3d Modeling of Architectural Elements

    Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.

    2012-07-01

    The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and the successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers efficient-in-size, reality-based textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and then the procedure fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented on.
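
    Fitting a geometric primitive to the points of a user-selected area is the central operation of the modeling step; for planar façade elements it reduces to a least-squares plane fit. A minimal SVD-based sketch on a hypothetical point patch follows (the TAPENADE reconstruction, the disparity/displacement maps and curved primitives such as columns are not reproduced):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector of the centred points with the smallest
    # singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

# Hypothetical facade patch selected by the user: a slightly noisy planar point set.
rng = np.random.default_rng(6)
u, v = rng.uniform(-1, 1, size=(2, 500))
true_normal = np.array([0.2, 0.1, 0.97]); true_normal /= np.linalg.norm(true_normal)
basis = np.linalg.svd(true_normal[None, :])[2][1:]          # two in-plane directions
pts = u[:, None] * basis[0] + v[:, None] * basis[1] + 0.005 * rng.normal(size=(500, 3))

c, n = fit_plane(pts)
rms = np.sqrt(np.mean(((pts - c) @ n) ** 2))
print("normal agrees:", abs(n @ true_normal).round(4), " RMS residual:", rms.round(4))
```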

  4. Intensity-Based Registration of Freehand 3D Ultrasound and CT-scan Images of the Kidney

    Leroy, Antoine; Payan, Yohan; Troccaz, Jocelyne

    2007-01-01

    This paper presents a method to register a pre-operative Computed-Tomography (CT) volume to a sparse set of intra-operative Ultra-Sound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information to an intra-operative coordinate system. The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming the reproducibility of kidney motion during breathing, and no deformation of the organ, the method consists in optimizing a rigid 6 Degree Of Freedom (DOF) transform by evaluating at each step the similarity between the set of US images and the CT volume. The correlation between CT and US images being naturally rather poor, the images have been preprocessed in order to increase their similarity. Among the similarity measures formerly studied in the context of medical image registration, Correlation Ratio (CR) turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, n...
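
    The Correlation Ratio retained in this work measures how well the US intensity can be predicted as a possibly non-linear function of the CT intensity, via the conditional variance within CT intensity bins. A minimal sketch of that measure for two hypothetical preprocessed images:

```python
import numpy as np

def correlation_ratio(ct, us, bins=32):
    """Correlation ratio eta^2(US | CT): 1 - E[Var(US | CT bin)] / Var(US)."""
    x = ct.ravel()
    y = us.ravel()
    edges = np.linspace(x.min(), x.max(), bins + 1)
    labels = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    total_var = y.var()
    within = 0.0
    for b in range(bins):
        yb = y[labels == b]
        if yb.size:
            within += yb.size * yb.var()
    return 1.0 - within / (y.size * total_var)

# Hypothetical preprocessed slices: the 'US' is a non-linear function of the 'CT'
# plus noise, so CR is high even though linear correlation may be modest.
rng = np.random.default_rng(7)
ct = rng.uniform(-1, 1, size=(128, 128))
us_good = np.abs(ct) + 0.05 * rng.normal(size=ct.shape)      # functional dependence
us_bad = rng.normal(size=ct.shape)                            # unrelated image

print("CR (related)  :", round(correlation_ratio(ct, us_good), 3))   # close to 1
print("CR (unrelated):", round(correlation_ratio(ct, us_bad), 3))    # close to 0
```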

  5. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume; Dufait, Remi; Jensen, Jørgen Arendt

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation direction and 150 mm in depth. ...

  6. Preliminary examples of 3D vector flow imaging

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev;

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental...... ultrasound scanner SARUS on a flow rig system with steady flow. The vessel of the flow-rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic prole with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal...... acquisition as opposed to magnetic resonance imaging (MRI). The results demonstrate that the 3D TO method is capable of performing 3D vector flow imaging....

  7. Optimizing an SEM-based 3D surface imaging technique for recording bond coat surface geometry in thermal barrier coatings

    Creation of three-dimensional representations of surfaces from images taken at two or more view angles is a well-established technique applied to optical images and is frequently used in combination with scanning electron microscopy (SEM). The present work describes specific steps taken to optimize and enhance the repeatability of three-dimensional surfaces reconstructed from SEM images. The presented steps result in an approximately tenfold improvement in the repeatability of the surface reconstruction compared to more standard techniques. The enhanced techniques presented can be used with any SEM friendly samples. In this work the modified technique was developed in order to accurately quantify surface geometry changes in metallic bond coats used with thermal barrier coatings (TBCs) to provide improved turbine hot part durability. Bond coat surfaces are quite rough, and accurate determination of surface geometry change (rumpling) requires excellent repeatability. Rumpling is an important contributor to TBC failure, and accurate quantification of rumpling is important to better understanding of the failure behavior of TBCs. (paper)

  8. Diffractive optical element for creating visual 3D images.

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  9. Ice shelf melt rates and 3D imaging

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate-bandwidth, VHF, ice-penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra-wide-bandwidth (UWB), UHF, ice-penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multi-channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider-swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  10. Effective classification of 3D image data using partitioning methods

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
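    A rough sketch of the recursive-partitioning idea above is given below, assuming binary ROI volumes, octant splitting, and a two-sample t-test as the "discriminative power" criterion; the function names, thresholds and toy data are illustrative and not taken from the paper.

```python
# Minimal sketch: recursively split a 3-D volume into hyper-rectangles and keep a
# box as an attribute once its ROI-voxel-count discriminates between two groups.
import numpy as np
from scipy.stats import ttest_ind

def roi_count(volume, box):
    (z0, z1), (y0, y1), (x0, x1) = box
    return int(volume[z0:z1, y0:y1, x0:x1].sum())

def partition(volumes_a, volumes_b, box, min_side=4, alpha=0.05, attrs=None):
    """Split `box` until its ROI-count attribute discriminates between the two
    groups (p < alpha) or the box becomes too small to split further."""
    if attrs is None:
        attrs = []
    a = [roi_count(v, box) for v in volumes_a]
    b = [roi_count(v, box) for v in volumes_b]
    _, p = ttest_ind(a, b, equal_var=False)
    sides = [hi - lo for lo, hi in box]
    if p < alpha or min(sides) < 2 * min_side:
        attrs.append((box, p))            # keep this hyper-rectangle as an attribute
        return attrs
    mids = [(lo + hi) // 2 for lo, hi in box]
    for dz in range(2):                   # otherwise split into 8 octants and recurse
        for dy in range(2):
            for dx in range(2):
                sub = tuple((lo, m) if d == 0 else (m, hi)
                            for (lo, hi), m, d in zip(box, mids, (dz, dy, dx)))
                partition(volumes_a, volumes_b, sub, min_side, alpha, attrs)
    return attrs

# toy usage: two groups of random 32^3 binary "ROI" volumes
rng = np.random.default_rng(0)
grp_a = [rng.random((32, 32, 32)) > 0.7 for _ in range(10)]
grp_b = [rng.random((32, 32, 32)) > 0.7 for _ in range(10)]
attributes = partition(grp_a, grp_b, ((0, 32), (0, 32), (0, 32)))
print(len(attributes), "candidate attributes")
```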

  11. Autonomous Planetary 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    A common task for many deep space missions is autonomous generation of 3-D representations of planetary surfaces onboard unmanned spacecraft. The basic problem for this class of missions is that the closed loop time is far too long. The closed loop time is defined as the time from when a human...... of seconds to a few minutes, the closed loop time effectively precludes active human control. The only way to circumvent this problem is to build an artificial feature extractor operating autonomously onboard the spacecraft. Different artificial feature extractors are presented and their efficiency...... is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  12. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  13. 3-D capacitance density imaging system

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  14. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
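    The cueing stage can be made concrete with the short sketch below; the match-score map, the peak detector, the thumbnail size and the figure-of-merit (strongest peak inside a thumbnail) are all assumptions, not the paper's exact definitions.

```python
# Illustrative sketch: rank image thumbnails by a figure-of-merit computed from
# local maxima of a degree-of-match map, then present them in descending order.
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(match_map, min_score=0.5, footprint=7):
    """Return (row, col, score) for local maxima of the match map above a floor."""
    peaks = (match_map == maximum_filter(match_map, size=footprint)) & (match_map > min_score)
    rows, cols = np.nonzero(peaks)
    return list(zip(rows, cols, match_map[rows, cols]))

def rank_thumbnails(match_map, thumb=128, min_score=0.5):
    fom = {}
    for r, c, s in local_maxima(match_map, min_score):
        key = (r // thumb, c // thumb)        # which thumbnail the peak falls in
        fom[key] = max(fom.get(key, 0.0), s)  # assumed FOM: strongest peak inside
    return sorted(fom.items(), key=lambda kv: kv[1], reverse=True)

match_map = np.random.default_rng(1).random((1024, 1024))
for (ty, tx), score in rank_thumbnails(match_map)[:5]:
    print(f"thumbnail ({ty},{tx}) FOM={score:.3f}")
```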

  15. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    2010-01-01

    Vol. 6, No. 7 (2010), pp. 617-620. ISSN 1931-7360 R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords: reconstruction methods * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  16. Projector-Based Augmented Reality for Intuitive Intraoperative Guidance in Image-Guided 3D Interstitial Brachytherapy

    Purpose: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. Methods and Materials: The developed system consists of a common video projector, two high-resolution charge coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. Results: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). Conclusions: The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation

  17. Acoustic 3D imaging of dental structures

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  18. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. The passive methods use information contained in the images, while the active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain types of objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  19. A 3D Model Reconstruction Method Using Slice Images

    LI Hong-an; KANG Bao-sheng

    2013-01-01

    To obtain a high-accuracy 3D model from slice images, a new model reconstruction method using slice images is proposed. The outermost contours are extracted from the slice images using an improved GVF-Snake model with an optimized force field and a ray method. The 3D model is then reconstructed by contour connection using an improved shortest-diagonal method and a judgment function for contour fracture. The results show that the accuracy of the reconstructed 3D model is improved.

  20. Fully automatic plaque segmentation in 3-D carotid ultrasound images.

    Cheng, Jieyu; Li, He; Xiao, Feng; Fenster, Aaron; Zhang, Xuming; He, Xiaoling; Li, Ling; Ding, Mingyue

    2013-12-01

    Automatic segmentation of the carotid plaques from ultrasound images has been shown to be an important task for monitoring progression and regression of carotid atherosclerosis. Considering the complex structure and heterogeneity of plaques, a fully automatic segmentation method based on media-adventitia and lumen-intima boundary priors is proposed. This method combines image intensity with structure information in both initialization and a level-set evolution process. Algorithm accuracy was examined on the common carotid artery part of 26 3-D carotid ultrasound images (34 plaques ranging in volume from 2.5 to 456 mm(3)) by comparing the results of our algorithm with manual segmentations of two experts. Evaluation results indicated that the algorithm yielded total plaque volume (TPV) differences of -5.3 ± 12.7 and -8.5 ± 13.8 mm(3) and absolute TPV differences of 9.9 ± 9.5 and 11.8 ± 11.1 mm(3). Moreover, high correlation coefficients in generating TPV (0.993 and 0.992) between algorithm results and both sets of manual results were obtained. The automatic method provides a reliable way to segment carotid plaque in 3-D ultrasound images and can be used in clinical practice to estimate plaque measurements for management of carotid atherosclerosis. PMID:24063959
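    For orientation only, the sketch below runs a generic level-set (morphological Chan-Vese) refinement on a toy 3-D volume with a hand-placed spherical initialization; it stands in for, and is much simpler than, the published method with its media-adventitia and lumen-intima boundary priors.

```python
# Rough stand-in for the level-set evolution step on a 3-D ultrasound volume.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_blob(volume, seed, radius=8, iterations=35):
    """volume: 3-D intensities; seed: (z, y, x) point assumed to lie inside the plaque."""
    z, y, x = np.indices(volume.shape)
    init = ((z - seed[0])**2 + (y - seed[1])**2 + (x - seed[2])**2) <= radius**2
    return morphological_chan_vese(volume.astype(float), iterations, init_level_set=init)

# toy volume: a bright blob on a speckle-like background
rng = np.random.default_rng(9)
vol = rng.random((64, 64, 64)) * 0.3
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol[((zz - 32)**2 + (yy - 32)**2 + (xx - 32)**2) <= 10**2] += 0.7
mask = segment_blob(vol, seed=(32, 32, 32))
print(int(mask.sum()), "voxels segmented")
```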

  1. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Tran, Thanh N; Nguyen, Thanh T; Willemsz, Tofan A; van Kessel, Gijs; Frijlink, Henderik W; van der Voort Maarschalk, Kees

    2012-01-01

    Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas with its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational issue of
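    As a rough illustration of the density-based idea (not the authors' implementation), foreground voxels of a 3-D binary image can be fed to DBSCAN as 3-D points; the eps and min_samples values below are arbitrary toy choices.

```python
# Minimal sketch: cluster foreground voxels of a 3-D binary image with DBSCAN,
# which finds clusters of arbitrary shape and labels sparse voxels as noise (-1).
import numpy as np
from sklearn.cluster import DBSCAN

volume = np.zeros((64, 64, 64), dtype=bool)
volume[10:20, 10:20, 10:20] = True            # a dense blob
volume[40:45, 40:60, 40:45] = True            # an elongated blob

points = np.argwhere(volume)                  # (N, 3) voxel coordinates
labels = DBSCAN(eps=1.8, min_samples=5).fit_predict(points)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {int(np.sum(labels == -1))} noise voxels")
```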

  3. SHINKEI - a novel 3D isotropic MR neurography technique: technical advantages over 3DIRTSE-based imaging

    Kasper, Jared M.; Wadhwa, Vibhor; Xi, Yin [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Scott, Kelly M. [University of Texas Southwestern Medical Center, Physical Medicine and Rehabilitation, Dallas, TX (United States); Rozen, Shai [University of Texas Southwestern Medical Center, Plastic Surgery, Dallas, TX (United States); Chhabra, Avneesh [University of Texas Southwestern Medical Center, Musculoskeletal Radiology, Dallas, TX (United States); Johns Hopkins University, Baltimore, MD (United States)

    2015-06-01

    Technical assessment of SHINKEI pulse sequence and conventional 3DIRTSE for LS plexus MR neurography. Twenty-one MR neurography examinations of the LS plexus were performed at 3 T, using 1.5-mm isotropic 3DIRTSE and SHINKEI sequences. Images were evaluated for motion and pulsation artefacts, nerve signal-to-noise ratio, contrast-to-noise ratio, nerve-to-fat ratio, muscle-to-fat ratio, fat suppression homogeneity and depiction of LS plexus branches. Paired Student t test was used to assess differences in nerve conspicuity (p < 0.05 was considered statistically significant). ICC correlation was obtained for intraobserver performance. Four examinations were excluded due to prior spine surgery. Bowel motion artefacts, pulsation artefacts, heterogeneous fat saturation and patient motion were seen in 16/17, 0/17, 17/17, 2/17 on 3DIRTSE and 0/17, 0/17, 0/17, 1/17 on SHINKEI. SHINKEI performed better (p < 0.01) for nerve signal-to-noise, contrast-to-noise, nerve-to-fat and muscle-to-fat ratios. 3DIRTSE and SHINKEI showed all LS plexus nerve roots, sciatic and femoral nerves. Smaller branches including obturator, lateral femoral cutaneous and iliohypogastric nerves were seen in 10/17, 5/17, 1/17 on 3DIRTSE and 17/17, 16/17, 7/17 on SHINKEI. Intraobserver reliability was excellent. SHINKEI MRN demonstrates homogeneous and superior fat suppression with increased nerve signal- and contrast-to-noise ratios resulting in better conspicuity of smaller LS plexus branches. (orig.)

  4. How does collaborative 3D screen-based computer simulation training influence diagnostic skills of radiographic images and peer communication?

    Söderström, Tor; Häll, Lars; Nilsson, Tore; Ahlqvist, Jan

    2012-01-01

    This study compares the influence of two learning conditions – a screen-based virtual reality radiology simulator and a conventional PowerPoint slide presentation – that teach radiographic interpretation to dental students working in small collaborative groups. The study focused on how the students communicated and how proficient they became at radiographic interpretation. The sample consisted of 36 participants – 20 women and 16 men – and used a pretest/posttest group design with the partici...

  5. Strain-rate sensitivity of foam materials: A numerical study using 3D image-based finite element model

    Sun Yongle; Li Q.M.; Withers P.J.

    2015-01-01

    Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials because, on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams make it particularly difficult to evaluate strain-rate sensitivity accurately in experiments, while on the other hand numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture t...

  6. Dense 3D Point Cloud Generation from UAV Images Using Image Matching and Global Optimization

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicles (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically from over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining local match region in image- or object- space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In case of image-based matching results, we observed some blanks in 3D point clouds. In case of object space-based matching results, we observed more blunders than image-based matching ones and noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
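    The object-space matching idea can be sketched as below: for a fixed ground position, each candidate height is projected into two images and the height with the highest grey-level correlation wins. The 3x4 projection matrices, window size and correlation measure are simplifying assumptions, not the paper's exact formulation.

```python
# Simplified object-space matching: scan candidate heights at one (x, y) position.
import numpy as np

def project(P, X):
    """Project homogeneous 3-D point X = (x, y, z, 1) with a 3x4 matrix P."""
    u = P @ X
    return u[0] / u[2], u[1] / u[2]

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else -1.0

def window(img, u, v, w):
    r, c = int(round(v)), int(round(u))
    if r - w < 0 or c - w < 0 or r + w + 1 > img.shape[0] or c + w + 1 > img.shape[1]:
        return None                           # projection falls outside the image
    return img[r - w:r + w + 1, c - w:c + w + 1]

def best_height(img1, img2, P1, P2, x, y, heights, w=5):
    best_z, best_score = None, -2.0
    for z in heights:
        X = np.array([x, y, z, 1.0])
        w1 = window(img1, *project(P1, X), w)
        w2 = window(img2, *project(P2, X), w)
        if w1 is not None and w2 is not None:
            s = ncc(w1, w2)
            if s > best_score:
                best_z, best_score = z, s
    return best_z, best_score

# toy usage with two identical images and simple hypothetical cameras
img = np.random.default_rng(10).random((200, 200))
P1 = np.array([[1.0, 0, 0.0, 100], [0, 1.0, 0, 100], [0, 0, 0, 1.0]])
P2 = np.array([[1.0, 0, 0.2, 100], [0, 1.0, 0, 100], [0, 0, 0, 1.0]])
print(best_height(img, img, P1, P2, x=10.0, y=-5.0, heights=np.arange(0.0, 20.0, 0.5)))
```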

  7. Software for 3D diagnostic image reconstruction and analysis

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software prepared in our laboratories devoted to 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. The results of the anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at
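    A minimal sketch of estimating the coupling transform from a few user-selected marker pairs is shown below; it uses the standard least-squares (Kabsch) rigid solution, which the abstract does not confirm as the actual method, and the marker coordinates are synthetic.

```python
# Rigid transform (rotation R, translation t) from corresponding 3-D markers.
import numpy as np

def rigid_from_markers(src, dst):
    """Least-squares R, t with dst ~ R @ src + t; src, dst are (N, 3), N >= 3."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy check with four marker pairs related by a known rotation and translation
rng = np.random.default_rng(2)
src = rng.random((4, 3)) * 100.0
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([5.0, -2.0, 10.0])
R, t = rigid_from_markers(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```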

  8. 3D imaging of aortic aneurysms using spiral CT

    The use of 3D reconstructions (3D display technique and maximum intensity projection) in spiral CT for the diagnostic evaluation of aortic aneurysms is explained. The available data showing 12 aneurysms of the abdominal and thoracic aorta (10 cases of aneurysma verum, 2 cases of aneurysma dissecans) were selected to verify the value of 3D images in comparison to transversal CT displays. The 3D reconstructions of the spiral CT, unlike projection angiography, give insight into the vessel from various points of view. Such information is helpful for quickly gathering a picture of the volume and contours of a pathological process in the vessel. 3D post-processing of data is advisable if the comparison of tomograms and projection images produces findings of unclear definition which need clarification prior to surgery. (orig.)

  9. SU-E-J-200: A Dosimetric Analysis of 3D Versus 4D Image-Based Dose Calculation for Stereotactic Body Radiation Therapy in Lung Tumors

    Ma, M; Rouabhi, O; Flynn, R; Xia, J [University of Iowa Hospitals and Clinics, Iowa City, IA (United States); Bayouth, J [University of Wisconsin, Madison, WI (United States)

    2014-06-01

    Purpose: To evaluate the dosimetric difference between 3D and 4D-weighted dose calculation using patient-specific respiratory traces and deformable image registration for stereotactic body radiation therapy in lung tumors. Methods: Two dose calculation techniques, 3D and 4D-weighted dose calculation, were used for dosimetric comparison for 9 lung cancer patients. The magnitude of the tumor motion varied from 3 mm to 23 mm. Breath-hold exhale CT was used for 3D dose calculation with ITV generated from the motion observed from 4D-CT. For 4D-weighted calculation, dose of each binned CT image from the ten breathing amplitudes was first recomputed using the same planning parameters as those used in the 3D calculation. The dose distribution of each binned CT was mapped to the breath-hold CT using deformable image registration. The 4D-weighted dose was computed by summing the deformed doses with the temporal probabilities calculated from their corresponding respiratory traces. Dosimetric evaluation criteria include lung V20, mean lung dose, and mean tumor dose. Results: Compared with the 3D calculation, lung V20, mean lung dose, and mean tumor dose using 4D-weighted dose calculation were changed by −0.67% ± 2.13%, −4.11% ± 6.94% (−0.36 Gy ± 0.87 Gy), and −1.16% ± 1.36% (−0.73 Gy ± 0.85 Gy), respectively. Conclusion: This work demonstrates that the conventional 3D dose calculation method may overestimate the lung V20, MLD, and MTD. The absolute difference between 3D and 4D-weighted dose calculation in lung tumors may not be clinically significant. This research is supported by Siemens Medical Solutions USA, Inc and Iowa Center for Research By Undergraduates.
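    The 4D-weighted accumulation reduces to a probability-weighted sum once the per-phase doses have been deformed onto the reference CT grid; the sketch below assumes that deformation has already been done and uses toy arrays.

```python
# Minimal sketch of 4D-weighted dose accumulation on the reference (breath-hold) grid.
import numpy as np

def weighted_4d_dose(deformed_phase_doses, phase_probabilities):
    """deformed_phase_doses: list of 3-D dose arrays already mapped to the reference CT.
    phase_probabilities: fraction of the breathing cycle spent in each phase."""
    w = np.asarray(phase_probabilities, dtype=float)
    w = w / w.sum()                                   # normalise to sum to 1
    return sum(p * d for p, d in zip(w, deformed_phase_doses))

# toy example with 10 breathing phases on a small grid
rng = np.random.default_rng(3)
doses = [rng.random((40, 40, 40)) for _ in range(10)]
probs = rng.random(10)
d4d = weighted_4d_dose(doses, probs)
print(d4d.shape, float(d4d.mean()))
```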

  10. Performance Evaluation of Some Methods for 3D Depth Reconstruction from a Single Image

    Wen, Wei

    2009-01-01

    We studied the problem of 3D reconstruction from a single image. 3D reconstruction is one of the basic problems in computer vision and is usually achieved using two or more images of a scene. However, recent research in the computer vision field has enabled us to recover 3D information even from a single image. The methods used in such reconstructions are based on depth information, projection geometry, image content, human psychology and so on. Each met...

  11. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. PMID:27498015

  12. Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images

    Gill, Jeremy D.; Ladak, Hanif M.; Steinman, David A.; Fenster, Aaron

    1999-05-01

    In this paper, we report on a semi-automatic approach to segmentation of carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which first is rapidly inflated to approximately find the boundary of the artery, then is further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position in the 3D US image, which is within the carotid vessel. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved, or the 3D US image has poorly defined vessel edges.

  13. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Buildings Using Ground-Based and Aerial (UAV) Digital Images

    Zarnowski Aleksander

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAVs). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition.

  14. Diffusible iodine-based contrast-enhanced computed tomography (diceCT): an emerging tool for rapid, high-resolution, 3-D imaging of metazoan soft tissues.

    Gignac, Paul M; Kley, Nathan J; Clarke, Julia A; Colbert, Matthew W; Morhardt, Ashley C; Cerio, Donald; Cost, Ian N; Cox, Philip G; Daza, Juan D; Early, Catherine M; Echols, M Scott; Henkelman, R Mark; Herdina, A Nele; Holliday, Casey M; Li, Zhiheng; Mahlow, Kristin; Merchant, Samer; Müller, Johannes; Orsbon, Courtney P; Paluh, Daniel J; Thies, Monte L; Tsai, Henry P; Witmer, Lawrence M

    2016-06-01

    Morphologists have historically had to rely on destructive procedures to visualize the three-dimensional (3-D) anatomy of animals. More recently, however, non-destructive techniques have come to the forefront. These include X-ray computed tomography (CT), which has been used most commonly to examine the mineralized, hard-tissue anatomy of living and fossil metazoans. One relatively new and potentially transformative aspect of current CT-based research is the use of chemical agents to render visible, and differentiate between, soft-tissue structures in X-ray images. Specifically, iodine has emerged as one of the most widely used of these contrast agents among animal morphologists due to its ease of handling, cost effectiveness, and differential affinities for major types of soft tissues. The rapid adoption of iodine-based contrast agents has resulted in a proliferation of distinct specimen preparations and scanning parameter choices, as well as an increasing variety of imaging hardware and software preferences. Here we provide a critical review of the recent contributions to iodine-based, contrast-enhanced CT research to enable researchers just beginning to employ contrast enhancement to make sense of this complex new landscape of methodologies. We provide a detailed summary of recent case studies, assess factors that govern success at each step of the specimen storage, preparation, and imaging processes, and make recommendations for standardizing both techniques and reporting practices. Finally, we discuss potential cutting-edge applications of diffusible iodine-based contrast-enhanced computed tomography (diceCT) and the issues that must still be overcome to facilitate the broader adoption of diceCT going forward. PMID:26970556

  15. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    ZHANG Chunsen

    2015-07-01

    This paper proposes a fast texture reconstruction method for three-dimensional city models based on oblique aerial images. Building on photogrammetric and computer vision theory, and using a city building digital surface model obtained in prior processing, the collinearity equations are used to compute the geometric projection between object and image space in order to obtain the three-dimensional structure and texture information; an optimization algorithm then selects the best texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Results on real image textures show that the method reconstructs 3D city model textures with a high degree of automation, vivid visual effect and low cost, providing an effective means for rapid and large-scale reconstruction of real textures for 3D city models.
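    The projection step behind such texture lookup is the classical collinearity equation; the sketch below shows it for one surface point, with the camera parameters (focal length, principal point, exterior orientation) and the sign conventions being illustrative assumptions.

```python
# Collinearity-equation projection of an object point into an image.
import numpy as np

def collinearity_project(X, C, R, f, x0=0.0, y0=0.0):
    """Project object point X (3,) for a camera at centre C with rotation R and
    focal length f; returns image coordinates (x, y) in the units of f."""
    d = R @ (X - C)                       # point in the camera coordinate system
    x = x0 - f * d[0] / d[2]
    y = y0 - f * d[1] / d[2]
    return x, y

# toy example: near-nadir camera 500 m above a roof corner
C = np.array([0.0, 0.0, 500.0])
R = np.diag([1.0, -1.0, -1.0])            # simple orientation with the z-axis pointing down
x, y = collinearity_project(np.array([10.0, 20.0, 15.0]), C, R, f=0.05)
print(x, y)
```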

  16. Preparing diagnostic 3D images for image registration with planning CT images

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information which is poorly defined or absent in planning CT images. Our two years of clinical experience has shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film, from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before this data is suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As each problem was discovered, we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non-contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We have learned that correcting these problems requires a specific sequence of steps applied in a fixed order. Also, we found that fast feedback and large image capacity (at least 2000 x 2000 12-bit pixels) are essential for practical application

  17. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume;

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...... phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. This results for both techniques in a frame rate of 18 Hz. The implemented synthetic aperture technique...... cystic resolution, which expresses the ability to detect anechoic cysts in uniform scattering media, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels......

  18. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can reduce considerably the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture provide effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.

  19. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging and Row-Column Addressing of 2-D Transducer Arrays

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available to medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any desired slice after the scan has been completed. This allows for precise measurements of organ dimensions and makes the scan more operator-independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinic...

  20. Image based cardiac acceleration map using statistical shape and 3D+t myocardial tracking models; in-vitro study on heart phantom

    Pashaei, Ali; Piella, Gemma; Planes, Xavier; Duchateau, Nicolas; de Caralt, Teresa M.; Sitges, Marta; Frangi, Alejandro F.

    2013-03-01

    It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from local displacement. The framework has been tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation has been demonstrated between the estimated displacement function from the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique to measure the cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and optimization of CRT.
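    Once the displacement of each myocardial point has been tracked over the sequence, acceleration follows from a second time derivative; the sketch below uses central finite differences and an assumed frame interval.

```python
# Minimal sketch: local acceleration from a tracked 3-D displacement time series.
import numpy as np

def acceleration(displacement, dt):
    """displacement: (T, N, 3) positions/displacements of N points over T frames.
    Returns the (T, N, 3) second time derivative estimated by finite differences."""
    return np.gradient(np.gradient(displacement, dt, axis=0), dt, axis=0)

# toy example: sinusoidal motion of 100 points sampled at 25 volumes per second
t = np.arange(0, 1.0, 0.04)
disp = np.sin(2 * np.pi * t)[:, None, None] * np.ones((1, 100, 3))
acc = acceleration(disp, dt=0.04)
print(acc.shape)                          # (25, 100, 3)
```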

  1. A Visual Similarity-Based 3D Search Engine

    Lmaati, Elmustapha Ait; Oirrak, Ahmed El; M.N. Kaddioui

    2009-01-01

    Retrieval systems for 3D objects are required because 3D databases used around the web are growing. In this paper, we propose a visual similarity based search engine for 3D objects. The system is based on a new representation of 3D objects given by a 3D closed curve that captures all information about the surface of the 3D object. We propose a new 3D descriptor, which is a combination of three signatures of this new representation, and we implement it in our interactive web based search engin...

  2. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters that are always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
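    The two ingredients of the algorithm (a k-means background estimate and a background-corrected threshold) can be sketched as below; the 42%-of-peak fraction, patch size and voxel volume are illustrative assumptions, not the calibrated values from the paper.

```python
# Sketch: background-corrected threshold segmentation of a PET lesion patch.
import numpy as np
from sklearn.cluster import KMeans

def estimate_background(suv_patch):
    """Cluster voxel intensities into 2 groups; the lower centre is taken as background."""
    centres = KMeans(n_clusters=2, n_init=10).fit(suv_patch.reshape(-1, 1)).cluster_centers_
    return float(centres.min())

def metabolic_tumor_volume(suv_patch, voxel_volume_ml, fraction=0.42):
    bg = estimate_background(suv_patch)
    threshold = bg + fraction * (suv_patch.max() - bg)   # background-corrected threshold
    mask = suv_patch >= threshold
    return mask, mask.sum() * voxel_volume_ml

# toy lesion: warm background with a hot sphere of radius 8 voxels
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
patch = 1.0 + 9.0 * (x**2 + y**2 + z**2 <= 8**2) \
        + 0.1 * np.random.default_rng(4).random((32, 32, 32))
mask, mtv = metabolic_tumor_volume(patch, voxel_volume_ml=0.008)
print(f"MTV ~ {mtv:.1f} ml")
```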

  3. A 3D high-resolution ex vivo white matter atlas of the common squirrel monkey (Saimiri sciureus) based on diffusion tensor imaging

    Gao, Yurui; Parvathaneni, Prasanna; Schilling, Kurt G.; Wang, Feng; Stepniewska, Iwona; Xu, Zhoubing; Choe, Ann S.; Ding, Zhaohua; Gore, John C.; Chen, Li min; Landman, Bennett A.; Anderson, Adam W.

    2016-03-01

    Modern magnetic resonance imaging (MRI) brain atlases are high quality 3-D volumes with specific structures labeled in the volume. Atlases are essential in providing a common space for interpretation of results across studies, for anatomical education, and providing quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, macaque, and several non-primate species (e.g., rat). One notable gap in the literature is the common squirrel monkey - for which the primary published atlases date from the 1960's. The common squirrel monkey has been used extensively as surrogate for humans in biomedical studies, given its anatomical neuro-system similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels have been previously constructed. This study adds white matter tracts to the atlas. The new atlas includes 49 white matter (WM) tracts, defined using diffusion tensor imaging (DTI) in three animals and combines these data to define the anatomical locations of these tracks in a standardized coordinate system compatible with previous development. An anatomist reviewed the resulting tracts and the inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80 due to differences of local registration quality and the variation of WM tract position across individuals. However, the combined WM labels from the 3 animals represent the general locations of WM parcels, adding basic connectivity information to the atlas.
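    The reproducibility measure referred to above is the standard Dice index of a parcel's binary mask across animals in the common space; a minimal sketch with synthetic masks is given below.

```python
# Dice similarity of two binary volumes: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(5)
parcel_animal1 = rng.random((64, 64, 64)) > 0.8   # toy white-matter parcel masks
parcel_animal2 = rng.random((64, 64, 64)) > 0.8
print(f"Dice = {dice(parcel_animal1, parcel_animal2):.2f}")
```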

  4. Weighted 3D GS Algorithm for Image-Quality Improvement of Multi-Plane Holographic Display

    李芳; 毕勇; 王皓; 孙敏远; 孔新新

    2012-01-01

    Theoretically, the three-dimensional (3D) GS algorithm can realize 3D displays; however, the correlation of the output image is restricted because of the interaction among multiple planes, thus failing to meet the image-quality requirements of practical applications. We introduce weight factors and propose a weighted 3D GS algorithm, which, based on the traditional 3D GS algorithm, can realize selective control of the correlation of the multi-plane display. Improvement in image quality is accomplished by the selection of appropriate weight factors.
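    For reference, the classical single-plane Gerchberg-Saxton iteration underlying the weighted 3D variant is sketched below; the FFT propagation model, random initial phase and iteration count are assumptions, and the multi-plane weighting described in the record is not implemented.

```python
# Minimal single-plane Gerchberg-Saxton iteration for a phase-only hologram.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a hologram-plane phase whose FFT amplitude approximates the target."""
    phase = 2 * np.pi * np.random.default_rng(6).random(target_amplitude.shape)
    field = np.exp(1j * phase)
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amplitude * np.exp(1j * np.angle(far))   # impose target amplitude
        field = np.exp(1j * np.angle(np.fft.ifft2(far)))      # keep phase only
    return np.angle(field)

target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0                                    # bright square target
hologram_phase = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * hologram_phase)))
print(reconstruction.shape)
```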

  5. 3D Image Display Courses for Information Media Students.

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  6. A near field 3D radar imaging technique

    Broquetas Ibars, Antoni

    1993-01-01

    The paper presents an algorithm which recovers a 3D reflectivity image of a target from near-field scattering measurements. Spherical-wave near-field illumination is used in order to avoid the costly compact-range installation required to produce plane-wave illumination. The system is described and some simulated 3D reconstructions are included. The paper also presents a first experimental validation of this technique. Peer Reviewed

  7. Investigation of the feasibility of 3D synthetic aperture imaging

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    This paper investigates the feasibility of implementing real-time synthetic aperture 3D imaging on the experimental system developed at the Center for Fast Ultrasound Imaging using a 2D transducer array. The target array is a fully populated 32 × 32 3 MHz array with a half wavelength pitch. The...

  8. 3-D model-based tracking for UAV indoor localization.

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967

  9. Microassembly for complex and solid 3D MEMS by 3D Vision-based control.

    Tamadazte, Brahim; Le Fort-Piat, Nadine; Marchand, Eric; Dembélé, Sounkalo

    2009-01-01

    This paper describes the vision-based methods developed for the assembly of complex and solid 3D MEMS (microelectromechanical systems) structures. The microassembly process is based on sequential robotic operations such as planar positioning, gripping, orientation in space and insertion tasks. Each of these microassembly tasks is performed using a pose-based visual control. To be able to control the microassembly process, a 3D model-based tracker is used. This tracker is able to directly provide th...

  10. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of the 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with appropriate software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real time; therefore, the efficiency of the 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing the geo-referenced image, producing the DSM and mosaicing strips. The principle of the 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.

  11. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. First, we built low-resolution 3D images with sub-pixel spatial displacements between each other and generated the reference image. Then, we mapped the low-resolution images onto the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different resolution images simultaneously. We then estimated the performance of the proposed method on 5 image sets and compared it with those of 3 interpolation reconstruction methods. The experiments showed that the 3D POCS algorithm performed better than the 3 interpolation reconstruction methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449

  12. Direct 3D Painting with a Metaball-Based Paintbrush

    WAN Huagen; JIN Xiaogang; BAO Hujun

    2000-01-01

    This paper presents a direct 3D painting algorithm for polygonal models in 3D object-space with a metaball-based paintbrush in a virtual environment. The user is allowed to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface with direct 3D manipulation through a 3D flying mouse.

  13. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium; with STXM, however, the distribution and localization within the cell can be made visible, which is of great interest to biologists) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulation in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  14. Development of a Hausdorff distance based 3D quantification technique to evaluate the CT imaging system impact on depiction of lesion morphology

    Sahbaee, Pooyan; Robins, Marthony; Solomon, Justin; Samei, Ehsan

    2016-04-01

    The purpose of this study was to develop a 3D quantification technique to assess the impact of the imaging system on the depiction of lesion morphology. The Regional Hausdorff Distance (RHD) was computed from two 3D volumes: virtual mesh models of synthetic nodules, or "virtual nodules", and CT images of physical nodules, or "physical nodules". The method can be described in the following steps. First, the synthetic nodule was inserted into an anthropomorphic Kyoto thorax phantom and scanned on a Siemens scanner (Flash). The nodule was then segmented from the image. Second, in order to match the orientation of the nodules, the digital models of the "virtual" and "physical" nodules were both geometrically translated to the origin. The "physical" nodule was then rotated in 10-degree increments. Third, the Hausdorff Distance was calculated for each pair of "virtual" and "physical" nodules; the minimum HD value identified the best-matching pair. Finally, the 3D RHD map and the distribution of RHD were computed for the matched pair, and the technique was scalarized using the FWHM of the RHD distribution. The analysis was conducted for various nodule shapes (spherical, lobular, elliptical, and spiculated). The calculated FWHM values of the RHD distribution for the 8-mm spherical, lobular, elliptical, and spiculated "virtual" and "physical" nodules were 0.23, 0.42, 0.33, and 0.49, respectively.
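    The core distance computation can be sketched with SciPy's directed Hausdorff distance on the two nodules' surface points, as below; the regional map, rotation search and FWHM scalarization are omitted, and the toy point sets are synthetic.

```python
# Symmetric Hausdorff distance between two 3-D surface point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# toy surfaces: a clean and a noisy sphere of radius ~4 mm
rng = np.random.default_rng(7)
def sphere(radius, n=2000, noise=0.0):
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * (radius + noise * rng.normal(size=(n, 1)))

virtual = sphere(4.0)
physical = sphere(4.0, noise=0.2)
print(f"Hausdorff distance ~ {hausdorff(virtual, physical):.2f} mm")
```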

  17. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
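
    The discrete-tomography idea sketched above, alternating a continuous algebraic reconstruction with a segmentation to the few grey levels known to be present, can be illustrated with a DART-style outer loop. The continuous_recon callback below is a hypothetical placeholder for a few SIRT/ART iterations on the measured laminography data, and the grey levels and iteration count are illustrative; this is a sketch of the general approach, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def segment_to_levels(x, levels):
    """Snap every voxel to the nearest of the known material grey levels."""
    levels = np.asarray(levels)
    return levels[np.argmin(np.abs(x[..., None] - levels), axis=-1)]

def dart_like(continuous_recon, levels, n_outer=10):
    """DART-style loop; continuous_recon(x0, free_mask) stands in for a few
    algebraic iterations (e.g. SIRT) on the measured projection data."""
    x = continuous_recon(x0=None, free_mask=None)          # initial reconstruction
    for _ in range(n_outer):
        s = segment_to_levels(x, levels)                   # discrete estimate
        # voxels whose neighbourhood contains more than one label are "boundary"
        boundary = grey_dilation(s, size=3) != grey_erosion(s, size=3)
        x = np.where(boundary, x, s)                       # fix interior voxels
        x = continuous_recon(x0=x, free_mask=boundary)     # refine boundary only
    return segment_to_levels(x, levels)
```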

  18. 3D Tongue Motion from Tagged and Cine MR Images

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z.; Lee, Junghoon; Stone, Maureen; Prince, Jerry L.

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information...

  19. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. Based on the characteristics and three-dimensional (3-D) feature analysis of the multi-spectral and hyperspectral image data volumes, a new fusion approach using a 3-D wavelet based method is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration, and 3-D inverse wavelet transform. In particular, a novel Ratio Image Based Spectral Resampling (RIBSR) method is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image, and a new Average and Substitution (A&S) rule is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both the spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than a fusion approach using the 2-D wavelet transform. It is also revealed that the RIBSR method is capable of interpolating the missing data more effectively and correctly, and that the A&S rule can integrate coefficients of the source images in the 3-D wavelet domain to preserve both spatial and spectral features of the source images more properly.
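
    A compact sketch of this kind of 3-D wavelet fusion, assuming the two volumes have already been co-registered and resampled to a common grid, is given below using PyWavelets. The simple rule shown (average the approximation sub-band, keep the larger-magnitude detail coefficients) is an illustrative stand-in rather than the paper's exact A&S rule, and the wavelet choice is arbitrary.

```python
import numpy as np
import pywt

def fuse_3d(vol_a, vol_b, wavelet="db2"):
    ca = pywt.dwtn(vol_a, wavelet)          # dict of 3-D sub-bands ('aaa', 'aad', ...)
    cb = pywt.dwtn(vol_b, wavelet)
    fused = {}
    for key in ca:
        if key == "aaa":                    # approximation sub-band: average
            fused[key] = 0.5 * (ca[key] + cb[key])
        else:                               # detail sub-bands: keep the stronger response
            fused[key] = np.where(np.abs(ca[key]) >= np.abs(cb[key]), ca[key], cb[key])
    return pywt.idwtn(fused, wavelet)       # fused volume via the inverse 3-D DWT
```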

  20. The Design and Implementation of 3D Medical Image Reconstruction System Based on VTK and ITK

    刘鹰; 韩利凯

    2011-01-01

    3D image reconstruction is currently a popular topic in digital image processing, especially in medical imaging. VascuView3D is a 3D medical image reconstruction system based on VTK and ITK that builds 3D images from the 2D image slice files produced by CT and MRI devices. The system implements volume rendering (VR), surface rendering (SR), and multi-planar rendering (MPR) 3D views, and supports CLUT-based coloring of 3D greyscale images.
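
    For illustration, a minimal VTK volume-rendering sketch (Python bindings) of the kind of VR view such a system provides is shown below; the slice directory, transfer-function points, and intensity values are placeholders rather than details taken from the paper.

```python
import vtk

# read a series of 2D CT slices (placeholder directory)
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_series/")

# map grey levels to colour and opacity (a CLUT-like transfer function)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)        # air -> black
color.AddRGBPoint(400, 1.0, 0.9, 0.8)          # bone -> near white
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)
opacity.AddPoint(400, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())
volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```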

  1. Image-Based 3d Modeling VS Laser Scanning for the Analysis of Medieval Architecture: the Case of ST. Croce Church in Bergamo

    Cardaci, A.; Versaci, A.

    2013-07-01

    The Church of St. Croce in Bergamo (second half of the 11th century) is a small four-sided building consisting of two overlapping volumes, located in the courtyard adjacent to the Bishop's Palace. In recent years, archaeological excavations have unearthed parts of the edifice that had until then been hidden, having been buried during the construction of the Basilica of Santa Maria Maggiore, and have restored its original form. In light of these recent discoveries, a critical review of all the existing documentation was considered necessary in order to clarify the relationship between the various building components. A quick, well-timed, chromatically characterized and accurate survey aimed at the complete digital reconstruction of this interesting example of medieval Italian architecture was therefore needed. This suggested simultaneously testing two of the most innovative technologies: 3D laser scanning, which ensures high-resolution and complete models within a short time, and automatic photogrammetric image-based modelling, which allows a three-dimensional reconstruction of the architectural objects. This paper presents the results achieved by an analytical comparison between the two methodologies, analysing their differences, the advantages and deficiencies of each, and the opportunities for future enhancements and developments.

  2. Automatic extraction of abnormal signals from diffusion-weighted images using 3D-ACTIT

    Recent developments in medical imaging equipment have made it possible to acquire large amounts of image data and to perform detailed diagnosis. However, it is difficult for physicians to evaluate all of the image data obtained. To address this problem, computer-aided detection (CAD) and expert systems have been investigated. In these investigations, as the types of images used for diagnosis have expanded, the requirements for image processing have become more complex. We therefore propose a new method, which we call Automatic Construction of Tree-structural Image Transformation (3D-ACTIT), to perform various 3D image processing procedures automatically using instance-based learning. We have conducted research on diffusion-weighted image (DWI) data and its processing. In this report, we describe how 3D-ACTIT performs processing to extract only abnormal signal regions from 3D-DWI data. (author)

  3. Improvement of integral 3D image quality by compensating for lens position errors

    Okui, Makoto; Arai, Jun; Kobayashi, Masaki; Okano, Fumio

    2004-05-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.

  4. Velocity Measurement in Carotid Artery: Quantitative Comparison of Time-Resolved 3D Phase-Contrast MRI and Image-based Computational Fluid Dynamics

    Sarrami-Foroushani

    2015-10-01

    Background Understanding the hemodynamic environment in vessels is important for elucidating the mechanisms leading to vascular pathologies. Objectives The three-dimensional velocity vector field in the carotid bifurcation is visualized using time-resolved 3D phase-contrast magnetic resonance imaging (TR 3D PC MRI) and computational fluid dynamics (CFD). This study aimed to present a qualitative and quantitative comparison of the velocity vector fields obtained by each technique. Subjects and Methods MR imaging was performed on a healthy 30-year-old male subject. TR 3D PC MRI was performed on a 3 T scanner to measure velocity in the carotid bifurcation. A 3D anatomical model for CFD was created using images obtained from time-of-flight MR angiography. The velocity vector field in the carotid bifurcation was predicted using the CFD and PC MRI techniques, and a statistical analysis was performed to assess the agreement between the two methods. Results Although the main flow patterns were the same for both techniques, CFD showed a greater resolution in mapping the secondary and circulating flows. Overall root mean square (RMS) errors for all corresponding data points in PC MRI and CFD were 14.27% at peak systole and 12.91% at end diastole, relative to the maximum velocity measured at each cardiac phase. Bland-Altman plots showed a very good agreement between the two techniques. However, this study did not aim to validate either method; rather, their consistency was assessed to accentuate the similarities and differences between time-resolved PC MRI and CFD. Conclusion Both techniques provided quantitatively consistent results for in vivo velocity vector fields in the right internal carotid artery (RCA). PC MRI provided a good estimation of the main flow patterns inside the vasculature, which seems acceptable for clinical use. However, the limitations of each technique should be considered while interpreting results.

  5. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, architectural and archeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here an efficient SIFT method is used for image matching over large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, the more general approach of bundle adjustment is used. Finally, two real cases (an excavation and a tower) are reconstructed.
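
    A minimal two-view sketch of this pipeline (SIFT matching, RANSAC-based relative pose, triangulation) using OpenCV is given below; full multi-view reconstruction and bundle adjustment are omitted, and the camera matrix K is assumed to be known or estimated.

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Sparse 3D points from two overlapping views with known intrinsics K."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # ratio-test matching of SIFT descriptors
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])

    # relative pose with RANSAC, then linear triangulation
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (X[:3] / X[3]).T        # (N, 3) point cloud, up to scale
```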

  6. Robust model-based 3D/3D fusion using sparse matching for minimally invasive surgery.

    Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan

    2013-01-01

    Classical surgery is being disrupted by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm CT and C-arm fluoroscopy are routinely used for intra-operative guidance. However, intra-operative modalities have limited image quality of the soft tissue and a reliable assessment of the cardiac anatomy can only be made by injecting contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a novel sparse matching approach for fusing high quality pre-operative CT and non-contrasted, non-gated intra-operative C-arm CT by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the pre-operative CT and mapped to the intra-operative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments demonstrate that our model-based fusion approach has an average execution time of 2.9 s, while the accuracy lies within expert user confidence intervals. PMID:24505663

  7. Impact of 3D image-based PDR brachytherapy on outcome of patients treated for cervix carcinoma in France: Results of the French STIC prospective study

    Purpose: In 2005 a French multicentric non-randomized prospective study was initiated to compare two groups of patients treated for cervix carcinoma according to brachytherapy (BT) method: 2D vs 3D dosimetry. The BT dosimetric planning method was chosen for each patient in each center according to the availability of the technique. This study describes the results for 705 out of 801 patients available for analysis. Patients and methods: For the 2D arm, dosimetry was planned on orthogonal X-Rays using low dose rate (LDR) or pulsed dose rate (PDR) BT. For the 3D arm, dosimetry was planned on 3D imaging (mainly CT) and performed with PDR BT. Each center could follow the dosimetric method they were used to, according to the chosen radioelement and applicator. Manual or graphical optimization was allowed. Three treatment regimens were defined: Group 1: BT followed by surgery; 165 patients (2D arm: 76; 3D arm: 89); Group 2: EBRT (+chemotherapy), BT, then surgery; 305 patients (2D arm: 142; 3D arm: 163); Group 3: EBRT (+chemotherapy), then BT; 235 patients (2D arm: 118; 3D arm: 117). The DVH parameters for CTVs (High Risk CTV and Intermediate Risk CTV) and organs at risk (OARs) were computed as recommended by GYN GEC ESTRO guidelines. Total doses were converted to equivalent doses in 2 Gy fractions (EQD2). Side effects were prospectively assessed using the CTCAEv3.0. Results: The 2D and 3D arms were well balanced with regard to age, FIGO stage, histology, EBRT dose and chemotherapy. For each treatment regimen, BT doses and volumes were comparable between the 2D and 3D arms in terms of dose to point A, isodose 60 Gy volume, dose to ICRU rectal points, and TRAK. Dosimetric data in the 3D arm showed that the dose delivered to 90% of the High Risk CTV (HR CTV D90) was, respectively, 81.2 Gy(α/β=10), 63.2 Gy(α/β=10) and 73.1 Gy(α/β=10) for groups 1, 2 and 3. The Intermediate Risk (IR) CTV D90 was, respectively, 58.5 Gy(α/β=10), 57.3 Gy(α/β=10) and 61.7 Gy(α/β=10) for groups 1, 2 and

  8. Spectral ladar: towards active 3D multispectral imaging

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  9. STAR3D: a stack-based RNA 3D structural alignment tool.

    Ge, Ping; Zhang, Shaojie

    2015-11-16

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  10. Virtual reality 3D headset based on DMD light modulators

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro-mirrors offering 720p-resolution displays in a small form factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. In our design, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina.

  11. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  12. 3D interfractional patient position verification using 2D-3D registration of orthogonal images

    Reproducible positioning of the patient during fractionated external beam radiation therapy is imperative to ensure that the delivered dose distribution matches the planned one. In this paper, we expand on a 2D-3D image registration method to verify a patient's setup in three dimensions (rotations and translations) using orthogonal portal images and megavoltage digitally reconstructed radiographs (MDRRs) derived from CT data. The accuracy of 2D-3D registration was improved by employing additional image preprocessing steps and a parabolic fit to interpolate the parameter space of the cost function utilized for registration. Using a humanoid phantom, precision for registration of three-dimensional translations was found to be better than 0.5 mm (1 s.d.) for any axis when no rotations were present. Three-dimensional rotations about any axis were registered with a precision of better than 0.2 deg. (1 s.d.) when no translations were present. Combined rotations and translations of up to 4 deg. and 15 mm were registered with 0.4 deg. and 0.7 mm accuracy for each axis. The influence of setup translations on registration of rotations and vice versa was also investigated and mostly agrees with a simple geometric model. Additionally, the dependence of registration accuracy on three cost functions, angular spacing between MDRRs, pixel size, and field-of-view, was examined. Best results were achieved by mutual information using 0.5 deg. angular spacing and a 10x10 cm2 field-of-view with 140x140 pixels. Approximating patient motion as rigid transformation, the registration method is applied to two treatment plans and the patients' setup errors are determined. Their magnitude was found to be ≤6.1 mm and ≤2.7 deg. for any axis in all of the six fractions measured for each treatment plan
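
    The mutual-information cost that performed best in this study can be sketched as follows for a portal image and an MDRR: build a joint intensity histogram and combine the marginal and joint entropies. The bin count and variable names are illustrative.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """MI between two intensity images; maximised when they are best aligned."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    nz = p_ab > 0
    h_ab = -np.sum(p_ab[nz] * np.log(p_ab[nz]))       # joint entropy
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return h_a + h_b - h_ab
```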

  13. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift for real-time 3D markerless registration of lung motion during radiotherapy. Method: The 4DCT image datasets of Dir-lab (www.dir-lab.com) were used for creating a 3D boundary element model of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the mesh of the lungs should also reflect the features and degrees of freedom of the lung structure; this means that vertices close to each other tend to move coherently. In the next step, we implemented a probabilistic non-rigid registration method called coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to the images of the 10 patients in the Dir-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vector and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for a distributed set of vertices inside the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating the displacement vector and analyzing possible physiological and anatomical changes during treatment.
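
    The correspondence (E-) step at the heart of coherent point drift can be sketched as below: soft assignment probabilities between the deformed source vertices and the target vertices under an isotropic Gaussian mixture with an outlier term. The variable names, outlier weight and the way sigma is supplied are illustrative; the full CPD algorithm also updates the transformation and sigma at every iteration.

```python
import numpy as np

def cpd_posterior(x, ty, sigma2, w=0.1):
    """P[n, m]: probability that target vertex x_n corresponds to deformed
    source vertex ty_m, given Gaussian width sigma2 and outlier weight w."""
    n, d = x.shape
    m, _ = ty.shape
    dist2 = ((x[:, None, :] - ty[None, :, :]) ** 2).sum(axis=-1)      # (N, M)
    gauss = np.exp(-dist2 / (2.0 * sigma2))
    outlier = (2.0 * np.pi * sigma2) ** (d / 2.0) * w / (1.0 - w) * m / n
    return gauss / (gauss.sum(axis=1, keepdims=True) + outlier)
```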

  14. DICOM for quantitative imaging research in 3D Slicer

    Fedorov, Andrey; Kikinis, Ron

    2014-01-01

    These are the slides presented by Andrey Fedorov at the 3D Slicer workshop and meeting of the Quantitative Image Informatics for Cancer Research (QIICR) project that took place November 18-19, 2014, at the University of Iowa.

  15. 3D volume and SUV analysis of oncological PET studies. A voxel-based image processing tool with NSCLC as example

    Krohn, T.; Kaiser, H.J.; Boy, C.; Schaefer, W.M.; Buell, U.; Zimny, M. [Universitaetsklinikum Aachen (Germany). Klinik fuer Nuklearmedizin; Gagel, B. [Universitaetsklinikum Aachen (Germany). Klinik fuer Strahlentherapie

    2007-07-01

    Aim: The standardized uptake value (SUV) of ¹⁸FDG-PET is an important parameter for therapy monitoring and prognosis of malignant lesions. SUV determination requires delineating the respective volume of interest against the surrounding tissue. The present study proposes an automatic image segmentation algorithm for lesion volume and FDG uptake quantitation. Methods: A region-growing-based algorithm was developed, which goes through the following steps: 1. Definition of a starting point by the user. 2. Automatic determination of the maximum uptake within the lesion. 3. Calculation of a threshold value as a percentage of the maximum. 4. Automatic 3D lesion segmentation. 5. Quantitation of lesion volume and SUV. The procedure was developed using CTI CAPP and ECAT 7.2 software. Validation was done by phantom studies (Jaszczak phantom, various "lesion" sizes and contrasts) and on studies of NSCLC patients, who underwent clinical CT and FDG-PET scanning. Results: Phantom studies demonstrated a mean error of 3.5% for volume quantification using a threshold of 41% for contrast ratios ≥5:1 and sphere volumes >5 ml. Comparison between CT- and PET-based volumetry showed a high correlation of both methods (r=0.98) for lesions with homogeneous FDG uptake. Radioactivity concentrations were underestimated by 41% on average. Employing an empirical threshold of 50% for SUV determination, the underestimation decreased to 34% on average. Conclusions: The algorithm facilitates an easy and reproducible SUV quantification and volume assessment of PET lesions in clinical practice. It was validated using NSCLC patient data and should also be applicable to other tumour entities. (orig.)
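
    The thresholded region-growing segmentation described in steps 1-5 can be sketched as follows with NumPy/SciPy; the seed-neighbourhood size, the 41% threshold (taken from the phantom result above), and the voxel volume are illustrative parameters.

```python
import numpy as np
from scipy import ndimage

def segment_lesion(volume, seed, threshold_frac=0.41, voxel_volume_ml=0.001):
    """Threshold at a fraction of the local maximum and keep the connected
    region that contains the user seed (assumed to lie inside the lesion)."""
    z, y, x = seed
    local = volume[max(z - 5, 0):z + 6, max(y - 5, 0):y + 6, max(x - 5, 0):x + 6]
    vmax = local.max()                              # step 2: maximum uptake

    mask = volume >= threshold_frac * vmax          # step 3: threshold
    labels, _ = ndimage.label(mask)                 # step 4: 3D segmentation
    lesion = labels == labels[seed]

    volume_ml = lesion.sum() * voxel_volume_ml      # step 5: volume and uptake
    mean_uptake = volume[lesion].mean()
    return lesion, volume_ml, mean_uptake
```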

  16. Extracting 3D Layout From a Single Image Using Global Image Structures

    Z. Lou; T. Gevers; N. Hu

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  17. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  18. Efficient reconfigurable architectures for 3D medical image compression

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  19. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    P. Faltin

    2010-01-01

    The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.

  20. A framework for human spine imaging using a freehand 3D ultrasound system

    Purnama, Ketut E.; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; Ooijen, van Peter M.A.; Lubbers, Jaap; Burgerhof, Johannes G.M.; Sardjono, Tri A.; Verkerke, Gijsbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  1. High performance volume-of-intersection projectors for 3D-PET image reconstruction based on polar symmetries and SIMD vectorisation

    Scheins, J. J.; Vahedipour, K.; Pietrzyk, U.; Shah, N. J.

    2015-12-01

    For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient usage of cache respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation

  2. High performance volume-of-intersection projectors for 3D-PET image reconstruction based on polar symmetries and SIMD vectorisation

    For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient usage of cache respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation

  3. 3-D Imaging Systems for Agricultural Applications—A Review

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  4. 3-D Imaging Systems for Agricultural Applications-A Review.

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  5. 3-D Imaging Systems for Agricultural Applications—A Review

    Manuel Vázquez-Arellano

    2016-04-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  6. 1024 pixels single photon imaging array for 3D ranging

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays, 3D acquisition systems are used in a wide range of applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems offer different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances between 10 cm and 7.5 m.
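
    The distance recovery behind this kind of indirect time-of-flight measurement can be sketched as follows: with four correlation samples taken at 0, 90, 180 and 270 degrees of the modulation period, the phase delay and hence the distance follow directly. The 20 MHz modulation frequency assumed here is illustrative, although it matches the 7.5 m unambiguous range quoted above.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def itof_distance(a0, a90, a180, a270, f_mod=20e6):
    """Phase delay from four-bucket samples, then distance; the result is
    unambiguous up to c / (2 * f_mod), i.e. 7.5 m at 20 MHz."""
    phase = np.arctan2(a270 - a90, a0 - a180) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)
```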

  7. Helical CT scanner - 3D imaging and CT fluoroscopy

    It has been over twenty years since the introduction of X-ray CT. In recent years, the topic of helical scanning has dominated the area of technical development. With helical scanning now being used routinely, the traditional concept of X-ray CT as a device for obtaining axial images of the body in slices has given way to that of one for obtaining images in volumes. For instance, the ability of helical scanning to acquire sequential images in the direction of the body axis makes it ideal for creating three-dimensional (3-D) images, and has in fact led to the use of 3-D images in clinical practice. In addition, with helical scanning, imaging of organs such as the liver or lung can be performed in several tens of seconds, as opposed to the few minutes that it used to take. This has resulted not only in reduced time for the patient to spend under constraint for imaging but also in changes to diagnostic methods. The question 'Would it be possible to perform reconstruction while scanning and to see the resulting images in real time?' is another issue which has been taken up, and it has been answered by CT fluoroscopy. This makes it possible to see CT images in real time during sequential scanning, and from this development, applications such as CT-guided biopsy and CT-navigated surgery have been investigated and realized. Further possibilities exist to create a whole new series of diagnostic methods and results. (author)

  8. Random-Profiles-Based 3D Face Recognition System

    Joongrock Kim; Sunjin Yu; Sangyoun Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the perf...

  9. Multithreaded real-time 3D image processing software architecture and implementation

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
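
    The convergence-point selection described above can be sketched with OpenCV block matching: compute a disparity map, take the extrema of its histogram as the scene disparity range, and derive the horizontal shift that places the chosen depth at the screen plane. The parameter values and the choice of the mid-range as the convergence target are illustrative.

```python
import cv2
import numpy as np

def convergence_shift(left_gray, right_gray, num_disp=64, block=15):
    """Horizontal shift (pixels) to apply between views; inputs must be
    8-bit single-channel rectified images."""
    stereo = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp[disp > 0]
    hist, edges = np.histogram(valid, bins=num_disp, range=(0, num_disp))
    occupied = np.nonzero(hist)[0]
    d_min = edges[occupied[0]]                  # nearest/farthest disparities
    d_max = edges[occupied[-1] + 1]
    target = 0.5 * (d_min + d_max)              # depth to place at the screen plane
    return int(round(target))
```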

  10. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye; Wenping Yu

    2012-01-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image which will be imbedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then are merged in the frequency d...

  11. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    Harder, Stine

    The goal of this thesis is to improve intelligibility for hearing-aid users by individualizing the directional microphone in a hearing aid. The general idea is a three step pipeline for easy acquisition of individually optimized directional filters. The first step is to estimate an individual 3D...... aid. We verify the directional filters optimized from simulated HRTFs based on a listener-specific head model against two set of optimal filters. The first set of optimal filters is calculated from HRTFs measured on a 3D printed version of the head model. The second set of optimal filters is...... 3:6 dB between an average filter and an optimal filter. This suggests that hearing-aid users with ITE hearing aids could benefit more from having individualized directional filters than what was shown for a BTE hearing aid. This thesis is a step towards individualizing the directional microphone in...

  12. 3D Image Reconstruction from Compton camera data

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on ℝ³ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  13. 3D CT Imaging Method for Measuring Temporal Bone Aeration

    Objective: 3D volume reconstruction of CT images can be used to measure temporal bone aeration. This study evaluates the technique with respect to reproducibility and acquisition parameters. Material and methods: Helical CT images acquired from patients with radiographically normal temporal bones using standard clinical protocols were retrospectively analyzed. 3D image reconstruction was performed to measure the volume of air within the temporal bone. The appropriate threshold values for air were determined from reconstruction of a phantom with a known air volume imaged using the same clinical protocols. The appropriate air threshold values were applied to the clinical material. Results: Air volume was measured according to acquisition algorithm. The average volume in the temporal bone CT group was 5.56 ml, compared to 5.19 ml in the head CT group (p = 0.59). The correlation coefficient between examiners was > 0.92. There was a wide range of aeration volumes among individual ears (0.76-18.84 ml); however, paired temporal bones differed by an average of just 1.11 ml. Conclusions: The method of volume measurement from 3D reconstruction reported here is widely available, easy to perform and produces consistent results among examiners. Application of the technique to archival CT data is possible using corrections for air segmentation thresholds according to acquisition parameters.
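
    The core volume measurement can be sketched as follows: count air voxels inside a temporal-bone region of interest using a Hounsfield-unit threshold and convert the count to millilitres. The -400 HU threshold and voxel spacing below are illustrative; the study instead calibrates the air threshold against a phantom of known air volume for each acquisition protocol.

```python
import numpy as np

def air_volume_ml(ct_hu, roi_mask, spacing_mm=(0.6, 0.4, 0.4), threshold_hu=-400):
    """Aerated volume (ml) inside a boolean region-of-interest mask."""
    air = (ct_hu < threshold_hu) & roi_mask      # air voxels inside the ROI
    voxel_ml = np.prod(spacing_mm) / 1000.0      # mm^3 per voxel -> ml
    return air.sum() * voxel_ml
```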

  14. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Yongjun Zhang

    2015-07-01

    The paper presents an automatic region-detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Because these image data are inexpensive, widely used, and offer extensive shooting coverage, utilizing them can reduce the cost of reconstructing and updating street scenes; we therefore propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points with our new method, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our comparative experiments with typical structure from motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features within the Mask also increased the accuracy of the point clouds by nearly 30%-40% and corrected the tendency of the typical methods to reconstruct a single target building repeatedly as several buildings.

  15. A New Approach for 3D Range Image Segmentation using Gradient Method

    Dina A. Hafiz

    2011-01-01

    Problem statement: Segmentation of 3D range images is widely used in computer vision as an essential pre-processing step before methods of high-level vision can be applied. Segmentation aims to study and recognize features of the range image such as 3D edges, connected surfaces and smooth regions. Approach: This study presents new improvements in the segmentation of terrestrial 3D range images based on an edge-detection technique. The main idea is to apply a gradient edge detector in three different directions of the 3D range images. This 3D gradient detector is a generalization of the classical Sobel operator used with 2D images, and is based on the differences of normal vectors or geometric locations in the coordinate directions. The proposed algorithm uses a 3D grid structure to handle large amounts of unordered points and to determine neighborhood points. It segments the 3D range images directly using gradient edge detectors, without any further computations such as mesh generation. Our algorithm focuses on extracting important linear structures such as doors, stairs and windows from terrestrial 3D range images; these structures are common indoors and outdoors in many environments. Results: Experimental results showed that the proposed algorithm provides a new approach to 3D range image segmentation with low computational complexity and less sensitivity to noise. The algorithm was validated using seven artificially generated datasets and two real-world datasets. Conclusion/Recommendations: Experimental results showed that higher segmentation accuracy is achieved by using a higher grid resolution and an adaptive threshold.
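
    A 3-D generalization of the Sobel operator of the kind described here can be sketched with SciPy on a gridded range volume: differentiate along each coordinate direction and threshold the gradient magnitude. The adaptive threshold shown is a simple illustrative choice, not the authors' rule.

```python
import numpy as np
from scipy import ndimage

def edge_voxels(range_grid, threshold=None):
    """Boolean edge map of a 3-D gridded range volume via Sobel derivatives."""
    gx = ndimage.sobel(range_grid, axis=0)
    gy = ndimage.sobel(range_grid, axis=1)
    gz = ndimage.sobel(range_grid, axis=2)
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    if threshold is None:                       # simple adaptive threshold
        threshold = magnitude.mean() + 2 * magnitude.std()
    return magnitude > threshold
```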

  16. 3D Imaging of a Cavity Vacuum under Dissipation

    Lee, Moonjoo; Seo, Wontaek; Hong, Hyun-Gue; Song, Younghoon; Dasari, Ramachandra R; An, Kyungwon

    2013-01-01

    P. A. M. Dirac first introduced zero-point electromagnetic fields in order to explain the origin of atomic spontaneous emission. Since then, it has long been debated how the zero-point vacuum field is affected by dissipation. Here we report 3D imaging of vacuum fluctuations in a high-Q cavity and rms amplitude measurements of the vacuum field. The 3D imaging was done by the position-dependent emission of single atoms, resulting in dissipation-free rms amplitude of 0.97 ± 0.03 V/cm. The actual rms amplitude of the vacuum field at the antinode was independently determined from the onset of single-atom lasing at 0.86 ± 0.08 V/cm. Within our experimental accuracy and precision, the difference was noticeable, but it is not significant enough to disprove zero-point energy conservation.

  17. Automated Recognition of 3D Features in GPIR Images

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  18. Improvements in quality and quantification of 3D PET images

    Rapisarda,

    2012-01-01

    The spatial resolution of Positron Emission Tomography is conditioned by several physical factors, which can be taken into account by using a global Point Spread Function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting some experimental...

  19. Super pipe lining system for 3-D CT imaging

    A new idea for a 3-D CT image reconstruction system is introduced. Because networking has improved significantly in recent years, network computing can replace traditional serial processing. The CT system's work is carried out in a multi-level fashion, so that the tedious processing tasks are distributed across many computers linked by a local network and performed at the same time, greatly improving the reconstruction speed.

  20. 3D VSP imaging in the Deepwater GOM

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high-dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are reduced risk in well placement, improved reserve calculations, and a better understanding of compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  1. Random-Profiles-Based 3D Face Recognition System

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k points in the 3D point cloud and that a reliable recognition rate is achieved against pose variation.

  2. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods. PMID:24817880
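
    A minimal stand-in for the fusion idea, using an ordinary separable 3D wavelet from PyWavelets instead of the band-limited shearlet transform of the paper: both volumes are decomposed, the approximation bands are averaged, detail coefficients are merged with a maximum-absolute-value rule, and the fused volume is reconstructed.

        import numpy as np
        import pywt

        def fuse_volumes(vol_a, vol_b, wavelet='db2', level=2):
            """Fuse two co-registered 3D volumes in a separable 3D wavelet domain."""
            ca = pywt.wavedecn(vol_a, wavelet, level=level)
            cb = pywt.wavedecn(vol_b, wavelet, level=level)

            fused = [0.5 * (ca[0] + cb[0])]                     # average the approximation band
            for da, db in zip(ca[1:], cb[1:]):                  # keep the stronger detail coefficient
                fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                              for k in da})
            return pywt.waverecn(fused, wavelet)

    Called on two co-registered contrasts of the same brain, e.g. fuse_volumes(t2star_vol, qsm_vol), this returns a single volume; the shearlet-domain method in the paper follows the same decompose-merge-reconstruct pattern with a different transform.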

  3. Discrete Method of Images for 3D Radio Propagation Modeling

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  4. 3D reconstruction of multiple stained histology images

    Yi Song

    2013-01-01

    Full Text Available Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK 7). Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  5. 3D tongue motion from tagged and cine MR images.

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that use of the combined information improves motion estimation near the tongue surface, an issue that has previously been reported in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  6. Robust Adaptive Segmentation of 3D Medical Images with Level Sets

    Baillard, Caroline; Barillot, Christian; Bouthemy, Patrick

    2000-01-01

    This paper is concerned with the use of the Level Set formalism to segment anatomical structures in 3D medical images (ultrasound or magnetic resonance images). A closed 3D surface propagates towards the desired boundaries through the iterative evolution of a 4D implicit function. The major contribution of this work is the design of a robust evolution model based on adaptive parameters depending on the data. First the step size and the external propagation force factor, both usually predeterm...

  7. Quantitative Analysis of Porosity and Transport Properties by FIB-SEM 3D Imaging of a Solder Based Sintered Silver for a New Microelectronic Component

    Rmili, W.; Vivet, N.; Chupin, S.; Le Bihan, T.; Le Quilliec, G.; Richard, C.

    2016-04-01

    As part of the development of a new assembly technology to achieve bonding for an innovative silicon carbide (SiC) power device used in harsh environments, the aim of this study is to compare two silver sintering profiles and then to define the best candidate for the die attach material of this new component. To achieve this goal, the solder joints have been characterized in terms of porosity by determining the morphological characteristics of the material heterogeneities and estimating their thermal and electrical transport properties. The three-dimensional (3D) microstructure of sintered silver samples has been reconstructed using a focused ion beam scanning electron microscope (FIB-SEM) tomography technique. The sample preparation and the experimental milling and imaging parameters have been optimized in order to obtain a high quality of 3D reconstruction. Volume fractions and volumetric connectivity of the individual phases (silver and voids) have been determined. The effective thermal and electrical conductivities of the samples and the tortuosity of the silver phase have also been evaluated by solving the diffusive transport equation.

  8. High-Resolution Imaged-Based 3D Reconstruction Combined with X-Ray CT Data Enables Comprehensive Non-Destructive Documentation and Targeted Research of Astromaterials

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2014-01-01

    Providing web-based data of complex and sensitive astromaterials (including meteorites and lunar samples) in novel formats enhances existing preliminary examination data on these samples and supports targeted sample requests and analyses. We have developed and tested a rigorous protocol for collecting highly detailed imagery of meteorites and complex lunar samples in non-contaminating environments. These data are reduced to create interactive 3D models of the samples. We intend to provide these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/.

  9. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  10. Fast vision-based catheter 3D reconstruction

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length and bending and orientation angles for known circular and elliptical catheter shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm, 3.64 deg for the added noises) of the proposed high speed algorithms.
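
    The closed-form curve reconstruction itself is involved, but the underlying two-view geometry can be illustrated with plain linear triangulation: given the projection matrices of two arbitrarily positioned, calibrated cameras and a matched point on the catheter centreline in each image, its 3D position follows from a small homogeneous least-squares problem. This is a generic sketch, not the paper's algorithm.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point from two views.

            P1, P2 : 3x4 camera projection matrices
            x1, x2 : (u, v) pixel coordinates of the same physical point in each image
            Returns the 3D point in the common world frame.
            """
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]          # de-homogenise

    Sweeping this along matched centreline samples yields a polyline approximation of the catheter shape from which tip position and bending angles could be measured.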

  11. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  12. Towards magnetic 3D x-ray imaging

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve the speed, size and energy efficiency of spin-driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of great interest for understanding e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals and nanowires, or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy, combining X-MCD as an element-specific magnetic contrast mechanism with the high spatial and temporal resolution provided by the Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley CA) a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films and compare to other 3D imaging approaches, e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).

  13. 3D reconstruction based on spatial vanishing information

    Yuan Shu; Zheng Tan

    2005-01-01

    An approach for the three-dimensional (3D) reconstruction of architectural scenes from two uncalibrated images is described in this paper. From two views of one architectural structure, three pairs of corresponding vanishing points of three mutually orthogonal major directions can be extracted. The simple but powerful constraints provided by parallel and orthogonal lines in architectural scenes can be used to calibrate the cameras and to recover the 3D information of the structure. This approach is applied to real images of architectural scenes, and a 3D model of a building in virtual reality modelling language (VRML) format is presented which illustrates the method with successful performance.
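
    A concrete fragment of this calibration step: with square pixels, zero skew, and the principal point assumed at the image centre, two vanishing points of orthogonal scene directions fix the focal length, because the corresponding viewing rays must be perpendicular, giving (v1 - c)·(v2 - c) + f² = 0. The sketch below is a generic illustration of that constraint rather than the authors' full pipeline; the example coordinates are made up.

        import numpy as np

        def focal_from_vanishing_points(v1, v2, principal_point):
            """Focal length (pixels) from two vanishing points of orthogonal directions.

            Assumes square pixels, zero skew and a known principal point, so the
            orthogonality constraint reduces to (v1 - c).(v2 - c) + f**2 = 0.
            """
            c = np.asarray(principal_point, dtype=float)
            d = np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
            if d >= 0:
                raise ValueError("vanishing points inconsistent with orthogonal directions")
            return np.sqrt(-d)

        # example: image centre (640, 360); vanishing points measured from building edges
        f = focal_from_vanishing_points((1500.0, 300.0), (-80.0, 395.0), (640.0, 360.0))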

  14. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  15. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated

  16. Large Scale 3D Image Reconstruction in Optical Interferometry

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

  17. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
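
    The FFT trick described above can be sketched as follows: encode each image pixel's gradient direction as a unit complex number weighted by edge strength, encode the projected model's edge normals the same way on a template grid, and take the real part of their cross-correlation, evaluated for all translations at once with FFTs. This is a simplified, single-orientation stand-in for the cueing measure, not the reported implementation.

        import numpy as np

        def phase_match_surface(image, template_normals, template_mask):
            """Phase-similarity surface of a model edge template over all image positions.

            image            : 2D grayscale array
            template_normals : 2D array of edge-normal angles (radians), no larger than image
            template_mask    : boolean array (same shape), True on projected model edge pixels
            Returns an array the size of `image`; peaks indicate likely object positions.
            """
            gy, gx = np.gradient(image.astype(float))
            mag = np.hypot(gx, gy)
            img_field = mag * np.exp(1j * np.arctan2(gy, gx))      # edge strength + direction

            tmpl_field = np.zeros(image.shape, dtype=complex)
            h, w = template_normals.shape
            tmpl_field[:h, :w] = template_mask * np.exp(1j * template_normals)

            # cross-correlation of the two complex fields; Re(.) sums cos(phase difference)
            corr = np.fft.ifft2(np.fft.fft2(img_field) * np.conj(np.fft.fft2(tmpl_field)))
            return np.real(corr)

    Repeating this for a set of sampled model orientations and keeping the maximum per pixel gives the match surface from which unambiguous peaks would be sorted for inspection.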

  18. A high-level 3D visualization API for Java and ImageJ

    Longair Mark

    2010-05-01

    Full Text Available Abstract Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  19. From 2D slices to 3D volumes: Image based reconstruction and morphological characterization of hippocampal cells on charged and uncharged surfaces using FIB/SEM serial sectioning

    3D imaging at a subcellular resolution is a powerful tool in the life sciences to investigate cells and their interactions with native tissues or artificial objects. While a tomographic experimental setup achieving a sufficient structural resolution can be established with either X-rays or electrons, the use of electrons is usually limited to very thin samples in transmission electron microscopy due to the poor penetration depth of electrons. The combination of a serial sectioning approach and scanning electron microscopy in state-of-the-art dual beam experimental setups therefore offers a means to image highly resolved spatial details using a focused ion beam for slicing and an electron beam for imaging. The advantage of this technique over X-ray μCT or X-ray microscopy stems from the fact that absorption is not a limiting factor in imaging, and therefore even strongly absorbing structures can be spatially reconstructed with a much higher possible resolution. This approach was used in this study to elucidate the effect of an electric potential on the morphology of cells from a hippocampal cell line (HT22) deposited on gold microelectrodes. While cells cultivated on two different controls (gold and polymer substrates) did show the expected stretched morphology, cells on both the anode and the cathode differed significantly. Cells deposited on the anode part of the electrode exhibited the most extreme deviation, being almost spherical and showing signs of chromatin condensation, possibly indicating cell death. Furthermore, EDX was used as a supplemental methodology for combined chemical and structural analyses. Research highlights: FIB/SEM is utilized as a tool to investigate morphological changes in cells; tomography of individual cells was achieved by a sequential slice-and-image approach; different detectors were reviewed for their applicability to biological material; the influence of an electrical potential on neuronal cells was investigated.

  20. 3D-imaging using micro-PIXE

    Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

    2007-02-01

    We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and an X-ray CCD camera of 1 mega pixels (Hamamatsu Photonics C8800X), and has a spatial resolution of 4 μm when using characteristic Ti-K X-rays (4.558 keV) produced by 3 MeV protons with a beam spot size of ~1 μm. We applied this system, namely a micron-CT, to observe the inside of the head of a small living ant of ~1 mm diameter. An ant was inserted into a small polyimide tube, the inside diameter and wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three-dimensional images of the ant's head were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence of photo-ionization cross-sections on atomic number, the mandibular gland of the ant contains heavier elements; moreover, the CT image of a living ant anaesthetized by chloroform is quite different from that of a dead ant dipped in formalin.

  1. Lymph node imaging by ultrarapid 3D angiography

    Purpose: A report on observations of lymph node images obtained by gadolinium-enhanced 3D MR angiography (MRA). Methods: Ultrarapid MRA (TR 5 or 6.4 ms, TE 1.9 or 2.8 ms, FA 30-40 degrees) with 0.2 mmol/kg BW Gd-DTPA and 20 ml physiological saline. Start after completion of injection. Single series of the pelvis-thigh as well as head-neck regions by use of a phased array coil with a 1.5 T Magnetom Vision or a 1.0 T Magnetom Harmony (Siemens, Erlangen). We report on lymph node imaging in 4 patients, 2 of whom exhibited benign changes and 2 of whom had metastases. In 1 patient with extensive lymph node metastases of a malignant melanoma, color-Doppler sonography as color-flow angiography (CFA) was used as a comparative method. Results: Lymph node imaging by contrast medium-enhanced ultrarapid 3D MRA apparently resulted from their vessels. Thus, arterially-supplied metastases and inflammatory enlarged lymph nodes were well visualized while those with a.v. shunts or poor vascular supply in tumor necroses were poorly imaged. Conclusions: Further investigations are required with regard to the visualization of lymph nodes in other parts of the body as well as a possible differentiation between benign and malignant lesions. (orig.)

  2. Inclined nanoimprinting lithography-based 3D nanofabrication

    We report a 'top–down' 3D nanofabrication approach combining non-conventional inclined nanoimprint lithography (INIL) with reactive ion etching (RIE), contact molding and 3D metal nanotransfer printing (nTP). This integration of processes enables the production and conformal transfer of 3D polymer nanostructures of varying heights to a variety of other materials including a silicon-based substrate, a silicone stamp and a metal gold (Au) thin film. The process demonstrates the potential of reduced fabrication cost and complexity compared to existing methods. Various 3D nanostructures in technologically useful materials have been fabricated, including symmetric and asymmetric nanolines, nanocircles and nanosquares. Such 3D nanostructures have potential applications such as angle-resolved photonic crystals, plasmonic crystals and biomimicking anisotropic surfaces. This integrated INIL-based strategy shows great promise for 3D nanofabrication in the fields of photonics, plasmonics and surface tribology

  3. Tablet-Based Interaction for Immersive 3D Data Exploration

    Lopez, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2014-01-01

    Our overall vision is to enable researchers to explore 3D datasets with as much immersion as possible, arising both from the visuals and from the interaction. We therefore explore ways to combine an immersive large view of the 3D data with means to intuitively control this view with touch input on a separate mobile monoscopic tablet. This combination has the potential to increase people's acceptance of stereoscopic environments for 3D data visualization since--through touch-based interaction-...

  4. Statistical skull models from 3D X-ray images

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present 2 statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertexes, in a multi-resolution approach. A principal component analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
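
    The linear statistical model amounts to a principal component analysis of the stacked, registered vertex coordinates. A minimal sketch of that step (generic PCA, not the authors' multi-resolution registration code) is given below.

        import numpy as np

        def build_shape_model(meshes, n_modes=10):
            """PCA shape model from co-registered meshes.

            meshes : array of shape (n_subjects, n_vertices, 3), all meshes in point
                     correspondence after elastic registration.
            Returns the mean shape, the first `n_modes` modes and their std. deviations.
            """
            X = meshes.reshape(len(meshes), -1)            # one row = one flattened shape
            mean = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            modes = Vt[:n_modes]                           # principal shape variations
            stds = s[:n_modes] / np.sqrt(len(meshes) - 1)  # per-mode standard deviation
            return mean, modes, stds

        def synthesise(mean, modes, stds, b):
            """New skull/mandible shape from mode coefficients b (in units of std. dev.)."""
            return (mean + (np.asarray(b) * stds) @ modes).reshape(-1, 3)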

  5. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  6. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave imaging algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges arise when using data from multiple frequencies for imaging of biological targets. In this paper, the performance of a multi-frequency algorithm, in which measurement data from several different frequencies are used at once, is compared with a stepped-frequency algorithm, in which images reconstructed at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.

  7. Development of 3D microwave imaging reflectometry in LHD (invited).

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  8. The application of camera calibration in range-gated 3D imaging technology

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has continued to mature, this technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to recover the 3-D spatial information, the imaging field of view of the system, that is, the focal length of the system, must be known. Then, based on the distance information of each space slice, the spatial position corresponding to each pixel can be recovered. Camera calibration is an indispensable step in 3-D reconstruction, comprising the estimation of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of a zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
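
    For reference, a standard planar-target calibration in OpenCV is sketched below. The paper itself studies calibration of a zoom-lens system and uses a line-based method, so this generic checkerboard routine is only an illustration of recovering the internal parameters (including the focal length) that the 3-D inversion needs; the image folder name is hypothetical.

        import glob
        import numpy as np
        import cv2

        PATTERN = (9, 6)                     # inner corners of the checkerboard
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)  # board frame, square = 1 unit

        obj_points, img_points = [], []
        for fname in glob.glob('calib_images/*.png'):        # hypothetical folder of calibration shots
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # the camera matrix K contains fx, fy (focal length in pixels) and the principal point
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("reprojection RMS:", rms, "\nintrinsics:\n", K)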

  9. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-01-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localiza...

  10. 3D imaging of neutron tracks using confocal microscopy

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  11. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% ± 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  12. Development of 2D, pseudo 3D and 3D x-ray imaging for early diagnosis of breast cancer and rheumatoid arthritis

    By using plane-wave x-rays with synchrotron radiation, refraction-based x-ray medical imaging can be used to visualize soft tissue, as reported in this paper. This method comprises two-dimensional (2D) x-ray dark-field imaging (XDFI), tomosynthesis for pseudo-3D (sliced) x-ray imaging by the adoption of XDFI, and 3D x-ray imaging utilizing a newly devised algorithm. We aim to contribute to the early diagnosis of breast cancer, which is a major cancer among women, and of rheumatoid arthritis, which cannot be detected in its early stages. (author)

  13. 3D multiple-point statistics simulation using 2D training images

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.

  14. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye

    2012-07-01

    Full Text Available An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is embedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image processing attacks, which demonstrates strong robustness against a variety of attacks.
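
    A stripped-down sketch of the two chaotic operations, using a logistic map as a stand-in for the 3D sawtooth map of the paper (which is not reproduced here): one chaotic sequence permutes the pixel positions and a second changes the gray values by XOR. The DWT-domain embedding into the host image is omitted, and the key values are arbitrary.

        import numpy as np

        def logistic_sequence(n, x0=0.3779, r=3.99):
            """Chaotic sequence in (0, 1); stands in for the paper's 3D sawtooth map."""
            seq = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                seq[i] = x
            return seq

        def encrypt(img, key=0.3779):
            flat = img.reshape(-1).astype(np.uint8)
            perm = np.argsort(logistic_sequence(flat.size, x0=key))        # position permutation
            keystream = (logistic_sequence(flat.size, x0=key * 0.5) * 256).astype(np.uint8)
            cipher = flat[perm] ^ keystream                                 # gray-value diffusion
            return cipher.reshape(img.shape), perm, keystream

        def decrypt(cipher, perm, keystream):
            flat = cipher.reshape(-1) ^ keystream
            out = np.empty_like(flat)
            out[perm] = flat                                                # undo the permutation
            return out.reshape(cipher.shape)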

  15. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    Beier, J

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, a particular emphasis was put on pulmonary themes. For a multitude of purposes the developed methods and procedures can directly be transferred to other non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the softw...

  16. Learning Methods for Recovering 3D Human Pose from Monocular Images

    Agarwal, Ankur; Triggs, Bill

    2004-01-01

    We describe a learning based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We ev...

  17. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption and risks for the security personnel that a manual inspection entails. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is the high computational cost of the numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
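
    The appeal of iterating between projection and backprojection when only few angular positions are available can be demonstrated with a naive 2D refinement loop built on scikit-image's projectors; the container-scale algorithms discussed above are far more elaborate, so this is only meant to make the idea concrete on a toy phantom.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        image = rescale(shepp_logan_phantom(), 0.25)            # stand-in object for the demo
        theta = np.linspace(0.0, 180.0, 50, endpoint=False)     # deliberately few projection angles
        sinogram = radon(image, theta=theta)

        # iterative refinement: repeatedly add the filtered back-projection of the
        # sinogram residual to the current estimate, enforcing non-negativity
        x = np.zeros_like(image)
        step = 0.1
        for _ in range(50):
            residual = sinogram - radon(x, theta=theta)
            x = np.clip(x + step * iradon(residual, theta=theta), 0, None)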


  19. 3D Reconstruction of virtual colon structures from colonoscopy images.

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  20. 3D electrical tomographic imaging using vertical arrays of electrodes

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  1. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a growing need for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model taken by a 3D digitizer are also presented.
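
    The per-pixel reflection model used in that analysis can be illustrated with a minimal dichromatic/Phong evaluation: the observed colour is modelled as a body (diffuse) term carrying the object colour plus a surface (specular) term with the illuminant colour, whose strength varies with the gonio angles. The coefficients below are illustrative, not values from the paper.

        import numpy as np

        def dichromatic_phong(normal, light_dir, view_dir, body_rgb,
                              illuminant_rgb=(1.0, 1.0, 1.0),
                              kd=0.8, ks=0.4, shininess=20.0):
            """Reflected RGB for one surface point under a dichromatic/Phong model."""
            n = normal / np.linalg.norm(normal)
            l = light_dir / np.linalg.norm(light_dir)     # surface -> light
            v = view_dir / np.linalg.norm(view_dir)       # surface -> camera

            diff = max(np.dot(n, l), 0.0)                 # body (diffuse) reflection
            r = 2.0 * np.dot(n, l) * n - l                # mirror direction of the light
            spec = max(np.dot(r, v), 0.0) ** shininess    # surface (specular) lobe

            return kd * diff * np.asarray(body_rgb) + ks * spec * np.asarray(illuminant_rgb)

    Sweeping light_dir over the seven illumination angles shows how the specular highlight moves while the body-colour term stays fixed, which is what the gonio-photometric analysis separates.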

  2. Contributions in compression of 3D medical images and 2D images

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, although the medical community has so far favoured lossless compression, most applications suffer from the low compression ratios this kind of compression provides. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to a compression ratio of 48:1 and remained robust at 96:1. The last contribution consists of reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  3. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Khalid M. Hosny; Hafez, Mohamed A.

    2012-01-01

    An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to add m...
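
    A simplified illustration of the building block the record relies on: discrete 3D geometric moments m_pqr computed with NumPy. The exact monomial integration over voxels, the symmetry-based reduction and the Zernike recombination described in the record are not reproduced here.

        import numpy as np

        def geometric_moments_3d(volume, max_order=4):
            """Discrete 3D geometric moments m_pqr = sum_x sum_y sum_z x^p y^q z^r f(x, y, z)."""
            nx, ny, nz = volume.shape
            x = np.arange(nx, dtype=np.float64)
            y = np.arange(ny, dtype=np.float64)
            z = np.arange(nz, dtype=np.float64)
            # Precompute monomials up to the requested order (separable in x, y, z).
            xp = np.stack([x ** p for p in range(max_order + 1)])
            yq = np.stack([y ** q for q in range(max_order + 1)])
            zr = np.stack([z ** r for r in range(max_order + 1)])
            # Contract the volume against the monomials one axis at a time.
            m = np.einsum("px,xyz->pyz", xp, volume)
            m = np.einsum("qy,pyz->pqz", yq, m)
            m = np.einsum("rz,pqz->pqr", zr, m)
            return m  # m[p, q, r] holds m_pqr for p, q, r <= max_order

        if __name__ == "__main__":
            vol = np.zeros((32, 32, 32)); vol[8:24, 8:24, 8:24] = 1.0   # toy binary object
            m = geometric_moments_3d(vol, max_order=2)
            centroid = (m[1, 0, 0] / m[0, 0, 0], m[0, 1, 0] / m[0, 0, 0], m[0, 0, 1] / m[0, 0, 0])
            print("volume:", m[0, 0, 0], "centroid:", centroid)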

  4. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study were divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because it can publish to different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D, and then published to a web server so that the effectiveness of the different 3D terrain data (contour data) draped with UAV images could be compared. Effectiveness was compared in terms of data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, and therefore help decision makers and planners in this field decide which contour interval is applicable for their task.

  5. Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge

    Ben Hmida, Helmi; Cruz, Christophe; Boochs, Frank; Nicolle, Christophe

    2013-01-01

    This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins, aiming at combining geometrical analysis of 3D point clouds with specialists' knowledge. Here, we share our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. Thi...

  6. EEG-based usability assessment of 3D shutter glasses

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  7. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  8. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner.

    Matheoud, R; Secco, C; Della Monica, P; Leva, L; Sacchetti, G; Inglese, E; Brambilla, M

    2009-10-01

    The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (Ac,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate activity that extends beyond the scanner. The modified IEC phantom was filled with 18F (11 kBq mL-1) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (Ac,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq mL-1. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities to provide Ac,out in the whole scatter phantom of zero, half, unity, twofold and fourfold the one of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of Ac,out on CNR, adjusted for the presence of variables (sphere ID, Ac,bkg and ESD) related to CNR. The presence of outside FOV activity at the same concentration as the one inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity in the range explored. ESD and Ac,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of CNR loss due to an elevated Ac,out seems feasible by modulating the ESD in individual bed positions according to Ac,out.

  9. The effect of activity outside the field of view on image quality for a 3D LSO-based whole body PET/CT scanner

    The purpose of this study was to quantify the influence of outside field of view (FOV) activity concentration (Ac,out) on the noise equivalent count rate (NECR), scatter fraction (SF) and image quality of a 3D LSO whole-body PET/CT scanner. The contrast-to-noise ratio (CNR) was the figure of merit used to characterize the image quality of PET scans. A modified International Electrotechnical Commission (IEC) phantom was used to obtain SF and counting rates similar to those found in average patients. A scatter phantom was positioned at the end of the modified IEC phantom to simulate activity that extends beyond the scanner. The modified IEC phantom was filled with 18F (11 kBq mL-1) and the spherical targets, with internal diameter (ID) ranging from 10 to 37 mm, had a target-to-background ratio of 10. PET images were acquired with background activity concentrations in the FOV (Ac,bkg) of about 11, 9.2, 6.6, 5.2 and 3.5 kBq mL-1. The emission scan duration (ESD) was set to 1, 2, 3 and 4 min. The tube inside the scatter phantom was filled with activities to provide Ac,out in the whole scatter phantom of zero, half, unity, twofold and fourfold the one of the modified IEC phantom. Plots of CNR versus the various parameters are provided. Multiple linear regression was employed to study the effects of Ac,out on CNR, adjusted for the presence of variables (sphere ID, Ac,bkg and ESD) related to CNR. The presence of outside FOV activity at the same concentration as the one inside the FOV reduces peak NECR by 30%. The increase in SF is marginal (1.2%). CNR diminishes significantly with increasing outside FOV activity in the range explored. ESD and Ac,out have a similar weight in accounting for CNR variance. Thus, an experimental law that adjusts the scan duration to the outside FOV activity can be devised. Recovery of CNR loss due to an elevated Ac,out seems feasible by modulating the ESD in individual bed positions according to Ac,out.

  10. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that requires the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can then be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes have the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.
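
    A minimal sketch of the piecewise affine warp driven by a tetrahedral mesh, using SciPy's Delaunay tetrahedralization and barycentric coordinates; the mesh construction, MDL correspondence and active appearance model steps are not reproduced, and all coordinates below are synthetic.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)

        # Reference tetrahedral mesh: vertices in the reference MR image (illustrative data).
        # Cube corners are appended so that all test points fall inside the mesh.
        corners = np.array([[x, y, z] for x in (0, 100) for y in (0, 100) for z in (0, 100)], float)
        ref_vertices = np.vstack([rng.uniform(0, 100, size=(50, 3)), corners])
        mesh = Delaunay(ref_vertices)              # tetrahedralization of the reference vertices

        # Corresponding vertex positions in a target MR image (e.g. from non-rigid registration).
        tgt_vertices = ref_vertices + rng.normal(0, 2.0, size=ref_vertices.shape)

        def piecewise_affine_map(points, mesh, tgt_vertices):
            """Warp points from the reference frame to the target frame, tetrahedron by tetrahedron."""
            simplex = mesh.find_simplex(points)
            if np.any(simplex < 0):
                raise ValueError("some points fall outside the tetrahedral mesh")
            # Barycentric coordinates inside each containing tetrahedron.
            T = mesh.transform[simplex]                      # (n, 4, 3)
            b = np.einsum("nij,nj->ni", T[:, :3, :], points - T[:, 3, :])
            bary = np.c_[b, 1.0 - b.sum(axis=1)]             # (n, 4)
            # Apply the same barycentric weights to the target vertex positions.
            return np.einsum("ni,nij->nj", bary, tgt_vertices[mesh.simplices[simplex]])

        pts = rng.uniform(30, 70, size=(10, 3))              # e.g. voxels on a manual segmentation
        print(piecewise_affine_map(pts, mesh, tgt_vertices))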

  11. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. Filtered back-projection has traditionally been used to reconstruct images from projections; iterative algorithms such as the algebraic reconstruction technique (ART) were developed later, and compressed-sensing-based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
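
    A toy sketch of the ART+TV combination such a simulator implements: one Kaczmarz (ART) sweep per iteration followed by a total-variation denoising step (scikit-image's Chambolle TV). The system matrix below is a random stand-in for the tomosynthesis projection geometry, and the relaxation and TV weights are illustrative.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(1)

        # Toy linear system  A x = b  standing in for the tomosynthesis projection model.
        n_pix, n_rays = 16 * 16, 600
        x_true = np.zeros(n_pix); x_true[60:200] = 1.0
        A = rng.random((n_rays, n_pix))
        b = A @ x_true

        def art_sweep(x, A, b, relax=0.1):
            """One Kaczmarz (ART) pass over all rays."""
            for i in range(A.shape[0]):
                a = A[i]
                x += relax * (b[i] - a @ x) / (a @ a) * a
            return x

        x = np.zeros(n_pix)
        for it in range(10):
            x = art_sweep(x, A, b)
            # TV regularization step between ART sweeps (weight is an illustrative choice).
            x = denoise_tv_chambolle(x.reshape(16, 16), weight=0.05).ravel()

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))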

  12. Virtual reality 3D headset based on DMD light modulators

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMDs). Current methods for presenting information for virtual reality rely either on polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or on miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, but they discard 50% of the viewing light due to the use of polarization. Viewable LCD or LED screens may suffer from low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p-resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described, in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the retina, resulting in a virtual retinal display.

  13. Fast 3-d tomographic microwave imaging for breast cancer detection.

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  14. Fast 3D subsurface imaging with stepped-frequency GPR

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
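
    A generic sketch of the sparsity-regularized linearized inversion, using plain ISTA (a gradient step plus soft-thresholding) on a dense toy operator; the accelerated NUFFT forward/adjoint operators and the ground-plane registration of the actual algorithm are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy linearized model  y = A x + noise,  with a sparse reflectivity x.
        m, n = 200, 400
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8) * 3
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        def soft_threshold(v, tau):
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def ista(A, y, lam=0.05, n_iter=300):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x + (A.T @ (y - A @ x)) / L, lam / L)
            return x

        x_hat = ista(A, y)
        print("support recovered:", np.sort(np.flatnonzero(np.abs(x_hat) > 0.1)))
        print("true support:     ", np.sort(np.flatnonzero(x_true)))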

  15. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). (paper)
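
    A minimal sketch of the underlying 'hand-eye' formulation A_i X = X B_i, solved with a closed-form Procrustes step for rotation and a linear least-squares step for translation; this is the textbook solver applied to synthetic transforms, not the authors' registration-driven, self-consistency-filtered pipeline.

        import numpy as np
        from scipy.spatial.transform import Rotation

        rng = np.random.default_rng(3)

        def solve_hand_eye(A_list, B_list):
            """Solve A_i X = X B_i for the 4x4 calibration transform X (rotation + translation)."""
            alphas = np.array([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
            betas = np.array([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
            # Orthogonal Procrustes: find R_X minimizing sum ||R_X beta_i - alpha_i||^2.
            C = np.einsum("ni,nj->ij", alphas, betas)
            U, _, Vt = np.linalg.svd(C)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
            R_X = U @ D @ Vt
            # Translation from (R_A - I) t_X = R_X t_B - t_A, stacked over all motion pairs.
            M = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
            d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
            t_X, *_ = np.linalg.lstsq(M, d, rcond=None)
            X = np.eye(4); X[:3, :3] = R_X; X[:3, 3] = t_X
            return X

        # Synthetic test: build a ground-truth X and consistent motion pairs (A_i, B_i).
        def rand_T():
            T = np.eye(4)
            T[:3, :3] = Rotation.from_rotvec(rng.normal(size=3)).as_matrix()
            T[:3, 3] = rng.uniform(-50, 50, 3)
            return T

        X_true = rand_T()
        B_list = [rand_T() for _ in range(10)]
        A_list = [X_true @ B @ np.linalg.inv(X_true) for B in B_list]
        X_est = solve_hand_eye(A_list, B_list)
        print("max abs error:", np.abs(X_est - X_true).max())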

  16. 3D reconstruction of worn parts for flexible remanufacture based on robotic arc welding

    Yin Ziqiang; Zhang Guangjun; Gao Hongming; Wu Lin

    2010-01-01

    3D reconstruction of worn parts is the foundation of a remanufacturing system based on robotic arc welding, because it provides the 3D geometric information needed for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured-light vision sensing is developed. The system hardware consists of an MTC368-CB CCD camera, an MLH-645 laser projector and a DH-CG300 image grabbing card. The system software was developed to control image data capture. In order to reconstruct the 3D geometric information from the captured images, a two-step rapid calibration algorithm is proposed. The 3D reconstruction experiment shows a satisfactory result.

  17. Atlas Based Automatic Liver 3D CT Image Segmentation

    刘伟; 贾富仓; 胡庆茂; 王俊

    2011-01-01

    Objective: Liver segmentation is an important step in the planning and navigation of liver surgery, and computing the liver volume is an important part of liver surgery and liver pathology research. Accurate, fast and robust automatic segmentation methods for clinical routine data are urgently needed. Because of the liver's characteristics, such as the complexity of its external shape and the similarity between the intensities of the liver and the surrounding tissues, automatic segmentation of the liver remains one of the difficult problems in medical image processing. Methods: In this paper, 3D non-rigid registration from a refined atlas to liver CT images is used for segmentation. First, twenty sets of training images are used to create an atlas. Then the initial liver region is searched and located automatically, and threshold filtering is used to enhance the robustness of the segmentation. Finally, the atlas is non-rigidly registered to the liver CT images with affine and B-spline registration in succession. The registered liver contour of the atlas represents the segmentation of the target liver, from which the liver volume is calculated. Results: The evaluation shows that the proposed method performs well in terms of liver volume error, reaching a score of 77, although larger errors appear locally (mainly at the tip of the liver). Conclusion: The method is feasible for segmenting clinical liver CT images.
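
    A minimal sketch, assuming SimpleITK, of the affine stage of atlas-to-patient registration followed by label propagation and volume estimation; the liver-region search, threshold filtering and B-spline refinement described above are omitted, and the file names are placeholders.

        import SimpleITK as sitk

        # Placeholder file names: an intensity atlas, its liver label map, and the target CT.
        atlas = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)
        atlas_label = sitk.ReadImage("atlas_liver_label.nii.gz", sitk.sitkUInt8)
        target = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

        # Affine registration of the atlas onto the patient CT (mutual information metric).
        init = sitk.CenteredTransformInitializer(
            target, atlas, sitk.AffineTransform(3),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.1)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()
        reg.SetInitialTransform(init, inPlace=False)
        affine = reg.Execute(target, atlas)

        # Propagate the atlas liver label to the patient CT (nearest-neighbour interpolation),
        # then estimate the liver volume from the voxel count and the voxel spacing.
        warped_label = sitk.Resample(atlas_label, target, affine, sitk.sitkNearestNeighbor, 0)
        stats = sitk.StatisticsImageFilter()
        stats.Execute(warped_label)
        spacing = target.GetSpacing()
        voxel_ml = spacing[0] * spacing[1] * spacing[2] / 1000.0
        print("estimated liver volume: %.1f mL" % (stats.GetSum() * voxel_ml))  # 0/1 label assumed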

  18. 3-D MR imaging of ectopia vasa deferentia

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  19. Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images

    Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

    2013-04-01

    The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses with usually one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk-the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high-much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.

  20. Dosimetric analysis of 3D image-guided HDR brachytherapy planning for the treatment of cervical cancer: is point A-based dose prescription still valid in image-guided brachytherapy?

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful

    2011-01-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and to compare dose coverage of the high-risk clinical target volume (HRCTV) with the traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). The brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total of 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for the HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for the HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for the HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p IGBT in HDR cervical cancer treatment needs advanced concept of evaluation in dosimetry with clinical outcome data about whether this approach improves local control and/or decreases toxicities. PMID:20488690
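
    A worked example of the EQD2 normalization used to summate external beam and brachytherapy doses, assuming the commonly used alpha/beta values (10 Gy for tumour, 3 Gy for OARs); the schedule below is illustrative, not patient data from the study.

        def eqd2(dose_per_fraction, n_fractions, alpha_beta):
            """Equivalent dose in 2 Gy fractions: EQD2 = n*d * (d + a/b) / (2 + a/b)."""
            d = dose_per_fraction
            return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

        # Illustrative schedule: 45 Gy EBRT in 25 fractions plus 5 HDR fractions of 6 Gy to the HRCTV.
        ebrt = eqd2(1.8, 25, alpha_beta=10.0)      # external beam contribution
        brachy = eqd2(6.0, 5, alpha_beta=10.0)     # brachytherapy contribution (HRCTV D90 proxy)
        print("total HRCTV EQD2: %.1f Gy" % (ebrt + brachy))   # 44.25 + 40.0 = 84.25 Gy

        # The same summation for an OAR dose (e.g. rectum D2cc) would use alpha_beta=3.0.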

  1. GPU-accelerated denoising of 3D magnetic resonance images

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
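
    A CPU-side sketch of the kind of parameter comparison described, using scikit-image's bilateral filter and the MSE/SSIM metrics on a noisy test image; the GPU kernels, 3D filtering and autotuning of the study are not reproduced, and the parameter values are illustrative.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_bilateral
        from skimage.metrics import mean_squared_error, structural_similarity

        rng = np.random.default_rng(4)

        reference = img_as_float(data.camera())                     # stand-in for one MR slice
        noisy = np.clip(reference + rng.normal(0, 0.08, reference.shape), 0, 1)

        # Two illustrative parameter choices: a small and a larger spatial support.
        settings = {"small stencil": dict(sigma_color=0.1, sigma_spatial=2, win_size=5),
                    "large stencil": dict(sigma_color=0.1, sigma_spatial=6, win_size=15)}

        for name, kwargs in settings.items():
            den = denoise_bilateral(noisy, **kwargs)
            mse = mean_squared_error(reference, den)
            ssim = structural_similarity(reference, den, data_range=1.0)
            print(f"{name}: MSE={mse:.5f}  MSSIM={ssim:.4f}")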

  2. Powder-based 3D printing for bone tissue engineering.

    Brunello, G; Sivolella, S; Meneghello, R; Ferroni, L; Gardin, C; Piattelli, A; Zavan, B; Bressan, E

    2016-01-01

    Bone tissue engineered 3-D constructs customized to patient-specific needs are emerging as attractive biomimetic scaffolds to enhance bone cell and tissue growth and differentiation. The article outlines the features of the most common additive manufacturing technologies (3D printing, stereolithography, fused deposition modeling, and selective laser sintering) used to fabricate bone tissue engineering scaffolds. It concentrates, in particular, on the current state of knowledge concerning powder-based 3D printing, including a description of the properties of powders and binder solutions, the critical phases of scaffold manufacturing, and its applications in bone tissue engineering. Clinical aspects and future applications are also discussed. PMID:27086202

  3. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    Objective: To improve the accuracy of epilepsy diagnosis by integrating MR images from PACS with neurological electrophysiology data. The integration is also important for transmitting diagnostic information to the 3D treatment planning system (TPS) for radiotherapy. Methods: The electroencephalogram was redisplayed on an EEG workstation, while the MR images were reconstructed with the Brainvoyager software. A 3D model of the patient's brain was built by combining the reconstructed images with the electroencephalogram data in Base 2000. Thirty epileptic patients (18 males and 12 females), aged 12 to 54 years, had their epileptic foci confirmed using the integrated MR images, the neurological electrophysiology data and 3D stereotactic localization. Results: The 3D model reflected the actual state of the patient's brain and visually located the precise position of the focus. The success rate of 3D-guided operations was greatly improved, and the number of epileptic seizures was markedly decreased. Seizures ceased for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of operations and for epilepsy analysis. (authors)

  4. Simulation and experimental studies of three-dimensional (3D) image reconstruction from insufficient sampling data based on compressed-sensing theory for potential applications to dental cone-beam CT

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient sampling data. In computed tomography (CT), for example, image reconstruction from sparse views and/or limited-angle (<360°) views would enable fast scanning with reduced imaging dose to the patient. In this study, we investigated and implemented a reconstruction algorithm based on compressed-sensing (CS) theory, which exploits the sparseness of the gradient image with substantially high accuracy, for potential applications to low-dose, highly accurate dental cone-beam CT (CBCT). We performed systematic simulation studies to investigate the image characteristics and experimental studies in which the algorithm was applied to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images of superior accuracy from insufficient sampling data and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from insufficient data indicate that the CS-based algorithm can be applied directly to current dental CBCT systems to reduce imaging dose and further improve image quality.

  5. Simulation and experimental studies of three-dimensional (3D) image reconstruction from insufficient sampling data based on compressed-sensing theory for potential applications to dental cone-beam CT

    Je, U.K.; Lee, M.S.; Cho, H.S., E-mail: hscho1@yonsei.ac.kr; Hong, D.K.; Park, Y.O.; Park, C.K.; Cho, H.M.; Choi, S.I.; Woo, T.H.

    2015-06-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient sampling data. In computed tomography (CT), for example, image reconstruction from sparse views and/or limited-angle (<360°) views would enable fast scanning with reduced imaging dose to the patient. In this study, we investigated and implemented a reconstruction algorithm based on compressed-sensing (CS) theory, which exploits the sparseness of the gradient image with substantially high accuracy, for potential applications to low-dose, highly accurate dental cone-beam CT (CBCT). We performed systematic simulation studies to investigate the image characteristics and experimental studies in which the algorithm was applied to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images of superior accuracy from insufficient sampling data and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from insufficient data indicate that the CS-based algorithm can be applied directly to current dental CBCT systems to reduce imaging dose and further improve image quality.

  6. Performance assessment of 3D surface imaging technique for medical imaging applications

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures and shapes, and across varying ambient lighting, are crucial. Until now, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  7. High resolution 3D imaging of synchrotron generated microbeams

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  8. High resolution 3D imaging of synchrotron generated microbeams

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery

  9. Fast fully 3-D image reconstruction in PET using planograms.

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
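
    A lower-dimensional (2D) numerical check of the central-section idea that underlies the planogram method: the 1D Fourier transform of a parallel projection equals a central line of the image's 2D Fourier transform. The full 4D planogram machinery is not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        image = rng.random((128, 128))

        # Parallel projection along the y axis (axis 0): p(x) = sum_y f(y, x).
        projection = image.sum(axis=0)

        # Central-section (Fourier slice) theorem: FT{p}(kx) = F(kx, ky = 0).
        P = np.fft.fft(projection)
        F = np.fft.fft2(image)
        central_line = F[0, :]                      # ky = 0 row of the 2D spectrum

        print("max |difference|:", np.max(np.abs(P - central_line)))   # ~1e-12, i.e. numerical precision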

  10. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  11. 3D Visual SLAM Based on Multiple Iterative Closest Point

    Chunguang Li; Chongben Tao; Guodong Liu

    2015-01-01

    With the development of novel RGB-D visual sensors, data association has been a basic problem in 3D Visual Simultaneous Localization and Mapping (VSLAM). To solve the problem, a VSLAM algorithm based on Multiple Iterative Closest Point (MICP) is presented. By using both RGB and depth information obtained from RGB-D camera, 3D models of indoor environment can be reconstructed, which provide extensive knowledge for mobile robots to accomplish tasks such as VSLAM and Human-Robot Interaction. Due...

  12. Pattern of cerebral hyperperfusion in Alzheimer's disease and amnestic mild cognitive impairment using voxel-based analysis of 3D arterial spin-labeling imaging: initial experience

    Ding B

    2014-03-01

    Bei Ding,1 Hua-wei Ling,1 Yong Zhang,2 Juan Huang,1 Huan Zhang,1 Tao Wang,3 Fu Hua Yan1 1Department of Radiology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, 2Applied Science Laboratory, GE Healthcare, 3Department of Gerontology, Shanghai Mental Health Center, Shanghai, People's Republic of China. Purpose: A three-dimensional (3D) continuous pulse arterial spin labeling (ASL) technique was used to investigate cerebral blood flow (CBF) changes in patients with Alzheimer's disease (AD), amnestic mild cognitive impairment (aMCI), and age- and sex-matched healthy controls. Materials and methods: Three groups were recruited for comparison: 24 AD patients, 17 aMCI patients, and 21 age- and sex-matched control subjects. Three-dimensional ASL scans covering the entire brain were acquired with a 3.0 T magnetic resonance scanner. Spatial processing was performed with statistical parametric mapping 8. A second-level one-way analysis of variance (threshold at P<0.05) was performed on the preprocessed ASL data. The average whole-brain CBF for each subject was also included as a group-level covariate for the perfusion data, to control for individual CBF variations. Results: Significantly increased CBF was detected in the bilateral frontal lobes and right temporal subgyral regions in aMCI compared with controls. When comparing AD with aMCI, the major hyperperfusion regions were the right limbic lobe and basal ganglia regions, including the putamen, caudate, lentiform nucleus, and thalamus, and hypoperfusion was found in the left medial frontal lobe, the parietal cortex, the right middle temporo-occipital lobe, and particularly the left anterior cingulate gyrus. We also found decreased CBF in the bilateral temporo-parieto-occipital cortices and left limbic lobe in AD patients relative to the control group. aMCI subjects showed decreased blood flow in the left occipital lobe, bilateral inferior temporal cortex, and right middle temporal cortex

  13. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe the statistics of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering, and we show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  14. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level that combines MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images are taken at a later epoch and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  15. Note: An improved 3D imaging system for electron-electron coincidence measurements

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen, E-mail: wli@chem.wayne.edu [Department of Chemistry, Wayne State University, Detroit, Michigan 48202 (United States)

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  16. ROIC for gated 3D imaging LADAR receiver

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low noise optical receivers to detect fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 um pitch was designed as a gated optical receiver for 3D LADAR. The ROIC works at 77 K and comprises the unit cell circuit, the column-level circuit, timing control, bias circuitry and an output driver. The unit cell circuit is the key component; it consists of a preamplifier, correlated double sampling (CDS), a bias circuit and a timing control module. The preamplifier uses a capacitive transimpedance amplifier (CTIA) structure with two switchable feedback capacitors for passive/active dual-mode imaging. The main block of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors, whose working characteristics make them well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns, and for integration currents from 200 nA to 4 uA the nonlinearity is less than 1%. In passive imaging mode, the integration time is 150 ns, and for integration currents from 1 nA to 20 nA the nonlinearity is less than 1%.
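
    A back-of-the-envelope check of the CTIA charge integration described above, V = I * t_int / C_fb; the feedback capacitance is an assumed value, since the record does not give it.

        # Output swing of a capacitive transimpedance amplifier: V = I * t_int / C_fb.
        # C_FB is an assumed value for illustration; the record does not specify it.
        C_FB = 200e-15          # 200 fF feedback capacitance (assumption)
        T_INT_ACTIVE = 80e-9    # 80 ns integration time in active imaging mode (from the record)

        for current in (200e-9, 1e-6, 4e-6):    # 200 nA .. 4 uA photocurrent range
            v_out = current * T_INT_ACTIVE / C_FB
            print(f"I = {current*1e6:6.2f} uA  ->  V_out = {v_out*1e3:7.1f} mV")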

  17. A new method of 3D scene recognition from still images

    Zheng, Li-ming; Wang, Xing-song

    2014-04-01

    Most methods of monocular visual three-dimensional (3D) scene recognition involve supervised machine learning. However, these methods often rely on prior knowledge: they learn the image scene from a training dataset, so when the sampling equipment or the scene changes, monocular visual 3D scene recognition may fail. To cope with this problem, a new unsupervised learning method for monocular visual 3D scene recognition is proposed here. First, the image is partitioned by superpixel segmentation based on the CIELAB color space values L, a, and b and on the pixel coordinates x and y, forming a superpixel image with a specific density. Second, a spectral clustering algorithm based on the superpixels' color characteristics and neighboring relationships is used to reduce the dimensionality of the superpixel image. Third, fuzzy distribution density functions representing sky, ground, and façade are multiplied with the segment pixels to obtain the expectations of these segments, yielding a preliminary classification of sky, ground, and façade. Fourth, the most accurate classification images of sky, ground, and façade are extracted using tier-1 wavelet sampling and the Manhattan direction feature. Finally, a depth perception map is generated based on the pinhole imaging model and the linear perspective information of the ground surface. Here, 400 images from the Make3D image dataset of Cornell University were used to test the algorithm. The experimental results show that this unsupervised learning method provides a more effective monocular visual 3D scene recognition model than other methods.
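
    A minimal sketch of the first two stages (superpixel segmentation on Lab colour, then spectral clustering of per-superpixel features), assuming scikit-image and scikit-learn; the fuzzy sky/ground/façade densities, wavelet step and depth map generation are not reproduced, and all parameter values are illustrative.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.color import rgb2lab
        from skimage.segmentation import slic
        from sklearn.cluster import SpectralClustering

        image = img_as_float(data.astronaut())          # stand-in for an outdoor scene

        # Stage 1: superpixel segmentation driven by CIELAB values and pixel coordinates.
        labels = slic(image, n_segments=300, compactness=10)

        # Mean Lab color and mean normalized position per superpixel as clustering features.
        lab = rgb2lab(image)
        h, w = labels.shape
        yy, xx = np.mgrid[0:h, 0:w]
        unique_labels, inv = np.unique(labels, return_inverse=True)
        feats = []
        for s in unique_labels:
            mask = labels == s
            feats.append([lab[..., 0][mask].mean(), lab[..., 1][mask].mean(),
                          lab[..., 2][mask].mean(), yy[mask].mean() / h, xx[mask].mean() / w])
        feats = np.array(feats)

        # Stage 2: spectral clustering of the superpixels into a small number of regions.
        sc = SpectralClustering(n_clusters=3, affinity="nearest_neighbors", n_neighbors=10,
                                assign_labels="kmeans", random_state=0)
        region_of_superpixel = sc.fit_predict(feats)
        region_map = region_of_superpixel[inv.reshape(labels.shape)]   # per-pixel region labels
        print("superpixels:", feats.shape[0], "regions:", np.unique(region_of_superpixel))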

  18. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  19. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3d Structure Lines

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has undergone a revolution in the last few years. For such applications, a large number of images, acquired with high-resolution industrial cameras positioned close to the bottom surfaces on a mobile platform, must be stitched into a wide-view single composite image. The conventional approach of stitching a panorama with the affine model or the homographic model often suffers from serious problems due to poor texture and the out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of bridge bottom surfaces, which are extracted from 3D camera data. First, we propose to initially align each image geometrically based on its rough position and orientation acquired with both a laser range finder (LRF) and a high-precision incremental encoder, and these images are divided into several groups using the rough position and orientation data. Secondly, the 3D structure lines of the bridge bottom surfaces are extracted from the 3D point clouds acquired with 3D cameras, which impose additional strong constraints on the geometrical alignment of structure lines in adjacent images to perform a position and orientation optimization in each group, increasing local consistency. Thirdly, a homographic refinement between groups is applied to increase global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which largely eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of the proposed approaches.

  20. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon, coated with silver, for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed that it can support several plasmonic modes (in the 300-800 nm wavelength range) that can couple to a fluorophore on the surface of the substrate, giving rise to the enhanced fluorescence. Spectral analysis suggests that the nanocone substrate creates more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G), due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on it. The fluorescence intensity from the fluorophores bound to the cell membrane was amplified more than 100-fold compared to that on a glass substrate. We believe that strong scattering within the nanostructured area, coupled with random scattering inside the cell, resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  1. Generation of 3D Virtual Geographic Environment Based on Laser Scanning Technique

    DU Jie; CHEN Xiaoyong; Fumio Yamazaki

    2003-01-01

    This paper demonstrates an experiment on the generation of a 3D virtual geographic environment from experimental flight laser scanning data, using a set of algorithms and methods developed to automatically interpret range images, extract geo-spatial features and reconstruct geo-objects. The algorithms and methods for the interpretation and modeling of laser scanner data include triangulated-irregular-network (TIN)-based range image interpolation; mathematical-morphology (MM)-based range image filtering, feature extraction and range image segmentation; feature generalization and optimization; 3D object reconstruction and modeling; and computer-graphics (CG)-based visualization and animation of the virtual geographic environment.
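
    As a rough illustration of the morphology-based range-image filtering mentioned above, the sketch below applies a grey-scale morphological opening to a gridded range image to estimate the ground surface and flags cells that rise above it as object (non-ground) returns. This is a generic technique under assumed parameters, not the authors' specific algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def filter_range_image(range_img, window=15, height_thresh=0.5):
        """Separate ground from above-ground returns in a gridded range image.

        range_img     : 2D array of terrain heights (e.g. interpolated from a TIN).
        window        : structuring-element size in grid cells (assumed value).
        height_thresh : height above the opened surface beyond which a cell is
                        labelled as an object (same units as range_img).
        """
        # Grey-scale opening removes features narrower than the window,
        # leaving an approximation of the bare ground surface.
        ground = ndimage.grey_opening(range_img, size=(window, window))

        # Cells standing clearly above the opened surface are treated as
        # buildings, trees and other geo-objects to be reconstructed later.
        object_mask = (range_img - ground) > height_thresh
        return ground, object_mask

    # Example on synthetic data: a flat plane with a small raised block.
    dem = np.zeros((100, 100))
    dem[40:60, 40:60] = 5.0
    ground, objects = filter_range_image(dem)
    print(objects.sum(), "cells classified as object returns")
    ```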

  2. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include the automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important to biologists, as it affects chromosome function and the behavior and state of the cell. Chromosome analysis is significant for the detection of diseases and for monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
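
    A minimal per-slice sketch of the segmentation stage is given below; it substitutes Otsu thresholding and marching-squares contours (from scikit-image) for the paper's histogram/polyline-splitting threshold and active contours, and simply stacks the 2D contours into a 3D point cloud that could later be meshed. Function names and the spacing parameter are illustrative assumptions.

    ```python
    import numpy as np
    from skimage import filters, measure

    def chromosome_point_cloud(slices, z_spacing=1.0):
        """Stack per-slice contours of bright chromosome regions into 3D points.

        slices    : list of 2D grayscale tomographic reconstruction images.
        z_spacing : distance between slices (assumed unit spacing by default).
        """
        points = []
        for z, img in enumerate(slices):
            thresh = filters.threshold_otsu(img)        # global threshold per slice
            mask = img > thresh
            # Marching-squares contours at the mask boundary (stand-in for
            # the active-contour step described in the abstract).
            for contour in measure.find_contours(mask.astype(float), 0.5):
                for row, col in contour:
                    points.append((col, row, z * z_spacing))
        return np.asarray(points)
    ```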

  3. AR based ornament design system for 3D printing

    Hiroshi Aoki

    2015-01-01

    In recent years, 3D printers have become popular as a means of outputting geometries designed in CAD or 3D graphics systems. However, the complex user interfaces of standard 3D software can make it difficult for ordinary consumers to design their own objects. Furthermore, models designed in 3D graphics software often have geometrical problems that make them impossible to output on a 3D printer. We propose a novel AR (augmented reality) 3D modeling system with an air-spray-like interface. We also propose a new data structure (the octet voxel) for representing designed models in such a way that the model is guaranteed to be a complete solid. The target shape is based on a regular polyhedron, and the octet voxel representation is suitable for designing geometrical objects having the same symmetries as the base regular polyhedron. Finally, we conducted a user test and confirmed that users can intuitively design their own ornaments in a short time with a simple user interface.

  4. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    T. T. Truong

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform has been introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to the Compton camera imaging principles, and is invertible under special conditions. As these transforms are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant to active researchers in the field.
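
    For reference, the classical two-dimensional Radon transform that underlies conventional (unscattered-photon) tomography, and of which the conical transforms are generalizations, can be written as follows; this is the standard textbook form, not a formula taken from the paper itself.

    ```latex
    % Classical 2D Radon transform: integral of a density f over the line
    % with unit normal (cos(theta), sin(theta)) at signed distance s from the origin.
    \[
      \mathcal{R}f(\theta, s)
      = \int_{\mathbb{R}^2} f(\mathbf{x})\,
        \delta\bigl(s - \mathbf{x}\cdot(\cos\theta,\, \sin\theta)\bigr)\,\mathrm{d}\mathbf{x}.
    \]
    % The conical Radon transforms discussed above replace this family of lines
    % with families of cone surfaces determined by the Compton scattering geometry.
    ```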

  5. Fast 3D T1-weighted brain imaging at 3 Tesla with modified 3D FLASH sequence

    Longitudinal relaxation times (T1) of white and gray matter become close at high magnetic field. Therefore, classical T1-sensitive methods, like spoiled FLASH, fail to give sufficient contrast in human brain imaging at 3 Tesla. An excellent T1 contrast can be achieved at high field by gradient echo imaging with a preparatory inversion pulse. The inversion recovery (IR) preparation can be combined with fast 2D gradient echo scans. In this paper we present an application of this technique to rapid 3-dimensional imaging. The new technique, called 3D SIR FLASH, was implemented on a Bruker MSLX system equipped with a 3 T, 90 cm horizontal bore magnet operating at the Centre Hospitalier in Rouffach, France. The new technique was used to compare MRI images of healthy volunteers with those obtained with traditional 3D imaging. White and gray matter are clearly distinguishable when 3D SIR FLASH is used. The total acquisition time for a 128x128x128 image was 5 minutes. Three-dimensional visualization with facet representation of surfaces and oblique sections was done off-line on an INDIGO Extreme workstation. The new technique is widely used at FORENAP, Centre Hospitalier in Rouffach, Alsace. (author)
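
    The T1 contrast produced by the inversion-recovery preparation follows from standard longitudinal relaxation after an inversion pulse; for an inversion time TI, and assuming ideal inversion with full recovery between repetitions, the longitudinal magnetization is:

    ```latex
    % Longitudinal magnetization at inversion time TI after an ideal 180-degree
    % inversion pulse, for a tissue with equilibrium magnetization M0 and
    % longitudinal relaxation time T1 (full recovery between repetitions assumed).
    \[
      M_z(\mathrm{TI}) = M_0 \left( 1 - 2\, e^{-\mathrm{TI}/T_1} \right)
    \]
    % Because the exponential term differs strongly between tissues with
    % different T1, an appropriate choice of TI restores the white/gray matter
    % contrast that spoiled FLASH alone loses at 3 T.
    ```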

  6. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to obtain anatomically meaningful objects. The basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and knowledge about the position of anatomical objects relative to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convolved with the point spread function of the functional data to take into account the different resolutions of the morphological and functional modalities. This method allows individual atlas extraction from the MRI image data of a patient without needing to warp the individual data to an anatomical or statistical MRI brain atlas.
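
    The PSF-convolution step for atlas-based quantification might look roughly like the following sketch, which smooths a binary ROI mask with a Gaussian approximation of the functional point spread function before computing an ROI mean. The Gaussian PSF model, the FWHM value, and the isotropic voxel size are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def psf_matched_roi_mean(functional_img, roi_mask, fwhm_mm, voxel_size_mm):
        """Average a functional image over an anatomical ROI whose binary mask
        has been convolved with a Gaussian approximation of the functional PSF.

        functional_img : 3D array (e.g. a coregistered PET or SPECT volume).
        roi_mask       : 3D boolean array from the atlas extraction step.
        fwhm_mm        : assumed full width at half maximum of the functional PSF.
        voxel_size_mm  : isotropic voxel size of both volumes (assumption).
        """
        sigma_vox = fwhm_mm / (2.354820045 * voxel_size_mm)  # FWHM -> sigma, in voxels
        weights = gaussian_filter(roi_mask.astype(float), sigma_vox)
        # Weighted mean: voxels near the ROI boundary contribute less, mimicking
        # how the lower-resolution functional data mixes neighbouring tissue.
        return float((functional_img * weights).sum() / weights.sum())
    ```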

  7. Multimodal Registration and Fusion for 3D Thermal Imaging

    Moulay A. Akhloufi; Benjamin Verney

    2015-01-01

    3D vision is an area of computer vision that has attracted a great deal of research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community, driven by recent advances in 3D technologies that enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis. However, we are not able to detect subsurface defects. This kind ...

  8. Web tools for large-scale 3D biological images and atlases

    Husz Zsolt L

    2012-06-01

    Background: Large-scale volumetric biomedical image data of three or more dimensions pose a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data within a standard web browser. The system provides interactive visualisation of grey-level and colour 3D images, including multiple image layers and spatial-data overlays. Results: The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol together with a series of Ajax/JavaScript client codes that run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provides multi-layer image views with user-controlled colour filtering and overlays. Conclusions: Interactive browsing of arbitrary sections through large biomedical image volumes is made possible by use of an extended Internet protocol and efficient server-based image tiling. The tools open the possibility of fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.
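
    The core server-side operation, resampling an arbitrary oblique 2D section from a 3D volume and cutting it into fixed-size tiles, might look roughly like the sketch below. This is a generic illustration using scipy interpolation, not the IIP3D server's actual implementation; the plane parameterization and the tile size are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def oblique_section(volume, origin, u_dir, v_dir, width, height):
        """Resample an arbitrary plane through a 3D volume.

        volume        : 3D array indexed as (z, y, x).
        origin        : 3-vector, voxel coordinates of the section's corner.
        u_dir, v_dir  : 3-vectors spanning the plane (one voxel per output pixel).
        width, height : output section size in pixels.
        """
        u, v = np.meshgrid(np.arange(width), np.arange(height))
        coords = (np.asarray(origin)[:, None, None]
                  + np.asarray(u_dir)[:, None, None] * u
                  + np.asarray(v_dir)[:, None, None] * v)
        # Trilinear interpolation of the volume at the plane's sample points.
        return map_coordinates(volume, coords, order=1, mode='nearest')

    def tiles(section, tile=256):
        """Cut a 2D section into fixed-size tiles (edge tiles may be smaller)."""
        for y in range(0, section.shape[0], tile):
            for x in range(0, section.shape[1], tile):
                yield (x // tile, y // tile), section[y:y + tile, x:x + tile]
    ```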

  9. Computer assisted determination of acetabular cup orientation using 2D-3D image registration

    2D-3D image-based registration methods have been developed to measure acetabular cup orientation after total hip arthroplasty (THA). These methods require registration of both the prosthesis and the CT images to 2D radiographs and compute the implant position with respect to a reference. The application of these methods is limited in clinical practice due to two limitations: (1) the requirement of a computer-aided design (CAD) model of the prosthesis, which may be unavailable due to the proprietary concerns of the manufacturer, and (2) the requirement of either multiple radiographs or radiograph-specific calibration, usually unavailable for retrospective studies. In this paper, we propose a new method to address these limitations. A new formulation for determination of post-operative cup orientation, which couples a radiographic measurement with 2D-3D image matching, was developed. In our formulation, the radiographic measurement can be obtained with known methods, so that the challenge lies in the 2D-3D image matching. To solve this problem, a hybrid 2D-3D registration scheme combining a landmark-to-ray 2D-3D alignment with a robust intensity-based 2D-3D registration was used. The hybrid 2D-3D registration scheme allows computing both the post-operative cup orientation with respect to an anatomical reference and the pelvic tilt and rotation with respect to the X-ray imaging table/plate. The method was validated using 2D adult cadaver hips. Using the hybrid 2D-3D registration scheme, our method showed a mean accuracy of 1.0° ± 0.7° (range from 0.1° to 2.0°) for inclination and 1.7° ± 1.2° (range from 0.0° to 3.9°) for anteversion, taking the measurements from post-operative CT images as ground truth. Our new solution formulation and the hybrid 2D-3D registration scheme facilitate estimation of the post-operative cup orientation and measurement of pelvic tilt and rotation. (orig.)

  10. Computer assisted determination of acetabular cup orientation using 2D-3D image registration

    Zheng, Guoyan; Zhang, Xuan [University of Bern, Institute for Surgical Technology and Biomechanics, Bern (Switzerland)

    2010-09-15

    2D-3D image-based registration methods have been developed to measure acetabular cup orientation after total hip arthroplasty (THA). These methods require registration of both the prosthesis and the CT images to 2D radiographs and compute the implant position with respect to a reference. The application of these methods is limited in clinical practice due to two limitations: (1) the requirement of a computer-aided design (CAD) model of the prosthesis, which may be unavailable due to the proprietary concerns of the manufacturer, and (2) the requirement of either multiple radiographs or radiograph-specific calibration, usually unavailable for retrospective studies. In this paper, we propose a new method to address these limitations. A new formulation for determination of post-operative cup orientation, which couples a radiographic measurement with 2D-3D image matching, was developed. In our formulation, the radiographic measurement can be obtained with known methods, so that the challenge lies in the 2D-3D image matching. To solve this problem, a hybrid 2D-3D registration scheme combining a landmark-to-ray 2D-3D alignment with a robust intensity-based 2D-3D registration was used. The hybrid 2D-3D registration scheme allows computing both the post-operative cup orientation with respect to an anatomical reference and the pelvic tilt and rotation with respect to the X-ray imaging table/plate. The method was validated using 2D adult cadaver hips. Using the hybrid 2D-3D registration scheme, our method showed a mean accuracy of 1.0° ± 0.7° (range from 0.1° to 2.0°) for inclination and 1.7° ± 1.2° (range from 0.0° to 3.9°) for anteversion, taking the measurements from post-operative CT images as ground truth. Our new solution formulation and the hybrid 2D-3D registration scheme facilitate estimation of the post-operative cup orientation and measurement of pelvic tilt and rotation. (orig.)
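
    The intensity-based half of such a hybrid scheme can be illustrated with a deliberately simplified sketch: a parallel-projection digitally reconstructed radiograph (DRR) is generated from the CT volume for a candidate pose, compared to the radiograph with normalized cross-correlation, and the pose is refined by a generic optimizer. Ray geometry, calibration, and the landmark-to-ray initialization from the paper are omitted; the function names and the three-angle pose parameterization are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import rotate
    from scipy.optimize import minimize

    def drr(ct_volume, angles_deg):
        """Parallel-projection DRR: rotate the CT volume and sum along one axis."""
        vol = ct_volume
        for axis_pair, angle in zip([(1, 2), (0, 2), (0, 1)], angles_deg):
            vol = rotate(vol, angle, axes=axis_pair, reshape=False, order=1)
        return vol.sum(axis=0)

    def ncc(a, b):
        """Normalized cross-correlation between two images of equal shape."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def register(ct_volume, radiograph, initial_angles=(0.0, 0.0, 0.0)):
        """Refine three rotation angles so the DRR best matches the radiograph.

        The radiograph is assumed to be resampled to the DRR's pixel grid.
        """
        cost = lambda p: -ncc(drr(ct_volume, p), radiograph)
        result = minimize(cost, np.asarray(initial_angles), method='Powell')
        return result.x   # estimated rotation (a crude pelvic tilt/rotation analogue)
    ```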

  11. 3D nonrigid medical image registration using a new information theoretic measure

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model, and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. Registration thus amounts to minimizing an objective function consisting of a dissimilarity term and a penalty term, which reaches its minimum when the two images are perfectly aligned; the limited-memory BFGS method is used for the optimization, yielding the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed using the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance, each 4D CT data set comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross-correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.
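
    For context, the Arimoto entropy of order alpha referenced above is commonly written as follows; a Jensen-type divergence can then be built from it in the same way the Jensen-Shannon divergence is built from the Shannon entropy. This is a standard textbook form given for orientation, not a formula taken from the paper itself.

    ```latex
    % Arimoto entropy of order alpha (alpha > 0, alpha != 1) for a discrete
    % distribution p; it converges to the Shannon entropy as alpha -> 1.
    \[
      H_\alpha(p) \;=\; \frac{\alpha}{1-\alpha}
      \left[ \Bigl( \sum_{i} p_i^{\alpha} \Bigr)^{1/\alpha} - 1 \right]
    \]
    % A Jensen-Arimoto divergence of the form
    %   JA_alpha(p, q) = H_alpha((p + q)/2) - (H_alpha(p) + H_alpha(q)) / 2
    % is the kind of dissimilarity term minimized during registration.
    ```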

  12. 3D nonrigid medical image registration using a new information theoretic measure

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which would be minimal when two deformed images are perfectly aligned using the limited memory BFGS optimization method, and thus to get the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measu