WorldWideScience

Sample records for 3d image based

  1. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
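
    As an aside, the geometry behind the stereo-vision route can be sketched in a few lines. The snippet below is a minimal illustration (not taken from the record above), assuming a rectified CCD camera pair with known focal length and baseline; it converts a disparity map into depth via Z = f·B/d.

```python
# Minimal sketch of stereo depth recovery from a disparity map, assuming a
# rectified camera pair with known focal length (pixels) and baseline (metres).
# The camera parameters and disparity values here are illustrative only.
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (metres): Z = f*B/d."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                      # zero disparity means no match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 2x2 disparity map from a hypothetical CCD stereo rig
disparity = np.array([[32.0, 16.0], [8.0, 0.0]])
print(disparity_to_depth(disparity, focal_px=800.0, baseline_m=0.12))
```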

  2. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  3. Image based 3D city modeling : Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; the third approach is close-range-photogrammetry-based modeling; and the fourth approach is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this kind is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparison is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques, along with practical remarks on what can and cannot be done with each software package. Finally, the study concludes that every software package has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For a 3D documentation record, Photomodeler gives good results. For Large city

  4. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence and resolving the motion parameters. Finally, experimental results of acquiring the motion parameters of objects moving with uniform velocity and uniform acceleration along a straight line, based on real binocular sequence images processed by the described method, are presented.
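
    The step that recovers a 3D feature point from matched image points in the calibrated pair can be illustrated with a generic linear (DLT) triangulation. The sketch below assumes 3x4 projection matrices P1 and P2 obtained from camera calibration; it is not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): linear (DLT)
# triangulation of a 3D feature point from matched image points in a
# calibrated binocular pair. P1 and P2 are assumed 3x4 projection matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover the 3D point seen at pixel x1 in camera 1 and x2 in camera 2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise

# Toy example: two cameras separated along the x axis, point at (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, x1=(0.0, 0.0), x2=(-0.2, 0.0)))
```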

  5. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method that uses multiple types of expert knowledge together for 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions and boundary regions. With multiple types of knowledge we refine the ROI as the intersection of all of the shapes expected from each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
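
    The split into positive, negative and boundary regions can be sketched directly. The snippet below is a minimal illustration assuming each type of expert knowledge is given as a binary mask over the volume; it is not the authors' code.

```python
# A minimal sketch, assuming each type of expert knowledge yields a binary
# mask over the 3D volume: the positive region is where every mask agrees,
# the negative region where none does, and the boundary region the rest.
import numpy as np

def rough_regions(masks):
    """masks: list of boolean 3D arrays, one per type of expert knowledge."""
    stack = np.stack(masks)
    positive = stack.all(axis=0)            # inside every expected shape
    negative = ~stack.any(axis=0)           # outside all expected shapes
    boundary = ~(positive | negative)       # uncertain voxels
    return positive, negative, boundary

# Toy 1x3x3 volume with two overlapping expert masks
a = np.zeros((1, 3, 3), bool); a[0, :, :2] = True
b = np.zeros((1, 3, 3), bool); b[0, :, 1:] = True
pos, neg, bnd = rough_regions([a, b])
print(pos[0], bnd[0], sep="\n")
```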

  6. Optical 3D watermark based digital image watermarking for telemedicine

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion. The proposed algorithm applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data is embedded into the host image. The watermark extraction process is the inverse of the embedding process. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data is badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding; our method thus overcomes the weakness of traditional watermarking methods, which have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  7. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    In the process of display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a general framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with recommendations for 3D medical image interpolation under different situations.
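
    For readers unfamiliar with cubic convolution, the kernel with its sharpness control parameter can be written compactly. The sketch below shows the standard 1D kernel (with an illustrative parameter a = -0.5) and a small interpolation helper; applying it separably along the three axes gives one member of the family of methods compared above.

```python
# Sketch of the 1D cubic convolution kernel with a sharpness control
# parameter a (a = -0.5 is a common choice); extending it separably to 3D
# gives one family of the interpolation methods discussed in the record.
import numpy as np

def cubic_kernel(s, a=-0.5):
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
    return out

def interp1d_cubic(samples, x, a=-0.5):
    """Interpolate uniformly spaced samples at a fractional position x."""
    i = int(np.floor(x))
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)
    w = cubic_kernel(x - np.arange(i - 1, i + 3), a)
    return float(np.dot(samples[idx], w))

print(interp1d_cubic(np.array([0.0, 1.0, 4.0, 9.0, 16.0]), 2.5))  # 6.25
```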

  8. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
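
    The mean-subtraction idea can be expressed in a few lines. The snippet below is a hedged sketch on a toy array indexed as [spectral plane, y, x], assuming the spatially low-pass subband has already been produced by the 3D wavelet decomposition; the per-plane means would be carried as side information for the decoder.

```python
# Hedged sketch of the "mean subtraction" idea described above: for a
# spatially low-pass subband of a 3D wavelet decomposition (here a toy array
# indexed as [spectral plane, y, x]), subtract the mean of each spatial plane
# before encoding and keep the means as side information.
import numpy as np

def subtract_plane_means(subband):
    """Return zero-mean spatial planes and the per-plane means."""
    means = subband.mean(axis=(1, 2), keepdims=True)   # one mean per spectral plane
    return subband - means, means.ravel()

rng = np.random.default_rng(0)
subband = rng.normal(loc=50.0, scale=2.0, size=(4, 8, 8))   # far-from-zero means
zero_mean, means = subtract_plane_means(subband)
print(means.round(2), zero_mean.mean(axis=(1, 2)).round(6), sep="\n")
```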

  9. Ultra-realistic 3-D imaging based on colour holography

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly Denisyuk holograms, and digitally printed colour holograms are described, along with their recent improvements. Panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, are covered as an alternative to silver-halide materials. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend on the correct recording technique using the optimal recording laser wavelengths, combined with the availability of improved panchromatic recording materials and new display light sources.

  10. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using the afterimage effect, internal structure can be displayed by exchanging intersection images containing the internal structure. Through experiments with CT scan images, the proposed met...

  11. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  12. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using the afterimage effect, internal structure can be displayed by exchanging intersection images containing the internal structure. Through experiments with CT scan images, the proposed method is validated. Also, one of the other applicable areas of the proposed method, the design of 3D patterns of Large Scale Integrated circuits (LSI), is introduced. Layered patterns of an LSI can be displayed and switched using the eyes only. It is confirmed that the time required for displaying a layer pattern and switching to another layer using the eyes only is much faster than that using hands and fingers.

  13. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform the generic head mesh model so that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with the synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a collection of individual computer vision algorithms. The experimental results show a highly automated modeling process, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2∼3 minutes.

  14. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  15. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Roč. 32, č. 6 (2008), s. 513-520. ISSN 0895-6111 R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * Gaussian mixture model * 3D image analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.192, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  16. Four-view stereoscopic imaging and display system for web-based 3D image communication

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented by using 4 digital cameras, an Intel Xeon server computer system, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth and number of views. Experimental results show that the proposed system can display 4-view VGA images with a full color depth of 16 bits and a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can be easily controlled in the proposed system by the developed software program, so a lot of flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical applications of web-based 3D image communication.

  17. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    S. P. Singh; K. Jain; V. R. Mandla

    2014-01-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is Procedural grammar based m...

  18. 3D Image Sensor based on Parallax Motion

    Barna Reskó

    2007-12-01

    For humans and visual animals, vision is the primary and most sophisticated perceptual modality for getting information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  19. 3D Wavelet-based Fusion Techniques for Biomedical Imaging

    Rubio Guivernau, José Luis

    2012-01-01

    Nowadays, three-dimensional image acquisition techniques are common in many areas, but their relevance in the field of biomedical imaging stands out; there we find a wide range of techniques such as confocal microscopy, two-photon microscopy, light-sheet fluorescence microscopy, nuclear magnetic resonance, positron emission tomography, optical coherence tomography, 3D ultrasound and many more. A common denom...

  20. Heterodyne 3D ghost imaging

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on the pulse flight time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Both the temporal correlation and the spatial correlation of light are utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.

  1. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  2. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; and the third approach is close-range-photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required set of suitable video frames was selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created, which can be converted into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many country

  3. Study of bone implants based on 3D images

    Grau, S; Ayala Vallespí, M. Dolors; Tost Pardell, Daniela; Miño, N.; Muñoz, F.; González, A

    2005-01-01

    New medical input technologies together with computer graphics modelling and visualization software have opened a new track for biomedical sciences: the so-called in silico experimentation, in which analysis and measurements are done on computer graphics models constructed on the basis of medical images, complementing the traditional in vivo and in vitro experimental methods. In this paper, we describe an in silico experiment to evaluate bio-implants f...

  4. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  5. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    F. Alidoost

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides enough detail of the building based on visual assessment.
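
    The Delaunay 2.5D meshing step mentioned above can be sketched with standard tools. The snippet below is an illustrative example using scipy, assuming the dense point cloud is available as an (N, 3) array; it is not the authors' processing chain.

```python
# Minimal sketch of a Delaunay 2.5D meshing step, assuming a dense point
# cloud (N x 3 array) already produced by the image-matching stage: triangles
# are built in the XY plane and the Z coordinate is carried along as height.
import numpy as np
from scipy.spatial import Delaunay

def mesh_2_5d(points_xyz):
    """Return triangle vertex indices for a 2.5D Delaunay mesh."""
    tri = Delaunay(points_xyz[:, :2])    # triangulate on planimetric coordinates
    return tri.simplices                 # (M, 3) indices into points_xyz

# Toy facade-like point cloud
pts = np.array([[0, 0, 1.0], [1, 0, 1.1], [0, 1, 0.9], [1, 1, 1.0], [0.5, 0.5, 1.2]])
print(mesh_2_5d(pts))
```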

  6. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  7. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by LC-...

  8. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    X. W. Li

    2013-08-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA. Using the computational integral imaging reconstruction (CIIR) technique, a 3-D object is subsequently reconstructed on the output plane from the recovered 2-D EIA. Because the 2-D EIA is composed of a number of elemental images, each having its own perspective of the 3-D image, the 3-D image can be successfully reconstructed from only partial data even if the encrypted image is seriously harmed. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the experimental results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.

  9. SEGMENTATION OF UAV-BASED IMAGES INCORPORATING 3D POINT CLOUD INFORMATION

    A. Vetrivel

    2015-03-01

    Numerous applications related to urban scene analysis demand automatic recognition of buildings and their distinct sub-elements. For example, if LiDAR data is available, only 3D information could be leveraged for the segmentation. However, this poses several risks; for instance, in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image-based segmentation is performed, the geometric features (e.g., normal orientation, planarity) are not readily available. This renders the task of detecting the distinct sub-elements of a building with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used for inferring the geometric characteristics of roofs and facades of the building. However, the image-based 3D points are noisy, error prone and often contain gaps, so segmentation in 3D space is not appropriate. Therefore, we propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using a region growing approach that utilizes various radiometric and 3D point cloud features. The developed method was tested using two data sets obtained with UAV images with a ground resolution of around 1-2 cm. The developed method accurately segmented most of the building elements when compared to plane-based segmentation using the 3D point cloud alone.

  10. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  11. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    J. Jung

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  12. Midsagittal plane extraction from brain images based on 3D SIFT

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°. (paper)

  13. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128-pixel-resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
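
    The pattern-projection principle behind single-pixel imaging can be illustrated without the time-of-flight electronics. The sketch below reconstructs a toy 2D scene from a full set of Hadamard measurements; the depth-resolving photodiode signal of the actual system is not modelled.

```python
# Illustrative sketch of single-pixel imaging with Hadamard patterns: each
# measurement is the inner product of the scene with one pattern, and the
# image is recovered by the inverse Hadamard transform of the measurements.
import numpy as np
from scipy.linalg import hadamard

n = 16                                   # image is n x n; n*n must be a power of two
H = hadamard(n * n)                      # one pattern per row, entries +/-1
scene = np.zeros((n, n)); scene[4:10, 6:12] = 1.0

measurements = H @ scene.ravel()         # bucket-detector signal for each pattern
recovered = (H.T @ measurements) / (n * n)
print(np.allclose(recovered.reshape(n, n), scene))   # True
```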

  14. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  15. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    The accurate 3D documentation of architecture and heritage is becoming very common and required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, that could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  16. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasonic image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which can extract a connected skeleton to express the shape feature. Feature points of the connected skeleton are extracted automatically by computing local curvature extreme points several times. Initial registration is performed according to the barycenter of the skeleton. Afterwards, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. Experimental results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural differences in shape between different parts of an organ, while simultaneously eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.
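
    The radial-basis-function warping step can be sketched once matched skeleton feature points are available. The snippet below is a rough illustration using Gaussian RBFs and toy control points; a real elastic registration would add an affine term and regularisation, which are omitted here.

```python
# A rough sketch of the radial-basis-function step, assuming matched skeleton
# feature points (src -> dst) are already available; not the authors' code.
import numpy as np

def fit_rbf_warp(src, dst, sigma=10.0):
    """Fit per-dimension Gaussian RBF weights so that warp(src) ~= dst."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    weights = np.linalg.solve(K + 1e-6 * np.eye(len(src)), dst - src)

    def warp(pts):
        d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return pts + np.exp(-d2 / (2 * sigma**2)) @ weights
    return warp

src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([[1.0, 0.5], [0.0, 0.0], [0.5, -0.5], [0.0, 0.0]])
print(fit_rbf_warp(src, dst)(src).round(2))   # close to dst at the control points
```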

  17. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    2001-01-01

    A mutual information based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR body abdomen images. The Parzen Window Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MRI abdomen images. By maximizing the MI between the CT and MR volume images, their overlapping part becomes largest, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdomen CT and MRI data are used as experimental data sets. The experimental results indicate that this approach to non-rigid 3D registration of CT/MR body abdominal images can be applied effectively and automatically, without any prior processing procedures such as segmentation and feature extraction, but has the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
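
    The similarity measure itself is easy to prototype. The sketch below estimates mutual information from a joint histogram (a simpler stand-in for the Parzen window density estimator described above), assuming the two volumes have already been resampled to the same grid.

```python
# Sketch of the similarity measure only, using a joint-histogram estimate of
# mutual information rather than the Parzen window estimator of the record;
# the two images are assumed to be overlapping arrays of equal shape.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(1)
ct = rng.integers(0, 256, (64, 64, 8)).astype(float)
mr = 255 - ct + rng.normal(0, 5, ct.shape)        # correlated "other modality"
print(mutual_information(ct, mr), mutual_information(ct, rng.permutation(mr.ravel())))
```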

  18. A web-based solution for 3D medical image visualization

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive large medical image data processing and visualization over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to traditional local visualization solution, our solution doesn't require the users to install extra software or download the whole volume dataset from PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  19. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time, but had more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators and with large lesions and lesions near critical structures. PMID:27126243

  20. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of the 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate the 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) and its 95th percentile in two datasets were 0.95 (2.2) mm. For digital phantoms, assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time relative to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were thus shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time, so generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
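
    The PCA motion-model construction can be sketched compactly. The snippet below assumes the deformable registrations have already produced one flattened DVF per 4DCBCT phase; the projection-based optimization of the coefficients is not shown.

```python
# A minimal sketch, assuming one displacement vector field (DVF) per 4DCBCT
# phase is already available; PCA on the flattened DVFs yields a
# low-dimensional motion model whose coefficients would later be optimised
# against the measured 2D projections (not shown).
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """dvfs: array of shape (n_phases, n_voxels*3). Returns mean and modes."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]        # principal motion modes

def synthesize_dvf(mean, modes, coeffs):
    return mean + np.asarray(coeffs) @ modes

rng = np.random.default_rng(2)
phases = rng.normal(size=(10, 3 * 1000))          # 10 phases of a toy 3D DVF
mean, modes = build_pca_motion_model(phases)
print(synthesize_dvf(mean, modes, [1.5, -0.3]).shape)
```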

  1. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to first be vectorized into a vector pattern by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce the computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff Distance of 20.4064 ± 4.3855, respectively. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM. PMID:27277277
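
    For orientation, a plain higher-order SVD of a 3D volume can be written with basic linear algebra. The sketch below shows mode-wise unfolding and core computation on a toy volume; it illustrates the tensor-space representation only and omits the AAM fitting itself.

```python
# Hedged sketch of a higher-order SVD (HOSVD) of a 3D volume, to show how
# tensor patterns can be processed without vectorisation; truncating the
# factor ranks gives a compact tensor-space representation.
import numpy as np

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        # mode-n product of the core with the transposed factor matrix
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

vol = np.random.default_rng(3).normal(size=(16, 16, 8))
core, factors = hosvd(vol, ranks=(8, 8, 4))
print(core.shape, [f.shape for f in factors])
```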

  2. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  3. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Liu Li

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, there is relatively little work on 3D ultrasound speckle suppression, where the whole volume data rather than just one frame needs to be considered; the most crucial problem with 3D US denoising is then that the computational complexity increases tremendously. The nonlocal means (NLM) algorithm provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, is used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.

  4. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  5. 3D structural analysis of proteins using electrostatic surfaces based on image segmentation

    Vlachakis, Dimitrios; Champeris Tsaniras, Spyridon; Tsiliki, Georgia; Megalooikonomou, Vasileios; Kossida, Sophia

    2016-01-01

    Herein, we present a novel strategy to analyse and characterize proteins using protein molecular electrostatic surfaces. Our approach starts by calculating a series of distinct molecular surfaces for each protein that are subsequently flattened out, thus reducing 3D information noise. RGB images are appropriately scaled by means of standard image processing techniques whilst retaining the weight information of each protein's molecular electrostatic surface. Then homogeneous areas on the protein surface are estimated based on unsupervised clustering of the 3D images, while performing similarity searches. This is a computationally fast approach, which efficiently highlights interesting structural areas among a group of proteins. Multiple protein electrostatic surfaces can be combined together and, in conjunction with their processed images, they can provide the starting material for protein structural similarity and molecular docking experiments.

  6. Superimposing of virtual graphics and real image based on 3D CAD information

    2000-01-01

    This paper proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto a real image taken by a CCD camera, and presents computer simulation results.

  7. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.

  8. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction and adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM selected facial features. Finally, geometry reconstruction is achieved by linear weighted combination of adaptively selected samples. Radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) Only a single frontal face image is needed for highly automatic face reconstruction; (2) Compared with former works, our reconstruction approach provides higher accuracy; (3) Constraint based RBF texture mapping provides natural appearance for reconstructed face.

  9. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority
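
    The top-view depth-image encoding step can be sketched simply. The snippet below assumes a roof point cloud as an (N, 3) array in metres and keeps the highest return per grid cell; the subsequent edge/plane feature extraction and histogram descriptors are not shown.

```python
# Sketch of a top-view depth-image encoding step, assuming an airborne LiDAR
# roof point cloud as an (N, 3) array in metres; each grid cell keeps the
# highest point, giving the depth image from which features are later derived.
import numpy as np

def topview_depth_image(points, cell=0.5):
    xy = points[:, :2]
    mins = xy.min(axis=0)
    cols, rows = ((xy - mins) / cell).astype(int).T
    image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(image[r, c]) or z > image[r, c]:
            image[r, c] = z                      # keep the highest return per cell
    return image

pts = np.array([[0.1, 0.1, 12.0], [0.3, 0.2, 12.5], [1.2, 0.1, 9.0], [0.2, 1.4, 10.0]])
print(topview_depth_image(pts))
```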

  10. Model based 3D segmentation and OCT image undistortion of percutaneous implants.

    Müller, Oliver; Donner, Sabine; Klinder, Tobias; Dragon, Ralf; Bartsch, Ivonne; Witte, Frank; Krüger, Alexander; Heisterkamp, Alexander; Rosenhahn, Bodo

    2011-01-01

    Optical Coherence Tomography (OCT) is a noninvasive imaging technique which is used here for in vivo biocompatibility studies of percutaneous implants. A prerequisite for a morphometric analysis of the OCT images is the correction of optical distortions caused by the index of refraction in the tissue. We propose a fully automatic approach for 3D segmentation of percutaneous implants using Markov random fields. Refraction correction is done by using the subcutaneous implant base as a prior for model-based estimation of the refractive index using a generalized Hough transform. Experiments show that our algorithm is competitive with manual segmentations done by experts. PMID:22003731

  11. Modifications in SIFT-based 3D reconstruction from image sequence

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved over the years and is widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT feature extraction and matching, we use it to find correspondences between images. Hence, we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify the procedure for finding correct correspondences and obtain a satisfying matching result: rejecting the "questioned" points before the initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and variable changes in the environment, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation of some approaches, that all reprojected points must be visible at all times, also does not apply in our setting. Small inaccuracies can make a big difference when the number of images increases. The paper contrasts the modified algorithm with the unmodified one. Moreover, we present an approach to evaluate the reconstruction by comparing the reconstructed angles and length ratios with the actual values of a calibration target placed in the scene. The proposed evaluation method is easy to carry out and widely applicable; even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own shots.
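
    A hedged sketch of the correspondence step with OpenCV's SIFT, Lowe's ratio test, and a simple rejection of duplicated target points, in the spirit of filtering "questioned" matches; it is not the authors' exact modification. The synthetic images and thresholds are illustrative, and SIFT in the main cv2 module assumes OpenCV 4.4 or newer.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
img1 = np.zeros((480, 640), np.uint8)
for _ in range(60):                                   # synthetic blobs stand in for a real view
    center = (int(rng.integers(20, 620)), int(rng.integers(20, 460)))
    cv2.circle(img1, center, int(rng.integers(5, 30)), int(rng.integers(60, 255)), -1)
img2 = np.roll(img1, (15, 30), axis=(0, 1))           # shifted copy as the second "view"

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good, used_train = [], set()
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) < 2:
        continue
    m, n = pair
    # Lowe's ratio test plus rejection of matches that reuse a target keypoint.
    if m.distance < 0.75 * n.distance and m.trainIdx not in used_train:
        good.append(m)
        used_train.add(m.trainIdx)

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
print(len(good), "correspondences retained")
```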

  12. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for the head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on the features of the similarity surface near the point-of-solution. The supervised training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimization attempts that differed only by the initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error threshold to determine whether that solution was successful or not. The training data set was then used to train the RQE in a supervised manner. The performance of the RQE was evaluated using the test data set, which consisted of registration results that were not used in training. Results: The RQE constructed using the mutual information had very good performance when tested using the test data sets, yielding a sensitivity, specificity, positive predictive value, and negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in the overall processing time. Conclusions: The authors' patient study has demonstrated very good performance of the proposed RQE when used with the mutual information in identifying unsuccessful 3D/3D registrations for daily patient setup. The classifier had
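
    A hedged sketch of the classifier idea: a small feed-forward network labels registrations as successful or not, and sensitivity/specificity are computed on held-out data. The six-dimensional feature vectors and the distance-threshold labelling below are synthetic stand-ins for the similarity-surface features and the normalized parametric distance used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
features = rng.standard_normal((n, 6))            # stand-in similarity-surface features
param_distance = np.abs(features[:, 0]) + 0.3 * rng.random(n)
labels = (param_distance < 0.8).astype(int)       # 1 = "successful" (within error threshold)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
rqe = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
rqe.fit(X_tr, y_tr)

pred = rqe.predict(X_te)
tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```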

  13. FUSION OF AIRBORNE AND TERRESTRIAL IMAGE-BASED 3D MODELLING FOR ROAD INFRASTRUCTURE MANAGEMENT – VISION AND FIRST EXPERIMENTS

    S. Nebiker; S. Cavegn; Eugster, H.; Laemmer, K.; J. Markram; Wagner, R.

    2012-01-01

    In this paper we present the vision and proof of concept of a seamless image-based 3d modelling approach fusing airborne and mobile terrestrial imagery. The proposed fusion relies on dense stereo matching for extracting 3d point clouds which – in combination with the original airborne and terrestrial stereo imagery – create a rich 3d geoinformation and 3d measuring space. For the seamless exploitation of this space we propose using a new virtual globe technology integrating the ai...

  14. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
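
    A hedged sketch of the motion-model idea: a PCA basis of 3D deformation vector fields (DVFs) is learned from respiratory-sorted phases, the basis weights are fitted by least squares to a partial single-slice observation, and the full-field DVF is reconstructed. Array sizes, the number of components, and the observed-voxel selection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 4000                     # 4D-MRI phases, flattened DVF length
dvfs = rng.standard_normal((n_phases, n_voxels))  # pre-beam DVFs (phase x flattened field)

mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
basis = Vt[:3]                                    # first principal components (3 x n_voxels)

slice_idx = np.arange(0, 400)                     # voxels covered by the fast 2D cine slice
observed = rng.standard_normal(len(slice_idx))    # stand-in motion measured on that slice

# Least-squares fit of the PCA weights to the partial observation,
# then reconstruction of the full-field 3D DVF.
A = basis[:, slice_idx].T
w, *_ = np.linalg.lstsq(A, observed - mean[slice_idx], rcond=None)
full_dvf = mean + w @ basis
print(full_dvf.shape)
```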

  15. 3D Chaotic Functions for Image Encryption

    Pawan N. Khade

    2012-05-01

    This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold's cat maps for color image encryption. Here the 2D Arnold's cat map is used for image pixel scrambling and the 3D Arnold's cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security through the shuffling and substitution applied to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
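
    A hedged sketch of two of the ingredients named above: the 2D Arnold's cat map for pixel scrambling and a logistic map as a chaotic key stream for substitution, applied to a single-channel toy image. The parameters are illustrative; the paper's 3D maps, Chebyshev key generation, and public-key distribution are not reproduced.

```python
import numpy as np

def arnold_cat(img, iterations=5):
    """Scramble a square image with the invertible map (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def logistic_keystream(length, x0=0.37, r=3.99):
    """Chaotic logistic-map sequence quantized to bytes."""
    xs, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)
        xs.append(int(x * 256) % 256)
    return np.array(xs, dtype=np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)
scrambled = arnold_cat(image)                                   # position scrambling
cipher = scrambled ^ logistic_keystream(scrambled.size).reshape(scrambled.shape)  # value substitution
print(cipher.dtype, cipher.shape)
```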

  16. Quantitative analysis of the central-chest lymph nodes based on 3D MDCT image data

    Lu, Kongkuo; Bascom, Rebecca; Mahraj, Rickhesvar P. M.; Higgins, William E.

    2009-02-01

    Lung cancer is the leading cause of cancer death in the United States. In lung-cancer staging, central-chest lymph nodes and associated nodal stations, as observed in three-dimensional (3D) multidetector CT (MDCT) scans, play a vital role. However, little work has been done in relation to lymph nodes, based on MDCT data, due to the complicated phenomena that give rise to them. Using our custom computer-based system for 3D MDCT-based pulmonary lymph-node analysis, we conduct a detailed study of lymph nodes as depicted in 3D MDCT scans. In this work, the Mountain lymph-node stations are automatically defined by the system. These defined stations, in conjunction with our system's image processing and visualization tools, facilitate lymph-node detection, classification, and segmentation. An expert pulmonologist, chest radiologist, and trained technician verified the accuracy of the automatically defined stations and indicated observable lymph nodes. Next, using semi-automatic tools in our system, we defined all indicated nodes. Finally, we performed a global quantitative analysis of the characteristics of the observed nodes and stations. This study drew upon a database of 32 human MDCT chest scans. 320 Mountain-based stations (10 per scan) and 852 pulmonary lymph nodes were defined overall from this database. Based on the numerical results, over 90% of the automatically defined stations were deemed accurate. This paper also presents a detailed summary of central-chest lymph-node characteristics for the first time.

  17. Image-based Virtual Exhibit and Its Extension to 3D

    Ming-Min Zhang; Zhi-Geng Pan; Li-Feng Ren; Peng Wang

    2007-01-01

    In this paper we introduce an image-based virtual exhibition system, especially for clothing products. It provides a powerful material substitution function, which is very useful for clothing customization. A novel color substitution algorithm and two texture morphing methods are designed to ensure realistic substitution results. To extend the system to 3D, we need to perform model reconstruction based on photos. Thus we present an improved method for modeling the human body. It deforms a generic model with shape details extracted from pictures to generate a new model. Our method begins with model image generation, followed by silhouette extraction and segmentation. Then it builds a mapping between pixels inside every pair of silhouette segments in the model image and in the picture. Our mapping algorithm is based on a slice space representation that conforms to the natural features of the human body.

  18. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    HU Hai-bo

    2001-01-01


  19. Combination of intensity-based image registration with 3D simulation in radiation therapy

    Li, Pan; Malsch, Urban; Bendl, Rolf

    2008-09-01

    Modern techniques of radiotherapy like intensity modulated radiation therapy (IMRT) make it possible to deliver a high dose to tumors of different irregular shapes while at the same time sparing surrounding healthy tissue. However, internal tumor motion makes precise calculation of the delivered dose distribution challenging. This makes analysis of tumor motion necessary. One way to describe target motion is using image registration. Many registration methods have been developed previously; however, most of them belong either to geometric approaches or to intensity approaches. Methods which take account of anatomical information as well as the results of intensity matching can greatly improve the results of image registration. Based on this idea, a combined method of image registration followed by 3D modeling and simulation was introduced in this project. Experiments were carried out on five patients' 4DCT lung datasets. In the 3D simulation, models obtained from images at end-exhalation were deformed to the state of end-inhalation. Diaphragm motions were around -25 mm in the cranial-caudal (CC) direction. To verify the quality of our new method, displacements of landmarks were calculated and compared with measurements in the CT images. An improvement of accuracy after simulation was shown compared to the results obtained by intensity-based image registration alone. The average improvement was 0.97 mm. The average Euclidean error of the combined method was around 3.77 mm. Unrealistic motions, such as curl-shaped deformations, in the results of image registration were corrected. The combined method required less than 30 min. Our method provides information about the deformation of the target volume, which we need for dose optimization and target definition in our planning system.

  20. 3D nanostructure reconstruction based on the SEM imaging principle, and applications

    This paper addresses a novel 3D reconstruction method for nanostructures based on the scanning electron microscopy (SEM) imaging principle. In this method, the shape from shading (SFS) technique is employed, to analyze the gray-scale information of a single top-view SEM image which contains all the visible surface information, and finally to reconstruct the 3D surface morphology. It offers not only unobstructed observation from various angles but also the exact physical dimensions of nanostructures. A convenient and commercially available tool (NanoViewer) is developed based on this method for nanostructure analysis and characterization of properties. The reconstruction result coincides well with the SEM nanostructure image and is verified in different ways. With the extracted structure information, subsequent research of the nanostructure can be carried out, such as roughness analysis, optimizing properties by structure improvement and performance simulation with a reconstruction model. Efficient, practical and non-destructive, the method will become a powerful tool for nanostructure surface observation and characterization. (paper)

  1. 3D Reconstruction of NMR Images

    Peter Izak; Milan Smetana; Libor Hargas; Miroslav Hrianka; Pavol Spanik

    2007-01-01

    This paper introduces an experiment on the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method implemented in the Vision Assistant program, which is part of LabVIEW, was chosen.
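
    A hedged sketch of the core idea, marching cubes on a volumetric stack, using scikit-image instead of LabVIEW's Vision Assistant; the spherical test volume stands in for an NMR/MRI stack.

```python
import numpy as np
from skimage import measure

z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (np.sqrt(x**2 + y**2 + z**2) < 0.6).astype(np.float32)   # toy "anatomy" volume

# Extract the iso-surface at level 0.5; verts are 3D points, faces index into them.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)
```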

  2. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
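
    A hedged sketch of strategy (iv): a minimal point-to-point ICP loop that aligns two 3D point clouds with a nearest-neighbour search and an SVD-based rigid update. The synthetic clouds stand in for the RGB- and TIR-derived point clouds; the plane-based ICP variant is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                    # nearest neighbours in dst
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
dst = rng.random((500, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
src = (dst - 0.5) @ Rz.T + 0.5 + 0.02               # rotated and shifted copy of dst
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```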

  3. Web-based interactive 2D/3D medical image processing and visualization software.

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  4. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao; Jing-jing Hu; Peng Zhang

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed...

  5. 3D Imager and Method for 3D imaging

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the reference clock.

  6. Improving low-dose cardiac CT images using 3D sparse representation based processing

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery diseases due to the continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high pitch scans using dual source CT scanners and step-and-shoot scanning mode for both single source and dual source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images, thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance of a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, in this paper, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.
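
    A hedged sketch of sparse-representation denoising on 3D patches: a small dictionary is learned from noisy patches and each patch is re-expressed with a few atoms via OMP. Patch size, dictionary size, sparsity, and the non-overlapping reassembly are simplifying assumptions; the paper's spatio-temporal dictionary design is not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
volume = np.zeros((32, 32, 32))
volume[10:22, 10:22, 10:22] = 1.0
noisy = volume + 0.2 * rng.standard_normal(volume.shape)   # toy low-dose-like volume

p = 4                                                      # 4 x 4 x 4 patches
patches = []
for i in range(0, 32 - p + 1, p):
    for j in range(0, 32 - p + 1, p):
        for k in range(0, 32 - p + 1, p):
            patches.append(noisy[i:i+p, j:j+p, k:k+p].ravel())
patches = np.array(patches)
means = patches.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit(patches - means).transform(patches - means)
denoised_patches = codes @ dico.components_ + means        # sparse reconstruction per patch

# Reassemble the non-overlapping patches into a denoised volume.
denoised = np.zeros_like(noisy)
idx = 0
for i in range(0, 32 - p + 1, p):
    for j in range(0, 32 - p + 1, p):
        for k in range(0, 32 - p + 1, p):
            denoised[i:i+p, j:j+p, k:k+p] = denoised_patches[idx].reshape(p, p, p)
            idx += 1
print("mean abs error before/after:", np.abs(noisy - volume).mean(), np.abs(denoised - volume).mean())
```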

  7. Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes

    Kohei Arai

    2016-10-01

    A method for 3D image representation that reduces the number of frames based on characteristics of human eyes is proposed, together with a representation of 3D depth obtained by changing pixel transparency. Through experiments, it is found that the proposed method allows the number of frames to be reduced to 1/6 of the original. It can also represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.

  8. 3D ultrafast ultrasound imaging in vivo

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  9. Micro-lens array based 3-D color image encryption using the combination of gravity model and Arnold transform

    You, Suping; Lu, Yucheng; Zhang, Wei; Yang, Bo; Peng, Runling; Zhuang, Songlin

    2015-11-01

    This paper proposes a 3-D image encryption scheme based on a micro-lens array. The 3-D image can be reconstructed by applying the digital refocusing algorithm to the picked-up light field. To improve the security of the cryptosystem, the Arnold transform and the Gravity Model based image encryption method are employed. Experiment results demonstrate the high security in key space of the proposed encryption scheme. The results also indicate that the employment of light field imaging significantly strengthens the robustness of the cipher image against some conventional image processing attacks.

  10. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed: one uses the Mean and Maximum curvature pair and the other the Gaussian and Mean curvature pair. The results are compared to determine which pair yields the better recognition rate. This automated 3D face recognition system is evaluated in different scenarios, such as frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
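
    A hedged sketch of the feature pipeline: Gaussian and mean curvature maps are computed from a range (depth) image using the standard Monge-patch formulas, and the ranked singular values of each map serve as a compact feature vector. The synthetic hemisphere stands in for a FRAV3D range image; the maximum/minimum curvature maps and the classifier are omitted.

```python
import numpy as np

def curvature_maps(z):
    """Gaussian (K) and mean (H) curvature of the graph surface z = f(x, y)."""
    zy, zx = np.gradient(z)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return K, H

def svd_feature(curv_map, n=20):
    """Ranked non-negative singular values of a curvature map as a descriptor."""
    s = np.linalg.svd(curv_map, compute_uv=False)
    return s[:n]

y, x = np.mgrid[-1:1:128j, -1:1:128j]
depth = np.sqrt(np.clip(1.0 - x**2 - y**2, 0, None))   # hemispherical toy range image
K, H = curvature_maps(depth)
features = np.concatenate([svd_feature(K), svd_feature(H)])
print(features.shape)
```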

  11. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. (paper)

  12. EDGE BASED 3D INDOOR CORRIDOR MODELING USING A SINGLE IMAGE

    A. Baligh Jahromi

    2015-08-01

    Reconstruction of the spatial layout of indoor scenes from a single image is inherently an ambiguous problem. However, indoor scenes are usually comprised of orthogonal planes. The regularity of the planar configuration (scene layout) is often recognizable, which provides valuable information for understanding indoor scenes. Most current methods define the scene layout as a single cubic primitive. This domain-specific knowledge is often not valid in many indoor environments where multiple corridors are linked to each other. In this paper, we aim to address this problem by hypothesizing and verifying multiple cubic primitives representing the indoor scene layout. The method utilizes middle-level perceptual organization, and relies on finding the ground-wall and ceiling-wall boundaries using detected line segments and the orthogonal vanishing points. A comprehensive interpretation of these edge relations is often hindered by shadows and occlusions. To handle this problem, the proposed method introduces virtual rays which aid in the creation of a physically valid cubic structure by using orthogonal vanishing points. The straight line segments are extracted from the single image and the orthogonal vanishing points are estimated by employing the RANSAC approach. Many scene layout hypotheses are created by intersecting random line segments and virtual rays of vanishing points. The created hypotheses are evaluated by a geometric reasoning-based objective function to find the hypothesis that best fits the image. The hypothesis with the highest score is then converted to a 3D model. The proposed method is fully automatic and no human intervention is necessary to obtain an approximate 3D reconstruction.
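
    A hedged sketch of one step named above: RANSAC estimation of a vanishing point from line segments, where each hypothesis is the intersection of two segment lines in homogeneous coordinates and inliers are segments whose direction points towards the hypothesis. The synthetic segments converging towards (500, 300) and the thresholds are illustrative assumptions.

```python
import numpy as np

def to_line(p1, p2):
    """Homogeneous line through two 2D points."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def segment_vp_angle(vp, p1, p2):
    """Angle between a segment's direction and the direction to the VP from its midpoint."""
    mid = (np.asarray(p1) + np.asarray(p2)) / 2.0
    d_seg = np.asarray(p2) - np.asarray(p1)
    d_vp = vp - mid
    cosang = abs(d_seg @ d_vp) / (np.linalg.norm(d_seg) * np.linalg.norm(d_vp) + 1e-12)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def ransac_vp(segments, iters=500, thresh_deg=2.0, rng=np.random.default_rng(0)):
    best_vp, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(segments), size=2, replace=False)
        vp_h = np.cross(to_line(*segments[i]), to_line(*segments[j]))
        if abs(vp_h[2]) < 1e-9:
            continue                                  # nearly parallel pair, VP at infinity
        vp = vp_h[:2] / vp_h[2]
        inliers = sum(segment_vp_angle(vp, p1, p2) < thresh_deg for p1, p2 in segments)
        if inliers > best_inliers:
            best_vp, best_inliers = vp, inliers
    return best_vp, best_inliers

rng = np.random.default_rng(1)
vp_true = np.array([500.0, 300.0])
segments = []
for _ in range(40):                                   # synthetic segments converging on vp_true
    start = rng.random(2) * 200
    direction = (vp_true - start) / np.linalg.norm(vp_true - start)
    segments.append((start, start + direction * 80 + rng.normal(0, 0.5, 2)))
print(ransac_vp(segments))
```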

  13. Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences

    Robert Niese

    2010-10-01

    In modern human computer interaction systems, emotion recognition from video is becoming an imperative feature. In this work we propose a new method for automatic recognition of facial expressions related to categories of basic emotions from image data. Our method incorporates a series of image processing, low level 3D computer vision and pattern recognition techniques. For image feature extraction, color and gradient information is used. Further, in terms of 3D processing, camera models are applied along with an initial registration step, in which person specific face models are automatically built from stereo. Based on these face models, geometric feature measures are computed and normalized using photogrammetric techniques. For recognition this normalization leads to minimal mixing between different emotion classes, which are determined with an artificial neural network classifier. Our framework achieves robust and superior classification results, also across a variety of head poses with resulting perspective foreshortening and changing face size. Results are presented for domestic and publicly available databases.

  14. FEA Based on 3D Micro-CT Images of Mesoporous Engineered Hydrogels

    L. Siad

    2015-12-01

    The objective of this computational study was to propose a rapid procedure for estimating the elastic moduli of the solid phases of porous natural-polymeric biomaterials used for bone tissue engineering. This procedure was based on the comparison of experimental results to finite element (FE) responses of parallelepiped so-called representative volume elements (REVs) of the material at hand. To address this issue, a series of quasi-static unconfined compression tests was designed and performed on three prepared cylindrical biopolymer samples. Subsequently, a computed tomography scan was performed on the fabricated specimens and two 3D images were reconstructed. Various parallelepiped REVs of different sizes and located at distinct places within both constructs were isolated and then analyzed under unconfined compressive loads using FE modelling. In this preliminary study, for the sake of simplicity, the dried biopolymer solid is assumed to be linear elastic.

  15. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Gangyi Jiang

    2014-04-01

    Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weight the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study, which defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first used to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, the concept of an absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structural distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by a weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity are more than 0.92 under five types of distortion, namely Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  16. Automated 3D-Objectdocumentation on the Base of an Image Set

    Sebastian Vetter

    2011-12-01

    Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points in the image sets are used to associate pairs of images into stereo models and to group them. The identified identical points in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relations between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo model can be determined automatically. With the help of 3D reference points or distances on the object, or a defined length of the camera base, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on the image quality. With the integration of the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The

  17. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  18. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  19. Highly-Automatic MI Based Multiple 2D/3D Image Registration Using Self-initialized Geodesic Feature Correspondences

    Zheng, Hongwei; Cleju, Ioan; Saupe, Dietmar

    2010-01-01

    Intensity based registration methods, such as mutual information (MI), do not commonly consider spatial geometric information, and the initial correspondences are uncertain. In this paper, we present a novel approach for achieving highly-automatic 2D/3D image registration, integrating the advantages of both entropy-based MI and spatial geometric feature correspondence methods. Inspired by scale space theory, we project the surfaces on a 3D model to 2D normal image spaces provided tha...

  20. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    This paper introduces an experiment on the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method implemented in the Vision Assistant program, which is part of LabVIEW, was chosen.

  1. MRI Sequence Images Compression Method Based on Improved 3D SPIHT

    蒋行国; 李丹; 陈真诚

    2013-01-01

    Objective: To propose an effective MRI sequence image compression method for solving the storage and transmission problem of large amounts of MRI sequence images. Methods: Aimed at alleviating the computational complexity of the 3D Set Partitioning in Hierarchical Trees (SPIHT) algorithm and its deficiency that D-type and L-type list entries are judged repeatedly, an improved 3D SPIHT method was presented, and two groups of MRI sequence images with different numbers of slices and slice thicknesses were taken as examples. At the same time, according to the correlation characteristics of MRI sequence images, a method in which images are divided into groups and then coded/decoded was put forward. Combined with the 3D wavelet transform and the improved 3D SPIHT method, MRI sequence image compression was achieved. Results: Compared with the 2D SPIHT and 3D SPIHT methods, the grouping combined with the improved 3D SPIHT method obtained better reconstructed images, and the Peak Signal-to-Noise Ratio (PSNR) was improved by about 1-8 dB. Conclusion: At the same bit rate, the PSNR and the quality of the recovered images can be improved by the grouping combined with the improved 3D SPIHT method, and the storage and transmission problem of large amounts of MRI sequence images can be solved.
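
    A hedged sketch of the grouping and 3D-wavelet step described above: slices are grouped, each group is transformed with a 3D DWT, and small coefficients are discarded as a simple stand-in for SPIHT bit-plane coding, which is not reproduced here. The group size, wavelet, and threshold are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
mri_stack = rng.random((32, 64, 64))                     # toy stack of 32 MRI slices

group_size, reconstructed_groups = 16, []
for g in range(0, mri_stack.shape[0], group_size):
    group = mri_stack[g:g + group_size]
    coeffs = pywt.wavedecn(group, "db2", level=2)        # 3D wavelet transform per group
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 0.90)              # keep only the largest 10% of coefficients
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), "db2")
    reconstructed_groups.append(rec[:group.shape[0], :64, :64])

reconstructed = np.concatenate(reconstructed_groups, axis=0)
print("RMSE:", np.sqrt(np.mean((reconstructed - mri_stack) ** 2)))
```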

  2. REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS

    Z. Lari

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have been evolving as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction approaches are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach is initiated by a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  3. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have been evolving as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction approaches are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach is initiated by a Semi-Global dense Matching procedure, which is carried out to generate a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  4. Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system

    Yi, Steven; Yang, Arthur; Yin, Gongjie; Wen, James

    2011-03-01

    In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system is designed to perform accurate 3D measurement and modeling of a wound and track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practices, physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal practice since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) Non-contact measurement; (b) Fast and easy to use; (c) Up to 50 micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) A handheld device; and (f) Reasonable cost (< $1,000).

  5. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automati...

  6. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for a stereoscopic image quality assessment (IQA) aim to estimate the qualities of 3D images in a manner that agrees with human judgments. The modern stereoscopic IQA algorithms often apply 2D IQA algorithms on stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. The 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms. PMID:26186775

  7. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed in this paper. This method divides both the projection data and the reconstructed image volume into subsets according to geometric symmetries in the circular cone-beam projection layout, and a fast reconstruction for large data volumes can be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulation data and real micro-CT data. Our results indicate that the GPU implementation can maintain the original precision and speed up the reconstruction process by 110–120 times for a circular cone-beam scan, as compared to a traditional CPU implementation.

  8. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine.

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world, with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were scanned using a 64-channel multidetector. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group) and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire which was administered after using the simulator. The group that was trained using an AR simulator were more proficient at IV injection technique using real dogs than the control group (P ≤ 0.01). The students agreed that they learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education. PMID:23103217

  9. Computer-aided interactive surgical simulation for craniofacial anomalies based on 3-D surface reconstruction CT images

    We developed a computer-aided interactive surgical simulation system for craniofacial anomalies based on three-dimensional (3-D) surface reconstruction CT imaging. This system has four functions: 1) 3-D surface reconstruction display with an accelerated projection method; 2) Surgical simulation to cut, move, rotate, and reverse bone-blocks over the reference 3-D image on the CRT screen; 3) 3-D display of the simulated image in arbitrary views; and 4) Prediction of postoperative skin surface features displayed as 3-D images in arbitrary views. Retrospective surgical simulation has been performed on three patients who underwent the fronto-orbital advancement procedures for brachycephaly and two who underwent the reconstructive procedure for scaphocephaly. The predicted configurations of the cranium and skin surface were well simulated when compared to the postoperative images in 3-D arbitrary views. In practical use, this software might be used for an on-line system connected to a large scale general-purpose computer. (author)

  10. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using … and removes the need to integrate custom-made electronics into the probe. A downside of row-column addressing 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays … was investigated. An analysis of how the ghost echoes can be attenuated was presented. Attenuating the ghost echoes was shown to be achieved by minimizing the first derivative of the apodization function. In the literature, a circular symmetric apodization function was proposed. A new apodization layout...

  11. 3D medical image segmentation based on a continuous modelling of the volume

    Several medical imaging techniques, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), provide 3D information of the human body by means of a stack of parallel cross-sectional images. However, a more sophisticated edge detection step has to be performed when the object under study is not well defined by its characteristic density, or when an analytical knowledge of the surface of the object is useful for later processing. A new method for medical image segmentation has been developed: it uses the stability and differentiability properties of a continuous modelling of the 3D data. The idea is to build a system of Ordinary Differential Equations whose stable manifold is the surface of the object we are looking for. This technique has been applied to classical edge detection operators: threshold following, Laplacian, and gradient maximum in its direction. It can be used in 2D as well as in 3D and has been extended to seek particular points of the surface, such as local extrema. The major advantages of this method are as follows: the segmentation and boundary-following steps are performed simultaneously, an analytical representation of the surface is obtained straightforwardly, and complex objects in which branching problems may occur can be described automatically. Simulations on noisy synthetic images motivated a quantization step to test the sensitivity of our method to noise with respect to each operator, and to study the influence of all the parameters. Finally, this method has been applied to numerous real clinical exams: skull and femur images provided by CT, and MR images of a cerebral tumor and of the ventricular system. These results show the reliability and the efficiency of this new method of segmentation.

  12. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond the diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which avoids the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, combined with 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information about the point-spread-function in deep tissue, and provides much better resolution enhancement as well as greatly improved noise immunity. This method is ready to be combined with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077

  13. A joint multi-view plus depth image coding scheme based on 3D-warping

    Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone;

    2011-01-01

    scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding...

  14. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here measured directly on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters such as porosity, pore and fiber size distributions, as well as the local fiber orientation distribution, are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.
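
    The 1D analytical models that the record finds insufficient for anisotropic media are typically simple mixing rules. As a hedged illustration (not the specific models tested in the paper), the sketch below computes the classical series/parallel Wiener bounds of the effective conductivity from the porosity measured on the tomographic image; the phase conductivities are illustrative values, not measured ones.

    ```python
    def wiener_bounds(porosity, k_air=0.026, k_fiber=0.35):
        """Classical 1D mixing models for a two-phase medium (air + fibres).

        porosity        : volume fraction of air (0..1), e.g. measured on the 3D image
        k_air, k_fiber  : phase conductivities in W/(m.K); values here are illustrative
        Returns (series, parallel) bounds on the effective conductivity.
        """
        k_parallel = porosity * k_air + (1.0 - porosity) * k_fiber        # heat flux along layers
        k_series = 1.0 / (porosity / k_air + (1.0 - porosity) / k_fiber)  # heat flux across layers
        return k_series, k_parallel

    print(wiener_bounds(0.9))
    ```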

  15. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Highlights: ► We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classified multivariate images. ► The algorithm takes the coordinate system of the image data into account to improve computational performance. ► The algorithm solves the instability problem in boundary detection of the original DBSCAN. ► The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) because it exploits the coordinate system of the image data. Secondly, the revised algorithm solves the instability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can be an ordinary 3D image or, in general, a classification result of another type of image data, e.g. a multivariate image.
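
    As a point of reference for the revised algorithm, a minimal sketch of density-based clustering of foreground voxels with the generic scikit-learn DBSCAN is shown below. Unlike the paper's revision, this generic implementation does not exploit the regular image grid to avoid the distance-table cost; the synthetic volume and parameters are purely illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Small synthetic binary 3D image: two dense blobs in a 30^3 volume.
    volume = np.zeros((30, 30, 30), dtype=bool)
    volume[5:12, 5:12, 5:12] = True
    volume[18:26, 18:26, 18:26] = True

    # Cluster the foreground voxels by their (z, y, x) coordinates.
    coords = np.argwhere(volume)
    labels = DBSCAN(eps=1.8, min_samples=6).fit_predict(coords)

    # Write the cluster labels back into a label volume (-1 = background/noise).
    segmented = np.full(volume.shape, -1, dtype=int)
    segmented[tuple(coords.T)] = labels
    print("clusters found:", labels.max() + 1)
    ```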

  16. A density-based segmentation for 3D images, an application for X-ray micro-tomography

    Tran, Thanh N., E-mail: thanh.tran@merck.com [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Nguyen, Thanh T.; Willemsz, Tofan A. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Pharmaceutical Sciences and Clinical Supplies, Merck MSD, PO Box 20, 5340 BH Oss (Netherlands); Kessel, Gijs van [Center for Mathematical Sciences Merck, MSD Molenstraat 110, 5342 CC Oss, PO Box 20, 5340 BH Oss (Netherlands); Frijlink, Henderik W. [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Voort Maarschalk, Kees van der [Department of Pharmaceutical Technology and Biopharmacy, University of Groningen, Groningen (Netherlands); Competence Center Process Technology, Purac Biochem, Gorinchem (Netherlands)

    2012-05-06

    Highlights: ► We revised the DBSCAN algorithm for segmentation and clustering of large 3D image datasets and classified multivariate images. ► The algorithm takes the coordinate system of the image data into account to improve computational performance. ► The algorithm solves the instability problem in boundary detection of the original DBSCAN. ► The segmentation results were successfully validated with a synthetic 3D image and a 3D XMT image of a pharmaceutical powder. - Abstract: Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) because it exploits the coordinate system of the image data. Secondly, the revised algorithm solves the instability issue of boundary detection in the original DBSCAN. For broader applications, the image dataset can be an ordinary 3D image or, in general, a classification result of another type of image data, e.g. a multivariate image.

  17. MMW and THz images denoising based on adaptive CBM3D

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation have received a lot of interest due to advances in emission and detection technologies, which have enabled the widespread application of millimeter wave and terahertz imaging technology. This paper focuses on the stripe noise, blocking artifacts and other interference present in this sort of image. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is then used to denoise. Experimental results demonstrate that the method improves the visual quality and removes interference at the same time, making image analysis and target detection easier.
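
    The adaptive CBM3D pipeline itself is not specified in the record, but the basic idea of "add Gaussian noise of a suitable level, then apply a block-matching/nonlocal denoiser" can be sketched as follows. OpenCV's color non-local means is used here as a stand-in for CBM3D, and the noise level and filter strength are illustrative, not the paper's values.

    ```python
    import cv2
    import numpy as np

    def resonant_denoise(image_u8, noise_sigma=10.0, h=12):
        """Hedged stand-in for the paper's pipeline: add Gaussian noise of a
        suitable level to 'resonate' with residual stripe/block artefacts, then
        apply a nonlocal denoiser. CBM3D itself is not used here; OpenCV's color
        non-local means serves as a substitute. image_u8 must be a 3-channel
        8-bit image."""
        noisy = image_u8.astype(np.float32) + np.random.normal(0.0, noise_sigma, image_u8.shape)
        noisy = np.clip(noisy, 0, 255).astype(np.uint8)
        return cv2.fastNlMeansDenoisingColored(noisy, None, h, h, 7, 21)
    ```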

  18. Crowdsourcing Based 3d Modeling

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  19. Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    Bagci, Ulas; Chen, Xinjian

    2010-01-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate th...

  20. Study of CT-based positron range correction in high resolution 3D PET imaging

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in both the forward and backward projection. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
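
    A crude sketch of the material-dependent blurring step is given below: each material region obtained from a segmented CT is blurred with its own kernel before projection. This uses isotropic Gaussian kernels as a stand-in for the Monte Carlo range profiles computed with PeneloPET, and the function is an illustrative assumption rather than the FIRST implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def positron_range_blur(activity, material_map, sigma_by_material):
        """Material-dependent blurring of the activity image (illustrative only).

        activity          : 3D activity image
        material_map      : 3D integer label map, e.g. from a segmented CT
        sigma_by_material : dict {material_id: blur sigma in voxels}
        """
        blurred = np.zeros_like(activity, dtype=float)
        for mat_id, sigma in sigma_by_material.items():
            mask = (material_map == mat_id).astype(float)
            # Blur the activity restricted to this material and renormalise by the
            # blurred mask to limit edge artefacts at material boundaries.
            num = gaussian_filter(activity * mask, sigma)
            den = gaussian_filter(mask, sigma)
            blurred += np.where(den > 1e-6, num / np.maximum(den, 1e-6) * mask, 0.0)
        return blurred
    ```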

  1. Study of CT-based positron range correction in high resolution 3D PET imaging

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in both the forward and backward projection. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.

  2. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    Objective: To investigate a registration approach for 2-dimensional (2D) images based on characteristic localization, in order to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cubic oriented scheme of “9-point and 3-plane” co-registration design was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (guided by the radiotracer 18F-FDG, etc.), through 3D reconstruction and virtual dissection, human internal feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, “picking points to form planes” and “picking planes for segmentation” were executed. Eventually, image fusion was implemented on the real-time workstation Mimics based on the auto-fusion techniques of “information exchange” and “signal overlay”. Result: The 2D and 3D images fused across the modalities [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data of patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference between them. Conclusion: Currently, given that there is no hybrid detector integrating the triple modality [PET + CT + MR] internationally, this sort of multiple modality fusion is doubtlessly an essential complement to the existing function of single modality imaging.
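
    A common building block for this kind of point-based coregistration is the least-squares rigid alignment of corresponding feature points (the Kabsch/Procrustes solution). The sketch below is a generic version of that step under the assumption of already-matched point pairs; it is not the authors' exact "9-point and 3-plane" procedure.

    ```python
    import numpy as np

    def rigid_align(src_pts, dst_pts):
        """Least-squares rigid transform (rotation R, translation t) mapping
        src_pts onto dst_pts, both (N, 3) arrays of corresponding feature points.
        Classic Kabsch/Procrustes solution via SVD."""
        src_c = src_pts.mean(axis=0)
        dst_c = dst_pts.mean(axis=0)
        H = (src_pts - src_c).T @ (dst_pts - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t
    ```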

  3. Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests

    Golparvar-Fard, Mani; Balali, Vahid; de la Garza, Jesus M.

    2013-01-01

    This dataset was collected as part of research work on segmentation and recognition of highway assets in images and video. The research is described in detail in the Journal of Computing in Civil Engineering - ASCE paper "Segmentation and Recognition of Highway Assets using Image-based 3D Point Clouds and Semantic Texton Forests". The dataset includes 12 highway asset categories and 3 different datasets divided into three groups: (a) Ground Truth images with #_#_s_GT.jpg filename...

  4. Image encryption schemes for JPEG and GIF formats based on 3D baker with compound chaotic sequence generator

    Ji, Shiyu; Tong, Xiaojun; Zhang, Miao

    2012-01-01

    This paper proposes several methods to transplant the compound chaotic image encryption scheme with permutation based on a 3D baker map into image formats such as the Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). The new method avoids the lossy Discrete Cosine Transform and quantization and can encrypt and decrypt JPEG images losslessly. Our proposed method for GIF successfully preserves the animation property. The security test results indicate the proposed methods have high s...
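
    The paper's 3D baker map with a compound chaotic sequence generator is not reproduced here, but the general idea of a chaos-driven, exactly invertible (hence lossless) permutation of pixel positions can be sketched with a simpler logistic-map permutation; x0 and mu play the role of the secret key and are illustrative choices.

    ```python
    import numpy as np

    def chaotic_permutation(n, x0=0.3579, mu=3.99):
        """Derive a permutation of n indices from a logistic-map orbit.
        Simplified stand-in for the paper's 3D baker / compound chaotic generator."""
        x = x0
        seq = np.empty(n)
        for i in range(n):
            x = mu * x * (1.0 - x)
            seq[i] = x
        return np.argsort(seq)

    def encrypt(img):
        perm = chaotic_permutation(img.size)
        return img.reshape(-1)[perm].reshape(img.shape), perm

    def decrypt(scrambled, perm):
        flat = np.empty_like(scrambled.reshape(-1))
        flat[perm] = scrambled.reshape(-1)   # undo the permutation exactly
        return flat.reshape(scrambled.shape)
    ```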

  5. Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging

    Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [18F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. In this paper they describe this algorithm and present scatter correction results from human and chest phantom studies

  6. Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    Niclass, Cristiano; Rochas, Alexis; Besse, Pierre-André; Charbon, Edoardo

    2005-01-01

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment are needed. Millimetric depth accuracies can b...

  7. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting pairs of detectors. As there is a multitude of detectors built into a typical PET tomograph, there are many possible detector pairs that contribute to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
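
    The iterative reconstruction referred to above is, in its simplest form, the MLEM update. A minimal sketch on a toy system matrix is given below; a real 3D reconstruction of the kind described in the record handles many millions of LORs and a far more elaborate system model.

    ```python
    import numpy as np

    def mlem(system_matrix, sinogram, n_iters=50):
        """Basic MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).

        system_matrix : (n_lors, n_voxels) array A; element (i, j) is the probability
                        that an emission in voxel j is detected on LOR i
        sinogram      : (n_lors,) measured counts y
        """
        A = system_matrix
        x = np.ones(A.shape[1])            # flat initial image
        sensitivity = A.sum(axis=0)        # A^T 1
        for _ in range(n_iters):
            forward = A @ x
            ratio = np.divide(sinogram, forward,
                              out=np.zeros_like(forward), where=forward > 0)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x
    ```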

  8. A LabVIEW based user-friendly nano-CT image alignment and 3D reconstruction platform

    Wang, Shenghao; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray nanometer computed tomography (nano-CT) offers applications and opportunities in many scientific research and industrial areas. Here we present a user-friendly and fast LabVIEW-based package that, after acquisition of the raw projection images, runs a procedure to obtain the inner structure of the sample under analysis. First, a reliable image alignment procedure corrects possible misalignments within the image series due to mechanical errors, thermal expansion and other external contributions; then a novel fast parallel-beam 3D reconstruction performs the tomographic reconstruction. The remarkably improved reconstruction after the image calibration confirms the fundamental role of the image alignment procedure. It minimizes blurring and additional streaking artifacts present in a reconstructed slice that cause loss of information and spurious structures in the observed material. By significantly reducing the data processing, the nano-CT image alignment and 3D reconstruction LabVIEW package makes faster and easier th...

  9. Development and Implementation of a Web-Enabled 3D Consultation Tool for Breast Augmentation Surgery Based on 3D-Image Reconstruction of 2D Pictures

    de Heras Ciechomski, Pablo; Constantinescu, Mihai; Garcia, Jaime; Olariu, Radu; Dindoyal, Irving; Le Huu, Serge; Reyes, Mauricio

    2012-01-01

    Background Producing a rich, personalized Web-based consultation tool for plastic surgeons and patients is challenging. Objective (1) To develop a computer tool that allows individual reconstruction and simulation of 3-dimensional (3D) soft tissue from ordinary digital photos of breasts, (2) to implement a Web-based, worldwide-accessible preoperative surgical planning platform for plastic surgeons, and (3) to validate this tool through a quality control analysis by comparing 3D laser scans of...

  10. Reconstruction of lava fields based on 3D and conventional images. Arenal volcano, Costa Rica.

    Horvath, S.; Duarte, E.; Fernandez, E.

    2007-05-01

    , chemical composition, type of lava, velocity, etc. With all this information and photographs, real, visual and topographic images of the position and character of the 1990s and 2000s lava flows were obtained. An illustrative poster will be presented along with this abstract to show the construction process of such a tool. Moreover, 3D animations will be presented in the mentioned poster.

  11. Web-based interactive visualization of 3D video mosaics using X3D standard

    CHON Jaechoon; LEE Yang-Won; SHIBASAKI Ryosuke

    2006-01-01

    We present a method of 3D image mosaicing for real 3D representation of roadside buildings, and implement a Web-based interactive visualization environment for the 3D video mosaics created by 3D image mosaicing. The 3D image mosaicing technique developed in our previous work is a very powerful method for creating textured 3D-GIS data without excessive data processing like the laser or stereo system. For the Web-based open access to the 3D video mosaics, we build an interactive visualization environment using X3D, the emerging standard of Web 3D. We conduct the data preprocessing for 3D video mosaics and the X3D modeling for textured 3D data. The data preprocessing includes the conversion of each frame of 3D video mosaics into concatenated image files that can be hyperlinked on the Web. The X3D modeling handles the representation of concatenated images using necessary X3D nodes. By employing X3D as the data format for 3D image mosaics, the real 3D representation of roadside buildings is extended to the Web and mobile service systems.

  12. Ball-scale based hierarchical multi-object recognition in 3D medical images

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  13. Acquisition and applications of 3D images

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images using a computer, we can use the data for creating laser-fashioned objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  14. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space, that is, not single image pixels get classified, but voxels which carry geometric, textural and color information collected from the airborne oblique images and derived products like point clouds from dense image matching. One method is supervised, i.e. relies on training data provided by an operator. We use Random Trees for the actual training and prediction tasks. The second method is unsupervised, thus does not ask for any user interaction. We formulate this classification task as a Markov-Random-Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is also reflected in the overall classification accuracy; it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined more unambiguously like in the Enschede area, results are much better (85% vs. 78%). In conclusion the results are acceptable, also taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.
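
    For the supervised branch, a minimal sketch of per-voxel classification with a random forest (the scikit-learn counterpart of the Random Trees mentioned above) is shown below. The feature set and labels are illustrative placeholders, not the geometric, textural and color features actually derived from the oblique imagery and point clouds.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative per-voxel feature table: e.g. [height_above_ground, planarity,
    # mean_R, mean_G, mean_B, texture_variance]; labels come from operator training data.
    rng = np.random.default_rng(1)
    X_train = rng.random((500, 6))
    y_train = rng.integers(0, 4, 500)       # e.g. roof / facade / ground / vegetation

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    X_voxels = rng.random((10000, 6))       # features of the voxels to classify
    voxel_labels = clf.predict(X_voxels)
    ```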

  15. ICER-3D Hyperspectral Image Compression Software

    Xie, Hua; Kiely, Aaron; Klimesh, matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  16. Precise Depth Image Based Real-Time 3D Difference Detection

    Kahn, Svenja

    2014-01-01

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the ...

  17. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
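
    The fuzzy membership functions of the method are not reproduced here, but the underlying idea — shrinking wavelet coefficients of the depth image more aggressively where the luminance indicates a weak optical signal — can be sketched with a simple luminance-scaled soft threshold using PyWavelets. The scaling rule and constants below are assumptions for illustration only.

    ```python
    import numpy as np
    import pywt

    def denoise_depth(depth, luminance, wavelet="db4", level=3, k=3.0):
        """Hedged sketch: wavelet soft-thresholding of a ToF depth image, with the
        threshold scaled by a noise estimate derived from luminance (brighter pixels
        imply a stronger optical signal, hence lower assumed depth noise). The paper's
        fuzzy membership functions are replaced by this crude global scaling."""
        lum = luminance.astype(float)
        noise_level = 1.0 / max(lum.mean() / lum.max(), 1e-3)   # per-image noise proxy

        coeffs = pywt.wavedec2(depth.astype(float), wavelet, level=level)
        thr = k * noise_level
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)
    ```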

  18. Real-time 3D millimeter wave imaging based FMCW using GGD focal plane array as detectors

    Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-03-01

    Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was previously studied using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors. Each point on the object corresponds to a point in the image and includes the distance information, which enables 3D MMW imaging. The radar system requires that the millimeter wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the detected signal resulting from heterodyne detection gives the object's depth information according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
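
    In an FMCW chirp system, the heterodyne difference ("beat") frequency measured at each pixel maps linearly to target range, which is what makes per-pixel depth recovery possible. A small worked example of this standard relation is given below; the sweep parameters are illustrative, not those of the reported system.

    ```python
    C = 3.0e8  # speed of light, m/s

    def range_from_beat(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
        """Standard FMCW relation R = c * f_beat * T / (2 * B): the difference
        frequency between transmitted and received chirps is proportional to range."""
        return C * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

    # Illustrative numbers: a 10 GHz sweep over 1 ms with a 667 kHz beat -> about 10 m.
    print(range_from_beat(667e3, 10e9, 1e-3))
    ```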

  19. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and reconstruction of objects' surfaces. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions each containing a single 3D object. Each region is inscribed in a convex, smooth, closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the direction of that force. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
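
    As a minimal illustration of the forward-difference scheme mentioned above, the sketch below performs one explicit time step of the plain (linear) 3D heat equation on a regular grid; the geometric (curvature-driven) flow and the penalty term of the actual method are not reproduced here.

    ```python
    import numpy as np

    def heat_step(u, dt=0.1):
        """One explicit forward-difference step of the 3D heat equation
        du/dt = laplacian(u) on a regular grid (unit spacing, periodic boundaries
        via np.roll). Stability of this 7-point stencil requires dt <= 1/6."""
        lap = (-6.0 * u
               + np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)
               + np.roll(u, 1, axis=2) + np.roll(u, -1, axis=2))
        return u + dt * lap
    ```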

  20. Cloud-Based Geospatial 3D Image Spaces—A Powerful Urban Model for the Smart City

    Stephan Nebiker

    2015-10-01

    Full Text Available In this paper, we introduce the concept and an implementation of geospatial 3D image spaces as new type of native urban models. 3D image spaces are based on collections of georeferenced RGB-D imagery. This imagery is typically acquired using multi-view stereo mobile mapping systems capturing dense sequences of street level imagery. Ideally, image depth information is derived using dense image matching. This delivers a very dense depth representation and ensures the spatial and temporal coherence of radiometric and depth data. This results in a high-definition WYSIWYG (“what you see is what you get” urban model, which is intuitive to interpret and easy to interact with, and which provides powerful augmentation and 3D measuring capabilities. Furthermore, we present a scalable cloud-based framework for generating 3D image spaces of entire cities or states and a client architecture for their web-based exploitation. The model and the framework strongly support the smart city notion of efficiently connecting the urban environment and its processes with experts and citizens alike. In the paper we particularly investigate quality aspects of the urban model, namely the obtainable georeferencing accuracy and the quality of the depth map extraction. We show that our image-based georeferencing approach is capable of improving the original direct georeferencing accuracy by an order of magnitude and that the presented new multi-image matching approach is capable of providing high accuracies along with a significantly improved completeness of the depth maps.

  1. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    Ando, Masami [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan)], E-mail: msm-ando@rs.noda.tus.ac.jp; Bando, Hiroko [Tsukuba University (Japan); Tokiko, Endo; Ichihara, Shu [Nagoya Medical Center (Japan); Hashimoto, Eiko [GUAS (Japan); Hyodo, Kazuyuki [KEK (Japan); Kunisada, Toshiyuki [Okayama University (Japan); Li Gang [BSRF (China); Maksimenko, Anton [Tokyo University of Science, Research Institute for Science and Technology, Noda, Chiba 278-8510 (Japan); KEK (Japan); Mori, Kensaku [Nagoya University (Japan); Shimao, Daisuke [IPU (Japan); Sugiyama, Hiroshi [KEK (Japan); Yuasa, Tetsuya [Yamagata University (Japan); Ueno, Ei [Tsukuba University (Japan)

    2008-12-15

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis based on refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcification of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed by the use of an algorithm newly developed by our group. This shows a distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of the continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of the clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and the skin radiation dose.

  2. Refraction-based 2D, 2.5D and 3D medical imaging: Stepping forward to a clinical trial

    An attempt at refraction-based 2D, 2.5D and 3D X-ray imaging of articular cartilage and breast carcinoma is reported. We are developing very high contrast X-ray 2D imaging with XDFI (X-ray dark-field imaging), X-ray CT whose data are acquired by DEI (diffraction-enhanced imaging), and tomosynthesis based on refraction contrast. 2D and 2.5D images were taken with nuclear plates or with X-ray films. Microcalcification of breast cancer and articular cartilage are clearly visible. 3D data were taken with an X-ray sensitive CCD camera. The 3D image was successfully reconstructed by the use of an algorithm newly developed by our group. This shows a distinctive internal structure of a ductus lactiferi (milk duct) that contains the inner wall, intraductal carcinoma and multifocal calcification in the necrotic core of the continuous DCIS (ductal carcinoma in situ). Furthermore, consideration of the clinical applications of these contrasts led us to try tomosynthesis. This attempt was satisfactory from the viewpoint of articular cartilage image quality and the skin radiation dose.

  3. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but they have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance regularized level set method with edge- and region-based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold standard on 1205 cross-sectional slices of 5 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
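
    Of the three validation metrics, the Dice similarity coefficient is the simplest to state; a short generic implementation for binary masks is given below (the Hausdorff-based distances are omitted).

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```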

  4. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  5. Dixon imaging-based partial volume correction improves quantification of choline detected by breast 3D-MRSI

    Minarikova, Lenka; Gruber, Stephan; Bogner, Wolfgang; Trattnig, Siegfried; Chmelik, Marek [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, MR Center of Excellence, Vienna (Austria); Pinker-Domenig, Katja; Baltzer, Pascal A.T.; Helbich, Thomas H. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria)

    2014-09-14

    Our aim was to develop a partial volume (PV) correction method of choline (Cho) signals detected by breast 3D-magnetic resonance spectroscopic imaging (3D-MRSI), using information from water/fat-Dixon MRI. Following institutional review board approval, five breast cancer patients were measured at 3 T. 3D-MRSI (1 cm{sup 3} resolution, duration ∝11 min) and Dixon MRI (1 mm{sup 3}, ∝2 min) were measured in vivo and in phantoms. Glandular/lesion tissue was segmented from water/fat-Dixon MRI and transformed to match the resolution of 3D-MRSI. The resulting PV values were used to correct Cho signals. Our method was validated on a two-compartment phantom (choline/water and oil). PV values were correlated with the spectroscopic water signal. Cho signal variability, caused by partial-water/fat content, was tested in 3D-MRSI voxels located in/near malignant lesions. Phantom measurements showed good correlation (r = 0.99) with quantified 3D-MRSI water signals, and better homogeneity after correction. The dependence of the quantified Cho signal on the water/fat voxel composition was significantly (p < 0.05) reduced using Dixon MRI-based PV correction, compared to the original uncorrected data (1.60-fold to 3.12-fold) in patients. The proposed method allows quantification of the Cho signal in glandular/lesion tissue independent of water/fat composition in breast 3D-MRSI. This can improve the reproducibility of breast 3D-MRSI, particularly important for therapy monitoring. (orig.)

  6. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
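
    A minimal sketch of the motion-model construction step is given below: the displacement vector fields (DVFs) from deformable registration of the 4DCBCT phases are flattened and decomposed with PCA, and a new DVF is synthesized as the mean field plus a weighted sum of the leading modes. The iterative fitting of the PCA coefficients to measured cone-beam projections is not shown, and the array shapes are assumptions for illustration.

    ```python
    import numpy as np

    def build_motion_model(dvfs, n_components=3):
        """Build a PCA motion model from displacement vector fields.

        dvfs : array (n_phases, nz, ny, nx, 3), one DVF per 4DCBCT phase, each
               mapping the reference phase to that phase.
        Returns the mean DVF and the first principal motion components.
        """
        n_phases = dvfs.shape[0]
        flat = dvfs.reshape(n_phases, -1)
        mean = flat.mean(axis=0)
        centered = flat - mean
        # SVD of the small phase-by-voxel matrix; rows of Vt are the eigenmodes.
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        components = Vt[:n_components]
        return (mean.reshape(dvfs.shape[1:]),
                components.reshape((n_components,) + dvfs.shape[1:]))

    def synthesize_dvf(mean, components, coeffs):
        """A new DVF is the mean field plus a coefficient-weighted sum of modes."""
        return mean + np.tensordot(np.asarray(coeffs), components, axes=1)
    ```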

  7. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Xiaozhen Ren

    2014-01-01

    Full Text Available Three-dimensional (3D imaging technology based on antenna array is one of the most important 3D synthetic aperture radar (SAR high resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR based on the imaging geometry and the characteristic of echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross track processing to obtain accurate focusing. In this special squint model, point targets with different cross track positions have different squint angles at the same range resolution cell, which is different from the conventional squint SAR. However, after theory analysis and formulation deduction, the imaging procedure can be processed with the uniform reference function, and the phase compensation factors and algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transform and multiplications and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  8. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  9. Automated segmentation method for the 3D ultrasound carotid image based on geometrically deformable model with automatic merge function

    Li, Xiang; Wang, Zigang; Lu, Hongbing; Liang, Zhengrong

    2002-05-01

    Stenosis of the carotid artery is the most common cause of stroke. Accurate measurement of the volume of the carotid and visualization of its shape are helpful in improving diagnosis and minimizing the variability in assessment of carotid disease. Due to the complex anatomic structure of the carotid, it is normally mandatory to define the initial contours in every slice, which is very difficult and usually requires tedious manual operations. The purpose of this paper is to propose an automatic segmentation method which provides the contour of the carotid from the 3-D ultrasound image and requires minimum user interaction. In this paper, we developed a Geometrically Deformable Model (GDM) with an automatic merge function. In our algorithm, only two initial contours in the topmost slice and four parameters are needed in advance. A simulated 3-D ultrasound image was used to test our algorithm. The 3-D display of the carotid obtained by our algorithm showed an almost identical shape to the true 3-D carotid image. In addition, experimental results also demonstrated that the error of the volume measurement of the carotid based on the three different initial contours is less than 1%, and the method is very fast.

  10. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtracted angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures.Twenty-three patients with suspected intracranial arterial lesions were enrolled. The contrast medium-enhanced 3D DSA of target vessels were acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients, and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures.The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced.Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lowering procedure risks and increasing treatment safety. PMID:27512846

  11. Metrological characterization of 3D imaging devices

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways owing to the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason, over the last ten years several national and international organizations have been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper reviews the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
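
    In practice, the reference-object checks such protocols describe reduce to fitting the certified shape to the measured point cloud and analysing the residuals; for two-sphere artefacts, the same fit is run per sphere and the centre-to-centre distance is compared with the certified value. The sketch below illustrates this for a single certified sphere; it is a minimal NumPy illustration of the general idea, and the fitting routine, simulated point cloud and all names are ours, not taken from the paper.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit; returns (center, radius)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def form_error(points, center, radius):
    """Signed deviation of each measured point from the fitted sphere surface."""
    residuals = np.linalg.norm(points - center, axis=1) - radius
    return residuals.mean(), residuals.std(), np.abs(residuals).max()

# Hypothetical example: noisy points measured on a 25 mm radius sphere.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, 0.0, 5.0]) + 25.0 * dirs + rng.normal(scale=0.05, size=(500, 3))
center, radius = fit_sphere(pts)
print(center, radius, form_error(pts, center, radius))
```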

  12. Design and Characterization of a Current Assisted Photo Mixing Demodulator for Tof Based 3d Cmos Image Sensor

    Hossain, Quazi Delwar

    2010-01-01

    Due to the increasing demand for 3D vision systems, many efforts have been recently concentrated to achieve complete 3D information analogous to human eyes. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian security, biomedical appliances, robotics and industrial control etc. For this, several studies have reported to produce 3D images including stereovision, object distance...

  13. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-01

    Diffuse optical tomography (DOT) is a relatively low-cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making the reconstruction of high-quality images a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.
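
    The dimensionality-reduction idea — locate a coarse image support first, then reconstruct only inside it — can be conveyed with a toy linear forward model y = Ax. The sketch below is not the authors' DRO-DOT code: the coarse block grouping, Tikhonov solves, thresholding rule and parameter values are illustrative assumptions only, and depth compensation and the regularization-parameter selection step are omitted.

```python
import numpy as np

def support_then_refine(A, y, coarse_factor=4, keep_ratio=0.1, lam=1e-2):
    """Toy two-step reconstruction: (1) coarse regularized solve to estimate an
    image support, (2) least-squares refinement restricted to that support.
    A : (m, n) sensing matrix, y : (m,) measurement vector."""
    m, n = A.shape
    # Step 1: group fine voxels into coarse blocks (simple column averaging).
    n_coarse = n // coarse_factor
    P = np.zeros((n, n_coarse))
    for j in range(n_coarse):
        P[j * coarse_factor:(j + 1) * coarse_factor, j] = 1.0 / coarse_factor
    A_c = A @ P
    x_c = np.linalg.solve(A_c.T @ A_c + lam * np.eye(n_coarse), A_c.T @ y)
    # Keep the strongest coarse blocks as the estimated support.
    keep = np.argsort(-np.abs(x_c))[:max(1, int(keep_ratio * n_coarse))]
    support = np.flatnonzero(P[:, keep].sum(axis=1) > 0)
    # Step 2: refine only inside the support, then embed in the full grid.
    A_s = A[:, support]
    x_s = np.linalg.solve(A_s.T @ A_s + lam * np.eye(len(support)), A_s.T @ y)
    x = np.zeros(n)
    x[support] = x_s
    return x
```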

  14. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Stella Macheridis

    2015-01-01

    During the last twenty years archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods from this digital change have also strongly impacted archaeology. Image-based 3D modeling is becoming more common when documenting archaeological features but is still not implemented as standard in field excavation projects. When it comes to integrating zooarchaeological perspectives in the interpretational process in the field...

  15. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  16. A density-based segmentation for 3D images, an application for X-ray micro-tomography.

    Tran, Thanh N; Nguyen, Thanh T; Willemsz, Tofan A; van Kessel, Gijs; Frijlink, Henderik W; van der Voort Maarschalk, Kees

    2012-05-01

    Density-based spatial clustering of applications with noise (DBSCAN) is an unsupervised classification algorithm which has been widely used in many areas owing to its simplicity and its ability to deal with hidden clusters of different sizes and shapes and with noise. However, the computational cost of the distance table and the instability in detecting the boundaries of adjacent clusters limit the application of the original algorithm to large datasets such as images. In this paper, the DBSCAN algorithm was revised and improved for image clustering and segmentation. The proposed clustering algorithm presents two major advantages over the original one. Firstly, the revised DBSCAN algorithm is applicable to large 3D image datasets (often with millions of pixels) because it exploits the coordinate system of the image data. Secondly, the revised algorithm solves the instability of boundary detection in the original DBSCAN. For broader applications, the image dataset can consist of ordinary 3D images or, more generally, of the classification result of another type of image data, e.g. a multivariate image. PMID:22502607
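
    The key observation behind the revision is that on a regular voxel grid the eps-neighbourhood of a voxel is a fixed set of grid offsets, so no pairwise distance table is needed. The DBSCAN-like sketch below uses 26-connectivity on a 3D binary mask to convey that idea; it is a simplified illustration, not the revised algorithm of the paper (which additionally addresses the boundary-stability issue), and the min_pts default is arbitrary.

```python
import numpy as np
from collections import deque

def grid_dbscan(mask, min_pts=9):
    """DBSCAN-like clustering of foreground voxels in a 3D binary mask.
    On the image grid, the eps-neighbourhood is simply the 26-connected
    neighbourhood, so no pairwise distance table is ever built."""
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    labels = np.zeros(mask.shape, dtype=np.int32)   # 0 = unvisited / noise

    def neighbours(p):
        z, y, x = p
        for dz, dy, dx in offsets:
            q = (z + dz, y + dy, x + dx)
            if all(0 <= q[i] < mask.shape[i] for i in range(3)) and mask[q]:
                yield q

    cluster = 0
    for p in zip(*np.nonzero(mask)):
        if labels[p]:
            continue
        if sum(1 for _ in neighbours(p)) < min_pts:   # not a core voxel
            continue
        cluster += 1
        labels[p] = cluster
        queue = deque([p])
        while queue:
            q = queue.popleft()
            nbrs = list(neighbours(q))
            if len(nbrs) >= min_pts:                  # core voxel: expand
                for r in nbrs:
                    if labels[r] == 0:
                        labels[r] = cluster
                        queue.append(r)
    return labels
```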

  17. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    Mata Pavia, J.; Charbon, E.; Wolf, M.

    2011-01-01

    An imager for optical tomography was designed based on a detector with 128x128 single-photon pixels that included a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time resolved measurements, a new contact-less setup has been conceived in w

  18. A parallelized surface extraction algorithm for large binary image data sets based on an adaptive 3D delaunay subdivision strategy.

    Ma, Yingliang; Saetzler, Kurt

    2008-01-01

    In this paper we describe a novel 3D subdivision strategy to extract the surface of binary image data. This iterative approach generates a series of surface meshes that capture different levels of detail of the underlying structure. At the highest level of detail, the resulting surface mesh generated by our approach uses only about 10% of the triangles used by the marching cubes algorithm (MC), even in settings where almost no image noise is present. Our approach also eliminates the so-called "staircase effect" which voxel-based algorithms like MC are likely to show, particularly if non-uniformly sampled images are processed. Finally, we show how the presented algorithm can be parallelized by subdividing 3D image space into rectilinear blocks of subimages. As the algorithm scales very well with an increasing number of processors in a multi-threaded setting, this approach is suited to processing large image data sets of several gigabytes. Although the presented work is still computationally more expensive than simple voxel-based algorithms, it produces fewer surface triangles while capturing the same level of detail, is more robust to image noise and eliminates the above-mentioned "staircase" effect in anisotropic settings. These properties make it particularly useful for biomedical applications, where these conditions are often encountered. PMID:17993710

  19. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Wang, Sheng-Hao; Zhang, Kai; Wang, Zhi-Li; Gao, Kun; Wu, Zhao; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly, and fast software package based on LabVIEW that allows us to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process addressing misalignment problems among image series, caused by mechanical manufacturing errors, thermal expansion, and other external factors, has been included, together with a novel fast parallel-beam 3D reconstruction procedure developed ad hoc to perform the tomographic reconstruction. We obtained remarkably improved reconstruction results at the Beijing Synchrotron Radiation Facility after the image calibration, confirming the fundamental role of this image alignment procedure, which minimizes the unwanted blur and additional streaking artifacts that are otherwise present in reconstructed slices. Moreover, this nano-CT image alignment and its associated 3D reconstruction procedure are fully based on LabVIEW routines, significantly reducing the data post-processing cycle and thus making the activity of the users faster and easier during experimental runs.

  20. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine

    The aim of this work was to use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans, based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
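
    For reference, the gamma evaluation mentioned here scores each reference-dose voxel against nearby evaluated-dose voxels using combined dose-difference and distance-to-agreement criteria. The brute-force NumPy sketch below spells out that metric for small grids; it is only an illustration of the computation, not the paper's GPU engine, and the 3 mm / 3% criteria and 10% low-dose cutoff are assumed defaults.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing, dta=3.0, dd=0.03, cutoff=0.1):
    """Brute-force global 3D gamma analysis (CPU/NumPy illustration only).
    dta in mm, dd as a fraction of the maximum reference dose,
    spacing = (sz, sy, sx) voxel size in mm."""
    dmax = dose_ref.max()
    dd_abs = dd * dmax
    # Only search within the DTA radius around each voxel.
    radius = np.ceil(dta / np.asarray(spacing)).astype(int)
    zz, yy, xx = np.meshgrid(*[np.arange(-r, r + 1) for r in radius], indexing="ij")
    dist2 = ((zz * spacing[0]) ** 2 + (yy * spacing[1]) ** 2
             + (xx * spacing[2]) ** 2) / dta ** 2
    offsets = np.stack([zz.ravel(), yy.ravel(), xx.ravel()], axis=1)
    dist2 = dist2.ravel()

    passed, total = 0, 0
    shape = dose_ref.shape
    for idx in np.ndindex(shape):
        d_r = dose_ref[idx]
        if d_r < cutoff * dmax:          # ignore the low-dose region
            continue
        total += 1
        gamma2 = np.inf
        for off, r2 in zip(offsets, dist2):
            j = tuple(np.add(idx, off))
            if all(0 <= j[k] < shape[k] for k in range(3)):
                dd2 = (dose_eval[j] - d_r) ** 2 / dd_abs ** 2
                gamma2 = min(gamma2, r2 + dd2)
        passed += gamma2 <= 1.0
    return passed / max(total, 1)
```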

  1. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization, based on a piecewise-constant assumption, can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into computed tomography and transformed into three independent variables of the optimization problem by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to, and effective for, CBCT imaging. Further theoretical and technical optimization of the algorithm, in terms of both computational efficiency and achievable resolution, should be investigated in application-oriented research. PMID:26756406

  2. A new navigation approach of terrain contour matching based on 3-D terrain reconstruction from onboard image sequence

    2010-01-01

    This article presents a passive navigation method of terrain contour matching by reconstructing the 3-D terrain from an image sequence acquired by the onboard camera. To achieve automation and simultaneity of the image-sequence processing for navigation, a correspondence registration method based on control-point tracking is proposed, which tracks the sparse control points through the whole image sequence and uses them as correspondences in the relative-geometry solution. Besides, a key-frame selection method based on the image overlapping ratios and intersecting angles is explored, and the resulting requirements for the camera system configuration are provided. The proposed method also includes an optimal local homography estimation algorithm based on the control points, which helps to correctly predict the points to be matched and their corresponding speeds. Consequently, the real-time 3-D terrain of the trajectory thus reconstructed is matched with the reference terrain map, and the result provides navigation information. A digital simulation experiment and a real-image-based experiment have verified the proposed method.

  3. 2D-3D registration for prostate radiation therapy based on a statistical model of transmission images

    Purpose: In external beam radiation therapy of pelvic sites, patient setup errors can be quantified by registering 2D projection radiographs acquired during treatment to a 3D planning computed tomograph (CT). We present a 2D-3D registration framework based on a statistical model of the intensity values in the two imaging modalities. Methods: The model assumes that intensity values in projection radiographs are independently but not identically distributed due to the nonstationary nature of photon counting noise. Two probability distributions are considered for the intensity values: Poisson and Gaussian. Using maximum likelihood estimation, two similarity measures, maximum likelihood with a Poisson (MLP) and maximum likelihood with Gaussian (MLG), distribution are derived. Further, we investigate the merit of the model-based registration approach for data obtained with current imaging equipment and doses by comparing the performance of the similarity measures derived to that of the Pearson correlation coefficient (ICC) on accurately collected data of an anthropomorphic phantom of the pelvis and on patient data. Results: Registration accuracy was similar for all three similarity measures and surpassed current clinical requirements of 3 mm for pelvic sites. For pose determination experiments with a kilovoltage (kV) cone-beam CT (CBCT) and kV projection radiographs of the phantom in the anterior-posterior (AP) view, registration accuracies were 0.42 mm (MLP), 0.29 mm (MLG), and 0.29 mm (ICC). For kV CBCT and megavoltage (MV) AP portal images of the same phantom, registration accuracies were 1.15 mm (MLP), 0.90 mm (MLG), and 0.69 mm (ICC). Registration of a kV CT and MV AP portal images of a patient was successful in all instances. Conclusions: The results indicate that high registration accuracy is achievable with multiple methods including methods that are based on a statistical model of a 3D CT and 2D projection images.
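
    The two similarity measures follow directly from the assumed noise models: the Poisson variant maximizes the Poisson log-likelihood of the measured pixel counts given the counts predicted from the 3D CT (via a projection/DRR), and the Gaussian variant uses a signal-dependent Gaussian log-likelihood instead. The sketch below writes both objectives in their generic textbook form; the exact intensity model, scaling and normalization used in the cited work may differ.

```python
import numpy as np
from scipy.special import gammaln

def loglik_poisson(measured, predicted):
    """Poisson log-likelihood of measured pixel counts given the DRR-predicted
    mean counts (MLP-style similarity, up to the details of the cited model)."""
    lam = np.clip(predicted, 1e-9, None)
    return np.sum(measured * np.log(lam) - lam - gammaln(measured + 1.0))

def loglik_gaussian(measured, predicted):
    """Gaussian log-likelihood with signal-dependent variance var = predicted,
    a common large-count approximation to Poisson noise (MLG-style)."""
    lam = np.clip(predicted, 1e-9, None)
    return -0.5 * np.sum((measured - lam) ** 2 / lam + np.log(2 * np.pi * lam))
```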

  4. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
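
    The convolution step itself is straightforward once the cumulated-activity map and a voxel dose point kernel are defined on the same grid: the absorbed-dose map is their 3D convolution, which the 3D-DFT approach evaluates with FFTs. Below is a minimal NumPy sketch of that step only (zero-padded linear convolution with a centred kernel); kernel generation, unit handling and the quantitative SPECT reconstruction are outside its scope.

```python
import numpy as np

def dose_by_fft_convolution(activity, kernel):
    """Voxel dose = 3D convolution of the cumulated-activity map with a dose
    point kernel, evaluated with FFTs. Both arrays must share the same voxel
    grid; the kernel is assumed to be centred in its array."""
    shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]  # zero-pad
    D = np.fft.irfftn(np.fft.rfftn(activity, shape) * np.fft.rfftn(kernel, shape),
                      shape)
    # Crop back to the activity grid, accounting for the centred kernel.
    start = [k // 2 for k in kernel.shape]
    slices = tuple(slice(s, s + a) for s, a in zip(start, activity.shape))
    return D[slices]
```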

  5. Hyper-hemispheric lens distortion model for 3D-imaging SPAD-array-based applications

    Pernechele, Claudio; Villa, Federica A.

    2015-09-01

    Panoramic omnidirectional lenses have the typical drawback of obscuring the frontal view, producing the classic "donut-shape" image in the focal plane. We realized a panoramic lens in which the frontal field is made available to be imaged in the focal plane together with the panoramic field, producing a FoV of 360° in azimuth and 270° in elevation; it therefore has the capabilities of a fish-eye lens plus those of a panoramic lens: we call it a hyper-hemispheric lens. We built and tested an all-spherical hyper-hemispheric lens. The all-spherical configuration suffers from the typical issues of all ultra-wide-angle lenses: there is large distortion at high view angles. The fundamental origin of the optical problems is that the chief-ray angles on the object side are not preserved when passing through the optics preceding the aperture stop (fore-optics). This effect produces an image distortion in the focal plane, with the focal length changing along the elevation angles. Moreover, the entrance pupil shifts at large angles, where the paraxial approximation is no longer valid, and tracing the rays appropriately requires some effort from the optical designer. It should be noted that this distortion is not a source-point aberration: it is present also in well-corrected optical lenses. The image distortion may be partially corrected using an aspheric surface. We describe here how we correct it for our original hyper-hemispheric lens by designing an aspheric surface within the optical train, optimized for Single Photon Avalanche Diode (SPAD) array-based imaging applications.

  6. 3D Assessment of Mandibular Growth Based on Image Registration: A Feasibility Study in a Rabbit Model

    I. Kim

    2014-01-01

    Background. Our knowledge of mandibular growth mostly derives from cephalometric radiography, which has inherent limitations due to the two-dimensional (2D) nature of the measurement. Objective. To assess 3D morphological changes occurring during growth in a rabbit mandible. Methods. Serial cone-beam computerised tomographic (CBCT) images were made of two New Zealand white rabbits, at baseline and eight weeks after surgical implantation of 1 mm diameter metallic spheres as fiducial markers. A third animal acted as an unoperated (no implant) control. CBCT images were segmented and registered in 3D (implant superimposition and Procrustes method), and the remodelling pattern was described using color maps. Registration accuracy was quantified by the maximum of the mean minimum distances and by the Hausdorff distance. Results. The mean error for image registration was 0.37 mm and never exceeded 1 mm. The implant-based superimposition showed that most remodelling occurred at the mandibular ramus, with bone apposition posteriorly and vertical growth at the condyle. Conclusion. We propose a method to quantitatively describe bone remodelling in three dimensions, based on the use of bone implants as fiducial markers and CBCT as the imaging modality. The method is feasible and represents a promising approach for experimental studies comparing baseline growth patterns and testing the effects of growth-modification treatments.

  7. 3D Image Synthesis for B—Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-rep objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space into a 3D image in 3D discrete space.

  8. 3D Model Assisted Image Segmentation

    Jayawardena, Srimal; Hutter, Marcus

    2012-01-01

    The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation...

  9. Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation

    The main thrust for this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are the lower stopping power and photo-fraction which affect both sensitivity and spatial resolution. However, in 3D PET imaging where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels
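
    The noise-equivalent count (NEC) figure of merit referred to above is conventionally computed from the true (T), scattered (S) and random (R) coincidence rates. The helper below states the standard formula; the value of k and the way the individual rates were estimated in this simulation study are not given in the record, so the default is an assumption.

```python
def noise_equivalent_counts(trues, scatters, randoms, k=2.0):
    """Standard NEC figure of merit: NEC = T^2 / (T + S + k*R).
    k = 2 is typical for delayed-window randoms subtraction, k = 1 for a
    noiseless randoms estimate; the record does not state which was used."""
    return trues ** 2 / (trues + scatters + k * randoms)
```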

  10. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    3D roadmaps provided by pre-operative volumetric data aligned with fluoroscopy help visualization and navigation in Interventional Cardiology (IC), especially when contrast agent injection, used to highlight the coronary vessels, cannot be applied systematically during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for the specific vessel(s) of interest during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid-body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and the corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guide wire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that the distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses against a ground-truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for difficult cases of occluded vessels without injection of contrast agent.
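
    The distance-transform formulation can be sketched in a few lines: a 2D map that is zero on the identified vessel centreline and grows away from it supplies the point-to-curve distances needed in each ICP iteration without explicit nearest-neighbour searches. The SciPy snippet below illustrates that cost term only (nearest-pixel lookup, points given as (x, y) detector coordinates); gating, pose parameterization and the optimization loop of the cited method are not shown.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def centerline_cost(vessel_mask_2d, projected_pts):
    """Distance-transform cost for ICP-style 2D-3D vessel registration:
    zero on the 2D centreline and increasing away from it, summed over the
    projected 3D centreline points.  projected_pts is an (N, 2) array of
    (x, y) pixel coordinates."""
    dist = distance_transform_edt(~vessel_mask_2d.astype(bool))
    rows = np.clip(np.round(projected_pts[:, 1]).astype(int), 0, dist.shape[0] - 1)
    cols = np.clip(np.round(projected_pts[:, 0]).astype(int), 0, dist.shape[1] - 1)
    return dist[rows, cols].sum()
```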

  11. Automatic histogram-based segmentation of white matter hyperintensities using 3D FLAIR images

    Simões, Rita; Slump, Cornelis; Moenninghoff, Christoph; Wanke, Isabel; Dlugaj, Martha; Weimar, Christian

    2012-03-01

    White matter hyperintensities are known to play a role in the cognitive decline experienced by patients suffering from neurological diseases. Therefore, accurately detecting and monitoring these lesions is of importance. Automatic methods for segmenting white matter lesions typically use multimodal MRI data. Furthermore, many methods use a training set to perform a classification task or to determine necessary parameters. In this work, we describe and evaluate an unsupervised segmentation method that is based solely on the histogram of FLAIR images. It approximates the histogram by a mixture of three Gaussians in order to find an appropriate threshold for white matter hyperintensities. We use a context-sensitive Expectation-Maximization method to determine the Gaussian mixture parameters. The segmentation is subsequently corrected for false positives using the knowledge of the location of typical FLAIR artifacts. A preliminary validation with the ground truth on 6 patients revealed a Similarity Index of 0.73 +/- 0.10, indicating that the method is comparable to others in the literature which require multimodal MRI and/or a preliminary training step.
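
    The core of the method — fitting three Gaussians to the FLAIR intensity histogram and deriving a hyperintensity threshold from the fitted parameters — can be sketched with a plain EM-based Gaussian mixture. The snippet below is only such a sketch: the cited work uses a context-sensitive EM variant and a subsequent artifact-based false-positive correction, and the choice of component and the mean + k*sigma rule here are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def wmh_threshold(flair_intensities, k_sigma=2.0):
    """Fit a three-Gaussian mixture to brain-tissue FLAIR intensities and
    derive a hyperintensity threshold from the brightest fitted component
    (mean + k*sigma).  The exact rule in the cited work may differ."""
    x = np.asarray(flair_intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    brightest = np.argmax(gmm.means_.ravel())
    mu = gmm.means_.ravel()[brightest]
    sigma = np.sqrt(gmm.covariances_.ravel()[brightest])
    return mu + k_sigma * sigma
```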

  12. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...

  13. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe pattern and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcam respectively, and then were used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were further merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach. PMID:26560789
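
    The phase maps come from the standard N-step phase-shifting relation (here N = 9, matching the 2*pi/9 step in the record): the wrapped phase at each pixel is the arctangent of weighted sums of the recorded intensities. A minimal NumPy version of that step is shown below; phase unwrapping, projector-camera triangulation and the point-cloud merging are separate stages of the pipeline, and the sign convention depends on the assumed fringe model.

```python
import numpy as np

def wrapped_phase(images):
    """Standard N-step phase-shifting formula for fringe images
    I_n = A + B*cos(phi + 2*pi*n/N):
        phi = atan2(-sum_n I_n*sin(2*pi*n/N), sum_n I_n*cos(2*pi*n/N))
    Returns the wrapped phase map."""
    imgs = np.asarray(images, dtype=float)            # shape (N, H, W)
    n = np.arange(len(imgs)).reshape(-1, 1, 1)
    delta = 2.0 * np.pi * n / len(imgs)
    num = np.sum(imgs * np.sin(delta), axis=0)
    den = np.sum(imgs * np.cos(delta), axis=0)
    return np.arctan2(-num, den)
```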

  14. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-01-01

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based, treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library were within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm on CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters such as independently verifying active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and process in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences in 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid more than 0.6% dwell time changes. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR

  15. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr;

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these...... approaches depend on a 3D volume reconstruction step during the slice alignment. This is both computationally expensive and makes the convergence of the registration process poorly defined. In this paper our key contribution is a new approach which considers the collective alignment of all slices directly......, via shared structure in their intersections, rather than to an estimated 3D volume. We derive an analytical expression for the gradient of the collective similarity of the slices along their intersections, with respect to the 3D location and orientation of each 2D slice. We include examples of the...

  16. Novel methodology for 3D reconstruction of carotid arteries and plaque characterization based upon magnetic resonance imaging carotid angiography data.

    Sakellarios, Antonis I; Stefanou, Kostas; Siogkas, Panagiotis; Tsakanikas, Vasilis D; Bourantas, Christos V; Athanasiou, Lambros; Exarchos, Themis P; Fotiou, Evangelos; Naka, Katerina K; Papafaklis, Michail I; Patterson, Andrew J; Young, Victoria E L; Gillard, Jonathan H; Michalis, Lampros K; Fotiadis, Dimitrios I

    2012-10-01

    In this study, we present a novel methodology that allows reliable segmentation of the magnetic resonance images (MRIs) for accurate fully automated three-dimensional (3D) reconstruction of the carotid arteries and semiautomated characterization of plaque type. Our approach uses active contours to detect the luminal borders in the time-of-flight images and the outer vessel wall borders in the T(1)-weighted images. The methodology incorporates the connecting components theory for the automated identification of the bifurcation region and a knowledge-based algorithm for the accurate characterization of the plaque components. The proposed segmentation method was validated in randomly selected MRI frames analyzed offline by two expert observers. The interobserver variability of the method for the lumen and outer vessel wall was -1.60%±6.70% and 0.56%±6.28%, respectively, while the Williams Index for all metrics was close to unity. The methodology implemented to identify the composition of the plaque was also validated in 591 images acquired from 24 patients. The obtained Cohen's k was 0.68 (0.60-0.76) for lipid plaques, while the time needed to process an MRI sequence for 3D reconstruction was only 30 s. The obtained results indicate that the proposed methodology allows reliable and automated detection of the luminal and vessel wall borders and fast and accurate characterization of plaque type in carotid MRI sequences. These features render the currently presented methodology a useful tool in the clinical and research arena. PMID:22617149

  17. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David;

    2007-01-01

    and finds the optimal hardware and light source setup before investing in expensive equipment and field experiments. It was expected to be a valuable tool to structure the otherwise incomprehensibly large information space and to see relationships between parameter configurations and crop features. Images...... for in the simulation. However, there were exceptions where there were structural differences between the virtual plant and the real plant that were unaccounted for by its category. The test framework was evaluated to be a valuable tool to uncover information from complex data structures....

  18. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr; Rousseau, F.; Glen, O. A.; Barkovich, A. J.; Studholme, Colin

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these...

  19. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables the spectral information and 3D spatial information of an incoherently illuminated or self-luminous object to be obtained simultaneously. Using this method, we can obtain the multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of the spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  20. Miniaturized 3D microscope imaging system

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35x35x105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen is captured in a single shot for ease of use. With the light-field raw data and the accompanying program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely distinguish its depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light-field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance the pixel usage efficiency and reduce the crosstalk between neighboring microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different color fluorescence particles separated by a cover glass within a 600 μm range, show its focal stacks, and determine their 3-D positions.

  1. 3D Buildings Extraction from Aerial Images

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach is that each building can be processed individually, so the parameters of the building-feature extraction can be set more precisely for each area. At an early stage, the presented technique extracts line segments only inside areas specified manually. The rooftop hypothesis is then used to determine a subset of quadrangles, which could form building roofs, from the set of lines and corners extracted in the previous stage. After all potential roof shapes in all image overlaps have been collected, epipolar geometry is applied to find matches between images. This allows an accurate selection of building roofs, removing false-positive ones, and the identification of their global 3D coordinates given the camera internal parameters and coordinates. The last step of the image matching is based on geometrical constraints rather than traditional correlation. Correlation is applied only in some highly restricted areas in order to determine coordinates more precisely, thereby significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of aerial images of Milan and shows highly accurate results.

  2. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan. Advances and obstacles

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. (author)

  3. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan: advances and obstacles.

    Ohno, Tatsuya; Toita, Takafumi; Tsujino, Kayoko; Uchida, Nobue; Hatano, Kazuo; Nishimura, Tetsuo; Ishikura, Satoshi

    2015-11-01

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. PMID:26265660

  4. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    Li, Lei-Gang; Liang, Jin; Guo, Xiang; Guo, Cheng; Hu, Hao; Tang, Zheng-Zong

    2014-06-01

    In this paper, a new non-contact scheme, based on 3D digital image correlation technology, is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, each rigidly connected and calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern is designed to adapt to the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft vertical tail all the time, a camera-position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orient all the cameras' exterior parameters to a unified coordinate system in real time. Besides, owing to the excessively inclined camera axis and the vertical camera arrangement, a weak correlation between the high-position image and the low-position image occurs. In this paper, a new dual-temporal efficient matching method, combining the principle of seed-point spreading, is proposed to achieve the matching of weakly correlated images. A novel system is developed and a simulation test in the laboratory was carried out to verify the proposed scheme.

  5. Full-field wing deformation measurement scheme for in-flight cantilever monoplane based on 3D digital image correlation

    In this paper, a new non-contact scheme, based on 3D digital image correlation technology, is presented to measure the full-field wing deformation of in-flight cantilever monoplanes. Because of the special structure of the cantilever wing, two conjugated camera groups, each rigidly connected and calibrated as an ensemble, are installed on the vertical fin of the aircraft and record the whole measurement. First, a type of pre-stretched target and speckle pattern is designed to adapt to the oblique camera view for accurate detection and correlation. Then, because the measurement cameras swing with the aircraft vertical tail all the time, a camera-position self-correction method (using control targets sprayed on the back of the aircraft) is designed to orient all the cameras' exterior parameters to a unified coordinate system in real time. Besides, owing to the excessively inclined camera axis and the vertical camera arrangement, a weak correlation between the high-position image and the low-position image occurs. In this paper, a new dual-temporal efficient matching method, combining the principle of seed-point spreading, is proposed to achieve the matching of weakly correlated images. A novel system is developed and a simulation test in the laboratory was carried out to verify the proposed scheme. (paper)

  6. Image-Based 3D Modeling as a Documentation Method for Zooarchaeological Remains in Waste-Related Contexts

    Stella Macheridis

    2015-12-01

    During the last twenty years archaeology has experienced a technological revolution that spans scientific achievements and day-to-day practices. The tools and methods from this digital change have also strongly impacted archaeology. Image-based 3D modeling is becoming more common when documenting archaeological features but is still not implemented as a standard in field excavation projects. When it comes to integrating zooarchaeological perspectives into the interpretational process in the field, this type of documentation is a powerful tool, especially regarding visualization related to reconstruction and resolution. Also, with the implementation of image-based 3D modeling, the use of digital documentation in the field has been proven to be time- and cost-effective (e.g., De Reu et al. 2014; De Reu et al. 2013; Dellepiane et al. 2013; Verhoeven et al. 2012). Few studies have been published on the digital documentation of faunal remains in archaeological contexts. As a case study, the excavation of the infill of a clay bin from building 102 in the Neolithic settlement of Çatalhöyük is presented. Alongside traditional documentation, the infill was photographed in sequence at every second centimeter of soil removal. The photographs were processed with Agisoft Photoscan. Seven models were made, enabling a reconstruction of the excavation of this context. This technique can be a powerful documentation tool, including for recording observations of zooarchaeological significance such as markers of taphonomic processes. An important methodological advantage in this regard is the potential to measure bones in situ for analysis after excavation.

  7. An object-based approach to image/video-based synthesis and processing for 3-D and multiview televisions

    Chan, SC; Ng, KT; Ho, KL; Gan, ZF; Shum, HY

    2009-01-01

    This paper proposes an object-based approach to a class of dynamic image-based representations called "plenoptic videos," where the plenoptic video sequences are segmented into image-based rendering (IBR) objects each with its image sequence, depth map, and other relevant information such as shape and alpha information. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. Moreover, the rendering...

  8. 3D nanoscale imaging of biological samples with laboratory-based soft X-ray sources

    Dehlinger, Aurélie; Blechschmidt, Anne; Grötzsch, Daniel; Jung, Robert; Kanngießer, Birgit; Seim, Christian; Stiel, Holger

    2015-09-01

    In microscopy, where the theoretical resolution limit depends on the wavelength of the probing light, radiation in the soft X-ray regime can be used to analyze samples that cannot be resolved with visible-light microscopes. In the case of soft X-ray microscopy in the water window, the energy range of the radiation lies between the absorption edges of carbon (at 284 eV, 4.36 nm) and oxygen (543 eV, 2.34 nm). As a result, carbon-based structures, such as biological samples, possess strong absorption, whereas e.g. water is more transparent to this radiation. Microscopy in the water window therefore allows the structural investigation of aqueous samples with resolutions of a few tens of nanometers and a penetration depth of up to 10 μm. The development of highly brilliant laser-produced plasma sources has enabled the transfer of X-ray microscopy, formerly bound to synchrotron sources, to the laboratory, which opens access to this method for a broader scientific community. The Laboratory Transmission X-ray Microscope at the Berlin Laboratory for innovative X-ray technologies (BLiX) runs with a laser-produced nitrogen plasma that emits radiation in the soft X-ray regime. The mentioned high penetration depth can be exploited to analyze biological samples in their natural state and from several projection angles. The obtained tomogram is the key to a more precise and global analysis of samples originating from various fields of life science.

  9. Significant acceleration of 2D-3D registration-based fusion of ultrasound and x-ray images by mesh-based DRR rendering

    Kaiser, Markus; John, Matthias; Borsdorf, Anja; Mountney, Peter; Ionasec, Razvan; Nöttling, Alois; Kiefer, Philipp; Seeburger, Jörg; Neumuth, Thomas

    2013-03-01

    For transcatheter-based minimally invasive procedures in structural heart disease ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and the catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRR) of the 3D object not via volume ray-casting but instead via a fast rendering of triangular meshes. This is possible, because in the setting for TEE/X-ray fusion the 3D geometry of the ultrasound probe is known in advance and their main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on the results, a TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registrations, e.g. the registration of implant models with X-ray images.
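
    The gradient correlation measure named above is commonly computed as the mean of the normalized cross-correlations of the horizontal and vertical image gradients of the DRR and the X-ray image. The short sketch below states that definition in generic form; it is not the authors' implementation, and any windowing or masking they apply is omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def gradient_correlation(drr, xray):
    """Gradient correlation similarity: mean of the NCCs of the row-wise and
    column-wise image gradients of the rendered DRR and the X-ray image."""
    grad_r_drr, grad_c_drr = np.gradient(drr)
    grad_r_xray, grad_c_xray = np.gradient(xray)
    return 0.5 * (ncc(grad_r_drr, grad_r_xray) + ncc(grad_c_drr, grad_c_xray))
```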

  10. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  11. Use of 3D imaging in CT of the acute trauma patient: impact of a PACS-based software package.

    Soto, Jorge A; Lucey, Brain C; Stuhlfaut, Joshua W; Varghese, Jose C

    2005-04-01

    To evaluate the impact of a picture archiving and communication systems (PACS)-based software package on the requests for 3D reconstructions of multidetector CT (MDCT) data sets in the emergency radiology of a level 1 trauma center, we reviewed the number and type of physician requests for 3D reconstructions of MDCT data sets for patients admitted after sustaining multiple trauma, during a 12-month period (January 2003-December 2003). During the first 5 months of the study, 3D reconstructions were performed in dedicated workstations located separately from the emergency radiology CT interpretation area. During the last 7 months of the study, reconstructions were performed online by the attending radiologist or resident on duty, using a software package directly incorporated into the PACS workstations. The mean monthly number of 3D reconstructions requested during the two time periods was compared using Student's t test. The monthly mean +/- SD of 3D reconstructions performed before and after 3D software incorporation into the PACS was 34+/-7 (95% CI, 10-58) and 132+/-31 (95% CI, 111-153), respectively. This difference was statistically significant (p<0.0001). In the multiple trauma patient, implementation of PACS-integrated software increases utilization of 3D reconstructions of MDCT data sets. PMID:16028324

  12. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method and, from these matched features, compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices, which are computed from the fundamental matrix estimated with the normalized 8-point algorithm and a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from an essential matrix is only defined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems that need not only depth information but also camera motion parameters in real time.
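
    For the essential-matrix step described above, a hedged sketch using OpenCV's RANSAC-based estimator is given below; pts1, pts2 and K are assumed inputs (tracked feature points and the known intrinsics), and the translation scale still has to be fixed separately, e.g. from the disparity images.

    ```python
    import cv2

    def relative_pose(pts1, pts2, K):
        """Estimate relative camera pose from matched points (Nx2 float arrays,
        e.g. from KLT tracking) and the known intrinsic matrix K."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        # recoverPose returns R and a unit-norm t; the true scale must be fixed
        # separately, e.g. by the d-motion constraint described in the paper.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t
    ```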

  13. Volumetric label-free imaging and 3D reconstruction of mammalian cochlea based on two-photon excitation fluorescence microscopy

    The visualization of the delicate structure and spatial relationship of intracochlear sensory cells has relied on the laborious procedures of tissue excision, fixation, sectioning and staining for light and electron microscopy. Confocal microscopy is advantageous for its high resolution and deep penetration depth, yet disadvantageous due to the necessity of exogenous labeling. In this study, we present the volumetric imaging of rat cochlea without exogenous dyes, using a near-infrared femtosecond laser as the excitation mechanism and endogenous two-photon excitation fluorescence (TPEF) as the contrast mechanism. We find that TPEF exhibits strong contrast, allowing cellular and even subcellular resolution imaging of the cochlea, differentiating cell types, and visualizing delicate structures and the radial nerve fiber. Our results further demonstrate that 3D reconstruction rendered with z-stacks of optical sections reveals fine structures and spatial relationships more clearly and allows morphometric analysis to be performed easily. The TPEF-based optical biopsy technique offers great potential for new and sensitive diagnostic tools for hearing loss or hearing disorders, especially when combined with fiber-based microendoscopy. (paper)

  14. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    Li, Ruijiang; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-01-01

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied on the reference image, lead to new volumetric images. We then can reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. Our algorithm was implemented on graphics processing units...
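
    The PCA representation of the deformation vector fields can be sketched in a few lines of numpy, as below; the array shapes and the synthetic DVFs are assumptions for illustration only.

    ```python
    import numpy as np

    # dvfs: (N-1, 3*V) array, each row a flattened deformation vector field
    # relating the reference phase to one of the other breathing phases
    # (V = number of voxels). Values here are synthetic placeholders.
    N_phases, V = 10, 1000
    dvfs = np.random.randn(N_phases - 1, 3 * V)

    mean_dvf = dvfs.mean(axis=0)
    U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
    eigenvectors = Vt[:3]            # keep a few principal modes

    # Any new DVF is parameterized by a short coefficient vector w; during
    # reconstruction, w is optimized so that the projection of the deformed
    # reference image matches the measured x-ray projection.
    w = np.array([1.5, -0.3, 0.7])
    new_dvf = mean_dvf + w @ eigenvectors
    ```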

  15. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    Cambridge: The Electromagnetics Academy, 2010, pp. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords: 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  16. Feasibility of 3D harmonic contrast imaging

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suit

  17. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  18. The role of 3-D imaging and computer-based postprocessing for surgery of the liver and pancreas

    Cross-sectional imaging-based navigation and virtual reality planning tools are well established in the surgical routine of orthopedic surgery and neurosurgery. In various procedures, they have achieved significant clinical relevance and efficacy and have enhanced the discipline's resection capabilities. In abdominal surgery, however, these tools have gained little traction so far. Even with the advantage of fast and high-resolution cross-sectional liver and pancreas imaging, it remains unclear whether 3D planning and interactive planning tools might increase the precision and safety of liver and pancreas surgery. The inability to simply transfer the methodology from orthopedic surgery or neurosurgery is mainly a result of intraoperative organ movement and shifting, and the corresponding technical difficulties in the on-line applicability of presurgical cross-sectional imaging data. For the interactive planning of liver surgery, three systems are partly in daily routine use: HepaVision2 (MeVis GmbH, Bremen), LiverLive (Navidez Ltd., Slovenia) and OrgaNicer (German Cancer Research Center, Heidelberg). All these systems have realized a semi- or fully automatic liver-segmentation procedure to visualize liver segments, vessel trees, resected volumes or critical residual organ volumes, either for preoperative planning or intraoperative visualization. Data acquisition is mainly based on computed tomography. Three-dimensional navigation for intraoperative surgical guidance with ultrasound is part of clinical testing. There are only a few reports on transferring this visualization to the pancreas, probably because of difficulties with the segmentation routine due to inflammation or organ-exceeding tumor growth. With this paper, we aim to evaluate and demonstrate the present status of software planning tools and pathways for future pre- and intraoperative resection planning in liver and pancreas surgery. (orig.)

  19. Distributed microscopy: toward a 3D computer-graphic-based multiuser microscopic manipulation, imaging, and measurement system

    Sulzmann, Armin; Carlier, Jerome; Jacot, Jacques

    1996-10-01

    The aim of this project is to telecontrol the movements of a microscope in 3D space in order to manipulate and measure microsystems or micro parts, aided by multi-user virtual reality (VR) environments. Microsystems are presently gaining in interest. They are small, independent modules incorporating various functions, such as electronic, micromechanical, data processing, optical, chemical, medical and biological functions. Even as manufacturing technologies improve, measurement of the small structures to ensure process quality remains key information for development. To measure such micro structures, powerful microscopes are needed, and the use of highly magnifying computerized microscopes is expensive. To ensure high-quality measurements and distribute the acquired information to multiple users, our proposed system is divided into three parts. The first is the virtual reality microscopic environment (VRME)-based user interface on an SGI workstation, used to prepare the manipulations and measurements. The second is the computerized light microscope with the vision system inspecting the scene and acquiring images of the specimen; newly developed vision algorithms are used to analyze micro structures in the scene corresponding to the known a priori model, and the extracted position and shape of the objects are transmitted as feedback to the user of the VRME system to update the virtual environment. The internet daemon is the third part of the system and distributes information about the position of the micro structures, their shape and the images to the connected users, who themselves may interact with the microscope (turning and displacing the specimen on a moving platform, or adding their own structures to the scene for comparison). The key idea behind our project VRME is to use the intuitiveness and the 3D visualization of VR environments coupled with a vision system to perform measurements of micro structures at high accuracy. The direct

  20. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
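
    A layer-wise porosity estimate of the kind described above can be computed directly from a binary (pore/solid) reconstruction; the sketch below uses a synthetic numpy volume and assumes axis 0 is orthogonal to the membrane surface.

    ```python
    import numpy as np

    # volume: 3D boolean array from the reconstructed image stack,
    # True = pore (void) voxel, False = material. Synthetic placeholder here.
    volume = np.random.rand(50, 200, 200) > 0.7

    # Porosity per layer along the axis orthogonal to the membrane surface.
    porosity_per_layer = volume.reshape(volume.shape[0], -1).mean(axis=1)
    for z, p in enumerate(porosity_per_layer[:5]):
        print(f"layer {z}: porosity = {p:.2%}")
    ```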

  1. Neural Network Based 3D Surface Reconstruction

    Vincy Joseph

    2009-11-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the varying albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  2. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    Reischig, Péter; Helfen, Lukas; Wallert, Arie; Baumbach, Tilo; Dik, Joris

    2013-06-01

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art.

  3. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat-affected zones, and dendrites in a laser-assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments).
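
    A hedged sketch of the per-pattern statistics underlying such intensity maps is given below; the choice of a median filter for the "filtered" intensity is an illustrative assumption, not necessarily the authors' exact filter.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def pattern_statistics(pattern):
        """Return the average recorded intensity and the average of a
        background-suppressed ('filtered') version of one Laue pattern."""
        avg_recorded = pattern.astype(np.float64).mean()
        background = median_filter(pattern.astype(np.float64), size=3)
        avg_filtered = np.clip(pattern - background, 0, None).mean()
        return avg_recorded, avg_filtered

    # One value pair per raster-scan position can then be plotted as a 2D map
    # while the Laue patterns are still being collected.
    ```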

  4. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    Reischig, Peter [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); Delft University of Technology, Department of Materials Science and Engineering, Delft (Netherlands); Helfen, Lukas [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); European Synchrotron Radiation Facility, BP 220, Grenoble Cedex (France); Wallert, Arie [Rijksmuseum, Postbus 74888, Amsterdam (Netherlands); Baumbach, Tilo [Karlsruhe Institute of Technology, Institute for Photon Science and Synchrotron Radiation, Eggenstein-Leopoldshafen (Germany); Dik, Joris [Delft University of Technology, Department of Materials Science and Engineering, Delft (Netherlands)

    2013-06-15

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art. (orig.)

  5. High-resolution non-invasive 3D imaging of paint microstructure by synchrotron-based X-ray laminography

    The characterisation of the microstructure and micromechanical behaviour of paint is key to a range of problems related to the conservation or technical art history of paintings. Synchrotron-based X-ray laminography is demonstrated in this paper to image the local sub-surface microstructure in paintings in a non-invasive and non-destructive way. Based on absorption and phase contrast, the method can provide high-resolution 3D maps of the paint stratigraphy, including the substrate, and visualise small features, such as pigment particles, voids, cracks, wood cells, canvas fibres etc. Reconstructions may be indicative of local density or chemical composition due to increased attenuation of X-rays by elements of higher atomic number. The paint layers and their interfaces can be distinguished via variations in morphology or composition. Results of feasibility tests on a painting mockup (oak panel, chalk ground, vermilion and lead white paint) are shown, where lateral and depth resolution of up to a few micrometres is demonstrated. The method is well adapted to study the temporal evolution of the stratigraphy in test specimens and offers an alternative to destructive sampling of original works of art. (orig.)

  6. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of ... treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems; this subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem ..., where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed.
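
    The disparity-to-height relation used in such stereo reconstruction can be sketched as follows; the focal length, baseline and disparities are placeholder values.

    ```python
    # For a rectified image pair with focal length f (pixels) and baseline B
    # (metres), depth Z = f * B / d for a matched point with disparity d (pixels).
    import numpy as np

    f, B = 1200.0, 0.5                       # assumed camera parameters
    disparity = np.array([12.0, 8.0, 4.0])   # matched-point disparities (pixels)
    depth = f * B / disparity                # distance along the viewing axis (metres)
    print(depth)
    ```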

  7. Backhoe 3D "gold standard" image

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  8. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    Ando, Masami; Bando, Hiroko; Chen, Zhihua; Chikaura, Yoshinori; Choi, Chang-Hyuk; Endo, Tokiko; Esumi, Hiroyasu; Gang, Li; Hashimoto, Eiko; Hirano, Keiichi; Hyodo, Kazuyuki; Ichihara, Shu; Jheon, SangHoon; Kim, HongTae; Kim, JongKi; Kimura, Tatsuro; Lee, ChangHyun; Maksimenko, Anton; Ohbayashi, Chiho; Park, SungHwan; Shimao, Daisuke; Sugiyama, Hiroshi; Tang, Jintian; Ueno, Ei; Yamasaki, Katsuhito; Yuasa, Tetsuya

    2007-01-01

    The first observation of micro papillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in-situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm × 22 mm × 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that allows only refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based reconstruction algorithm for x-ray CT was created. The angular information was acquired by the x-ray optics of diffraction-enhanced imaging (DEI). The number of data acquired for each reconstruction was 900. A reconstructed CT image may include ductus lactiferi, micro calcification and the breast gland. This modality has the potential to open up new clinical and pathological diagnoses using x-rays, offering more precise inspection and detection of early signs of breast cancer.

  9. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    The first observation of micro papillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in-situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm x 22 mm x 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that allows only refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new refraction-based reconstruction algorithm for x-ray CT was created. The angular information was acquired by the x-ray optics of diffraction-enhanced imaging (DEI). The number of data acquired for each reconstruction was 900. A reconstructed CT image may include ductus lactiferi, micro calcification and the breast gland. This modality has the potential to open up new clinical and pathological diagnoses using x-rays, offering more precise inspection and detection of early signs of breast cancer.

  10. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  11. A 3D image analysis tool for SPECT imaging

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
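
    A minimal sketch of intensity-based thresholding followed by a volume measurement, in the spirit of the tools described above, is given below (fuzzy connectedness is not reproduced); the SPECT array, threshold and voxel volume are illustrative assumptions.

    ```python
    import numpy as np

    def segment_and_measure(spect, threshold, voxel_volume_ml):
        """Threshold a 3D SPECT array and estimate the segmented volume."""
        mask = spect >= threshold
        volume_ml = mask.sum() * voxel_volume_ml
        return mask, volume_ml

    spect = np.random.rand(64, 64, 64)          # placeholder SPECT volume
    _, gastric_volume = segment_and_measure(spect, 0.8, voxel_volume_ml=0.05)
    print(f"estimated volume: {gastric_volume:.1f} ml")
    ```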

  12. 3D-LSI technology for image sensor

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  13. A generic synthetic image generator package for the evaluation of 3D Digital Image Correlation and other computer vision-based measurement techniques

    Garcia, Dorian; Orteu, Jean-José; Robert, Laurent; Wattrisse, Bertrand; Bugarin, Florian

    2013-01-01

    Stereo digital image correlation (also called 3D DIC) is a common measurement technique in experimental mechanics for measuring 3D shapes or 3D displacement/strain fields, in research laboratories as well as in industry. Nevertheless, like most of the optical full-field measurement techniques, 3D DIC suffers from a lack of information about its metrological performances. For the 3D DIC technique to be fully accepted as a standard measurement technique it is of key importance to assess its mea...

  14. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Peter Cloetens

    2011-05-01

    Three-dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic-beam micro-tomography provides several new features. Along with an enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample-environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  15. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging

    Tseng, Hubert; Gage, Jacob A.; Shen, Tsaiwei; Haisler, William L.; Neeley, Shane K.; Shiao, Sue; Chen, Jianbo; Desai, Pujan K.; Liao, Angela; Hebel, Chris; Raphael, Robert M.; Becker, Jeanne L.; Souza, Glauco R.

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5′-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (−control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z’ = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200

  16. Low cost image-based modeling techniques for archaeological heritage digitalization: more than just a good tool for 3d visualization?

    Mariateresa Galizia; Cettina Santagati

    2013-01-01

    This study shows the first results of a research aimed at catching the potentiality of a series of low cost, free and open source tools (such as ARC3D, 123D Catch, Hypr3D). These tools are founded on the SfM (Structure from Motion) techniques and they are able to realize automatically image-based models starting from simple sequences of pictures data sets. Initially born as simple touristic 3D visualization (e.g. Photosynth) of archaeological and/or architectural sites or cultural assets (e.g...

  17. Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel.

    Dong, Xiao-Bin; Ai, Ling-Yu; Kim, Eun-Soo

    2016-02-22

    We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data based on direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for that scene are generated by interfering each color DOI with a reference beam, and summing them all based on Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed, and converted into EIs using a ray mapping-based direct pickup process. These EIs are then optically reconstructed to be a full-color 3-D scene with perspectives on the depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system in the practical application fields of large-scale holographic 3-D displays. PMID:26907021

  18. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Stankiewicz Agnieszka

    2016-06-01

    This paper presents signal processing aspects of the automatic segmentation of retinal layers of the human eye. The paper draws attention to the problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality, and some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using so-called B-scans obtained with the OCT Copernicus HR device.
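
    Graph-based layer segmentation of this kind is often implemented as a minimum-cost path across the B-scan. The sketch below uses a simple dynamic-programming approximation with a gradient-based cost; it illustrates the idea, not the authors' exact algorithm.

    ```python
    import numpy as np

    def trace_layer_boundary(bscan):
        """Trace one layer boundary across a B-scan as a minimum-cost path from
        the left to the right image edge; the cost favours strong dark-to-bright
        vertical intensity transitions."""
        cost = -np.diff(bscan.astype(np.float64), axis=0)
        rows, cols = cost.shape
        acc = np.full_like(cost, np.inf)
        acc[:, 0] = cost[:, 0]
        for c in range(1, cols):                       # accumulate column by column
            for r in range(rows):
                lo, hi = max(0, r - 1), min(rows, r + 2)   # allow +/-1 row per step
                acc[r, c] = cost[r, c] + acc[lo:hi, c - 1].min()
        path = [int(np.argmin(acc[:, -1]))]            # backtrack from cheapest end
        for c in range(cols - 2, -1, -1):
            r = path[-1]
            lo, hi = max(0, r - 1), min(rows, r + 2)
            path.append(lo + int(np.argmin(acc[lo:hi, c])))
        return np.array(path[::-1])                    # boundary row per column
    ```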

  19. View-based 3-D object retrieval

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  20. BM3D Frames and Variational Image Deblurring

    Danielyan, Aram; Egiazarian, Karen

    2011-01-01

    A family of the Block Matching 3-D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of the single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming a valuable potential of BM3D-frames as an advanced image modeling tool.

  1. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    In this article, we present a 3D dental plaster treatment system specific to orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that can manage contours with complex topologies. Secondly, we present two specific treatments performed directly on the obtained 3D model: the automatic correction of the occlusion between the mandible and the maxilla, and the segmentation of the teeth, which allows more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of enabling tele-diagnosis and treatment.

  2. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.

  3. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultrasound imaging result in expensive systems, which limits the more widespread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost ... capable of producing 62+62-element row-column addressed CMUT arrays with negligible charging issues. The arrays include an integrated apodization, which reduces the ghost echoes produced by the edge waves in such arrays by 15.8 dB. The acoustical cross-talk is measured on fabricated arrays, showing a 24 d...

  4. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr [Department of Electrical Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Lee, Jae Young [Department of Radiology, Seoul National University Hospital, Seoul 110-744 (Korea, Republic of)

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  5. Image-Based and Range-Based 3d Modelling of Archaeological Cultural Heritage: the Telamon of the Temple of Olympian ZEUS in Agrigento (italy)

    Lo Brutto, M.; Spera, M. G.

    2011-09-01

    The Temple of Olympian Zeus in Agrigento (Italy) was one of the largest temples and at the same time one of the most original in all of Greek architecture. We do not know exactly what it looked like, because the temple is now almost completely destroyed, but it is very well known for the presence of the Telamons. The Telamons were giant statues (about 8 meters high) probably located outside the temple to fill the intervals between the columns. According to the theory most accredited by archaeologists, the Telamons were both a decorative element and a support for the structure. However, this hypothesis has never been scientifically proven. One Telamon has been reassembled and is shown at the Archaeological Museum of Agrigento. In 2009 a group of researchers at the University of Palermo began a study to test the hypothesis that the Telamons supported the weight of the upper part of the temple. The study consists of a 3D survey of the Telamon, to reconstruct a detailed 3D digital model, and of a structural analysis with the Finite Element Method (FEM) to test the possibility that the Telamon could support the weight of the upper portion of the temple. In this work the authors describe the 3D survey of the Telamon carried out with Range-Based Modelling (RBM) and Image-Based Modelling (IBM). The RBM was performed with a TOF laser scanner, while the IBM was performed with the ZScan system of Menci Software and Image Master of Topcon. Several tests were conducted to analyze the accuracy of the different 3D models and to evaluate the differences between laser scanning and photogrammetric data. Moreover, an appropriate data reduction to generate a 3D model suitable for FEM analysis was tested.

  6. An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components

    Divya Udayan J; HyungSeok KIM; Jee-In KIM

    2015-01-01

    The objective of this research is the rapid reconstruction of ancient buildings of historical importance using a single image. The key idea of our approach is to reduce the infinite solutions that might otherwise arise when recovering a 3D geometry from 2D photographs. The main outcome of our research shows that the proposed methodology can be used to reconstruct ancient monuments for use as proxies for digital effects in applications such as tourism, games, and entertainment, which do not require very accurate modeling. In this article, we consider the reconstruction of ancient Mughal architecture including the Taj Mahal. We propose a modeling pipeline that makes an easy reconstruction possible using a single photograph taken from a single view, without the need to create complex point clouds from multiple images or the use of laser scanners. First, an initial model is automatically reconstructed using locally fitted planar primitives along with their boundary polygons and the adjacency relation among parts of the polygons. This approach is faster and more accurate than creating a model from scratch because the initial reconstruction phase provides a set of structural information together with the adjacency relation, which makes it possible to estimate the approximate depth of the entire structural monument. Next, we use manual extrapolation and editing techniques with modeling software to assemble and adjust different 3D components of the model. Thus, this research opens up the opportunity for the present generation to experience remote sites of architectural and cultural importance through virtual worlds and real-time mobile applications. Variations of a recreated 3D monument to represent an amalgam of various cultures are targeted for future work.

  7. Low cost image-based modeling techniques for archaeological heritage digitalization: more than just a good tool for 3d visualization?

    Mariateresa Galizia

    2013-11-01

    This study shows the first results of a research aimed at catching the potentiality of a series of low cost, free and open source tools (such as ARC3D, 123D Catch, Hypr3D). These tools are founded on the SfM (Structure from Motion) techniques and they are able to realize automatically image-based models starting from simple sequences of pictures data sets. Initially born as simple touristic 3D visualization (e.g. Photosynth) of archaeological and/or architectural sites or cultural assets (e.g. statues, fountains and so on), nowadays they allow to reconstruct impressive photorealistic 3D models in short time and at very low costs. Therefore we have chosen different case studies with various levels of complexity (from the statues to the architectures) in order to start a first testing on the modeling potentiality of these tools.

  8. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
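
    A simplified stand-in for the reference-slice selection step is sketched below: it scores each slice by histogram entropy and picks the maximum, whereas the paper combines entropy with the registration mean square error in an iterative assessment.

    ```python
    import numpy as np

    def slice_entropy(img, bins=256):
        """Shannon entropy of the grey-level histogram of one slice."""
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def pick_reference_slice(stack):
        """Pick the slice with the highest histogram entropy as the reference;
        a simplified stand-in for the iterative entropy/MSE assessment."""
        entropies = [slice_entropy(s) for s in stack]
        return int(np.argmax(entropies)), entropies
    ```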

  9. Accuracy and inter-observer variability of 3D versus 4D cone-beam CT based image-guidance in SBRT for lung tumors

    Sweeney Reinhart A

    2012-06-01

    Background: To analyze the accuracy and inter-observer variability of image-guidance (IG) using 3D or 4D cone-beam CT (CBCT) technology in stereotactic body radiotherapy (SBRT) for lung tumors. Materials and methods: Twenty-one consecutive patients treated with image-guided SBRT for primary and secondary lung tumors were the basis for this study. A respiration-correlated 4D-CT and planning contours served as reference for all IG techniques. Three IG techniques were performed independently by three radiation oncologists (ROs) and three radiotherapy technicians (RTTs). Image-guidance using respiration-correlated 4D-CBCT (IG-4D), with automatic registration of the planning 4D-CT and the verification 4D-CBCT, was considered the gold standard. Results were compared with two IG techniques using 3D-CBCT: (1) manual registration of the planning internal target volume (ITV) contour and the motion-blurred tumor in the 3D-CBCT (IG-ITV); (2) automatic registration of the planning reference CT image and the verification 3D-CBCT (IG-3D). Image quality of 3D-CBCT and 4D-CBCT images was scored on a scale of 1–3, with 1 being best and 3 being worst quality for visual verification of the IGRT results. Results: Image quality was scored significantly worse for 3D-CBCT compared to 4D-CBCT: the worst score of 3 was given in 19% and 7.1% of observations, respectively. Significant differences in target localization were observed between 4D-CBCT and 3D-CBCT based IG: compared to the reference of IG-4D, tumor positions differed by 1.9 mm ± 0.9 mm (3D vector) on average using IG-ITV and by 3.6 mm ± 3.2 mm using IG-3D; results of IG-ITV were significantly closer to the reference IG-4D compared to IG-3D. Differences between the 4D-CBCT and 3D-CBCT techniques increased significantly with larger motion amplitude of the tumor; analogously, differences increased with worse 3D-CBCT image quality scores. Inter-observer variability was largest in SI direction and was

  10. 3D Wavelet-Based Filter and Method

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  11. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  12. Recovering 3D human pose from monocular images

    Agarwal, Ankur; Triggs, Bill

    2006-01-01

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We eva...
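
    As a hedged illustration of regressing pose directly from silhouette descriptors, the sketch below uses kernel ridge regression from scikit-learn as a generic stand-in for the nonlinear regressors studied in the paper; the descriptor and pose dimensions are placeholders.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # X: histogram-of-shape-contexts descriptors of training silhouettes
    # (n_samples x n_features); Y: corresponding 3D pose parameter vectors.
    X = np.random.rand(500, 100)          # placeholder descriptors
    Y = np.random.rand(500, 54)           # placeholder pose parameters

    regressor = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
    regressor.fit(X, Y)                   # supervised nonlinear regression
    pose_estimate = regressor.predict(np.random.rand(1, 100))
    print(pose_estimate.shape)
    ```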

  13. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    Harder, Stine; Paulsen, Rasmus Reinhold

    2015-01-01

    The goal of this thesis is to improve intelligibility for hearing-aid users by individualizing the directional microphone in a hearing aid. The general idea is a three step pipeline for easy acquisition of individually optimized directional filters. The first step is to estimate an individual 3D head model based on 2D images, the second step is to simulate individual head related transfer functions (HRTFs) based on the estimated 3D head model and the final step is to calculate optimal directi...

  14. Hybrid segmentation framework for 3D medical image analysis

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs Prior model and the deformable model. First, Gibbs Prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. Then we create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs Prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images with target objects including the lung, heart, colon, jaw, tumor, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  15. Morphological image processing operators. Reduction of partial volume effects to improve 3D visualization based on CT data

    Aim: The quality of segmentation and three-dimensional reconstruction of anatomical structures in tomographic slices is often impaired by disturbances due to partial volume effects (PVE). The potential for artefact reduction by use of the morphological image processing operators (MO) erosion and dilation is investigated. Results: For all patients under review, the artefacts caused by PVE were significantly reduced by erosion (lung: mean SBR_pre = 1.67, SBR_post = 4.83; brain: SBR_pre = 1.06, SBR_post = 1.29), even with only a small number of iterations. Region dilation was applied to integrate further structures (e.g. at tumor borders) into a configurable neighbourhood for segmentation and quantitative analysis. Conclusions: The MO represent an efficient approach for the reduction of PVE artefacts in 3D-CT reconstructions and allow optimised visualization of individual objects. (orig./AJ)
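
    The erosion and dilation operators themselves are available in scipy; a minimal sketch on a synthetic binary CT segmentation is shown below (the structuring element and iteration count are illustrative choices).

    ```python
    import numpy as np
    from scipy import ndimage

    # mask: binary segmentation of one structure in a CT volume, whose border
    # may be contaminated by partial volume effects. Placeholder object here.
    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[20:44, 20:44, 20:44] = True

    structure = ndimage.generate_binary_structure(3, 1)
    eroded = ndimage.binary_erosion(mask, structure, iterations=2)    # trim the PVE rim
    dilated = ndimage.binary_dilation(mask, structure, iterations=2)  # grow a neighbourhood
    print(mask.sum(), eroded.sum(), dilated.sum())
    ```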

  16. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To improve over-/underbreak detection of roadways and to address the difficulty of roadway data collection, this paper presents a new method for continuous section extraction and over-/underbreak detection based on 3D laser scanning technology and image processing. The method is divided into three steps: Canny edge detection with local axis fitting, continuous section extraction, and over-/underbreak detection of each section. First, after Canny edge detection, a least-squares curve fitting method is used to fit the roadway axis locally. Then the attitude of the local roadway is adjusted so that its axis is consistent with the extraction reference direction, and cross-sections are extracted along that direction. Finally, each extracted cross-section is compared with the design cross-section to complete the over-/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computational cost and ensures that cross-sections are intercepted orthogonally.
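
    The abstract does not give implementation details; the following is a minimal sketch of the local axis-fitting step, assuming the roadway edge points have already been obtained (for example from a Canny edge detector) and using an illustrative polynomial degree and synthetic centreline data.

```python
# Minimal sketch of the "local axis fitting" step: fit a low-order polynomial
# axis to edge/centreline points by least squares and derive the local
# direction along which cross-sections are extracted.  Data and degree are
# illustrative assumptions.
import numpy as np

def fit_local_axis(x, y, degree=2):
    """Least-squares polynomial fit of the roadway axis y = p(x)."""
    coeffs = np.polyfit(x, y, degree)
    axis = np.poly1d(coeffs)
    tangent = axis.deriv()
    return axis, tangent

def section_direction(tangent, x0):
    """Unit normal to the local axis at x0, i.e. the section extraction direction."""
    t = np.array([1.0, tangent(x0)])
    t /= np.linalg.norm(t)
    return np.array([-t[1], t[0]])  # tangent rotated by 90 degrees

if __name__ == "__main__":
    x = np.linspace(0, 50, 200)
    y = 0.01 * x**2 + 0.5 * x + np.random.normal(0, 0.2, x.size)  # noisy centreline
    axis, tangent = fit_local_axis(x, y)
    print("section normal at x=25:", section_direction(tangent, 25.0))
```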

  17. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvis dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed an adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant results. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer preoperative nerve cartography. (orig.)

  18. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvis dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed an adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant results. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer preoperative nerve cartography. (orig.)

  19. A frequency-based approach to locate common structure for 2D-3D intensity-based registration of setup images in prostate radiotherapy

    In many radiotherapy clinics, geometric uncertainties in the delivery of 3D conformal radiation therapy and intensity modulated radiation therapy of the prostate are reduced by aligning the patient's bony anatomy in the planning 3D CT to corresponding bony anatomy in 2D portal images acquired before every treatment fraction. In this paper, we seek to determine if there is a frequency band within the portal images and the digitally reconstructed radiographs (DRRs) of the planning CT in which bony anatomy predominates over non-bony anatomy such that portal images and DRRs can be suitably filtered to achieve high registration accuracy in an automated 2D-3D single portal intensity-based registration framework. Two similarity measures, mutual information and the Pearson correlation coefficient were tested on carefully collected gold-standard data consisting of a kilovoltage cone-beam CT (CBCT) and megavoltage portal images in the anterior-posterior (AP) view of an anthropomorphic phantom acquired under clinical conditions at known poses, and on patient data. It was found that filtering the portal images and DRRs during the registration considerably improved registration performance. Without filtering, the registration did not always converge while with filtering it always converged to an accurate solution. For the pose-determination experiments conducted on the anthropomorphic phantom with the correlation coefficient, the mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters were θx:0.18(0.19) deg., θy:0.04(0.04) deg., θz:0.04(0.02) deg., tx:0.14(0.15) mm, ty:0.09(0.05) mm, and tz:0.49(0.40) mm. The mutual information-based registration with filtered images also resulted in similarly small errors. For the patient data, visual inspection of the superimposed registered images showed that they were correctly aligned in all instances. The results presented in this paper suggest that robust and accurate registration
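
    As an illustration of the filtering-plus-similarity idea described above, the sketch below band-pass filters a DRR and a portal image with a difference of Gaussians and scores their alignment with the Pearson correlation coefficient; the band limits and the synthetic images are assumptions, not the filters used in the paper.

```python
# Minimal sketch: filter both images to a band where bony anatomy is assumed
# to dominate, then score the alignment with the Pearson correlation
# coefficient.  The difference-of-Gaussians band is an illustrative choice.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(image, sigma_low=1.0, sigma_high=4.0):
    """Difference-of-Gaussians band-pass filter."""
    return gaussian_filter(image, sigma_low) - gaussian_filter(image, sigma_high)

def correlation_similarity(drr, portal):
    """Pearson correlation coefficient between the two filtered images."""
    a = bandpass(drr).ravel()
    b = bandpass(portal).ravel()
    return np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drr = rng.normal(size=(256, 256))
    portal = drr + 0.5 * rng.normal(size=(256, 256))  # noisy "acquired" image
    print("similarity:", correlation_similarity(drr, portal))
```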

  20. Performance evaluation of CCD- and mobile-phone-based near-infrared fluorescence imaging systems with molded and 3D-printed phantoms

    Wang, Bohan; Ghassemi, Pejhman; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2016-03-01

    Increasing numbers of devices are emerging which involve biophotonic imaging on a mobile platform. Therefore, effective test methods are needed to ensure that these devices provide a high level of image quality. We have developed novel phantoms for performance assessment of near-infrared fluorescence (NIRF) imaging devices. Resin molding and 3D printing techniques were applied for phantom fabrication. Comparisons between two imaging approaches - a CCD-based scientific camera and an NIR-enabled mobile phone - were made based on evaluation of the contrast transfer function and penetration depth. Optical properties of the phantoms were evaluated, including absorption and scattering spectra and fluorescence excitation-emission matrices. The potential viability of contrast-enhanced biological NIRF imaging with a mobile phone is demonstrated, and color-channel-specific variations in image quality are documented. Our results provide evidence of the utility of novel phantom-based test methods for quantifying image quality in emerging NIRF devices.
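
    One of the metrics mentioned above, the contrast transfer function, reduces to a simple modulation ratio per bar group; the following minimal sketch computes it from assumed peak and trough intensities measured on a line profile.

```python
# Minimal sketch of estimating the contrast transfer function from a line
# profile across a bar-pattern phantom: for each spatial-frequency group the
# modulation is (Imax - Imin) / (Imax + Imin).  The values are illustrative.
import numpy as np

def contrast_transfer(profile_max, profile_min):
    """Modulation for each bar group given its peak and trough intensities."""
    return (profile_max - profile_min) / (profile_max + profile_min)

if __name__ == "__main__":
    # Peak/trough pairs measured for three bar groups of increasing frequency.
    peaks   = np.array([0.90, 0.75, 0.55])
    troughs = np.array([0.10, 0.30, 0.45])
    print("CTF per group:", contrast_transfer(peaks, troughs))
```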

  1. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  2. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. A user-friendly nano-CT image alignment and 3D reconstruction platform based on LabVIEW

    Wang, Shenghao; Zhang, Kai; Wang, Zhili; Gao, Kun; Wu, Zhao; Zhu, Peiping; Wu, Ziyu

    2014-01-01

    X-ray computed tomography at the nanometer scale (nano-CT) offers a wide range of applications in scientific and industrial areas. Here we describe a reliable, user-friendly and fast software package based on LabVIEW that may allow to perform all procedures after the acquisition of raw projection images in order to obtain the inner structure of the investigated sample. A suitable image alignment process to address misalignment problems among image series due to mechanical manufacturing errors...

  4. Octree-based Robust Watermarking for 3D Model

    Su Cai

    2011-02-01

    Three robust blind watermarking methods for 3D models based on the octree are proposed in this paper: OTC-W, OTP-W and Zero-W. Principal Component Analysis and octree partitioning are used on 3D meshes. A scrambled binary image for OTC-W and a scrambled RGB image for OTP-W are separately embedded adaptively into the single child nodes at the bottom level of the octree structure. The watermark can be extracted without the original image and 3D model. These two methods have high embedding capacity for 3D meshes. Meanwhile, they are robust against geometric transformations (translation, rotation, uniform scaling) and vertex reordering attacks. For Zero-W, higher nodes of the octree are used to construct a 'zero-watermark', which can resist simplification, noise and remeshing attacks. All three methods are suitable for 3D point cloud data and arbitrary 3D meshes.

  5. Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT

    Solomon, Justin; Ba, Alexandre; Diao, Andrew; Lo, Joseph; Bier, Elianna; Bochud, François; Gehm, Michael; Samei, Ehsan

    2016-03-01

    In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters that were most reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylinder phantom (165 mm in diameter and 30 mm height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, 4 repeats per signal). The phantom was voxelized and input into a commercial multi-material 3D printer (Object Connex 350), with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm. Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2
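
    For readers unfamiliar with the observer model named above, the sketch below shows a generic channelized Hotelling observer detectability computation on already-channelized samples; the channel responses, sample counts and the d' definition are illustrative assumptions, not the study's exact observer configuration.

```python
# Minimal sketch of a channelized Hotelling observer (CHO) detectability
# computation of the kind used for task-based image quality assessment.
# Channel design and training data below are generic illustrations.
import numpy as np

def hotelling_template(chan_signal, chan_background):
    """CHO template from channelized signal-present / signal-absent samples
    of shape (n_samples, n_channels)."""
    mean_diff = chan_signal.mean(axis=0) - chan_background.mean(axis=0)
    cov = 0.5 * (np.cov(chan_signal, rowvar=False) + np.cov(chan_background, rowvar=False))
    return np.linalg.solve(cov, mean_diff)

def cho_detectability(chan_signal, chan_background):
    """Detectability index d' of the channelized Hotelling observer."""
    w = hotelling_template(chan_signal, chan_background)
    t_sig = chan_signal @ w
    t_bkg = chan_background @ w
    return (t_sig.mean() - t_bkg.mean()) / np.sqrt(0.5 * (t_sig.var() + t_bkg.var()))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, c = 200, 10                                   # samples per class, channels
    background = rng.normal(size=(n, c))
    signal = background + rng.normal(0.3, 0.05, size=c)  # small "lesion" response
    print("d' =", cho_detectability(signal, background))
```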

  6. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with Pix4Dmapper and Agisoft PhotoScan software Analyses have shown that when certain parameters established in an experiment are kept, the process of developing the stock-taking documentation for a historical building moves from the standards of analogue to digital technology with considerably reduced cost.

  7. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Shuo-Tsung Chen

    2015-01-01

    Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al. (2007). Results indicate that the proposed method is more efficient. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
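
    The first stage described above, 3D region growing, can be illustrated with a minimal sketch that grows a 6-connected region from a seed voxel under an intensity tolerance; the tolerance and the toy volume are assumptions for illustration only.

```python
# Minimal sketch of 3D region growing: starting from a seed voxel, add
# 6-connected neighbours whose intensity stays within a tolerance of the seed.
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, tolerance=50.0):
    seed_value = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_value) <= tolerance:
                    mask[n] = True
                    queue.append(n)
    return mask

if __name__ == "__main__":
    vol = np.zeros((64, 64, 64), dtype=np.float32)
    vol[20:40, 30:34, 30:34] = 300.0     # a bright "vessel" segment
    grown = region_grow_3d(vol, seed=(30, 32, 32), tolerance=50.0)
    print("segmented voxels:", int(grown.sum()))
```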

  8. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation of each patch in the reconstructed image on this dictionary basis, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. Image quality under different resolutions in the z direction, with or without statistical weighting, is also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and to be less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the robustness of the algorithm. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential
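
    The sparse-coding step described above can be illustrated with a minimal orthogonal matching pursuit over flattened 3x3x3 patches; the random dictionary, the sparsity level and this plain least-squares OMP (rather than the Cholesky-based GPU variant used in the work) are illustrative assumptions.

```python
# Minimal sketch: approximate each 3x3x3 patch (27 values) by a sparse
# combination of dictionary atoms using orthogonal matching pursuit.
import numpy as np

def omp(dictionary, patch, n_nonzero=4):
    """OMP sparse coding: dictionary is (27, n_atoms), patch is (27,)."""
    residual = patch.copy()
    support = []
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(dictionary.T @ residual)))  # best-correlated atom
        if idx not in support:
            support.append(idx)
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, patch, rcond=None)       # refit on support
        residual = patch - sub @ sol
    coeffs[support] = sol
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    D = rng.normal(size=(27, 256))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    patch = rng.normal(size=27)               # one flattened 3x3x3 patch
    alpha = omp(D, patch)
    print("non-zeros:", int(np.count_nonzero(alpha)),
          "residual:", float(np.linalg.norm(patch - D @ alpha)))
```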

  9. Review: Polymeric-Based 3D Printing for Tissue Engineering

    Wu, Geng-Hsi; Hsu, Shan-hui

    2015-01-01

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue ...

  10. Automated curved planar reformation of 3D spine images

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
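
    As a rough illustration of resampling a volume along a curve, the sketch below samples voxels around a polynomial centreline with scipy.ndimage.map_coordinates; the polynomial coefficients and the toy volume are assumptions and do not reproduce the paper's optimized spine-based coordinate system.

```python
# Minimal sketch of curved planar reformation: given a polynomial model of the
# spine curve through the volume, resample a slab of voxels centred on the
# curve to produce a "straightened" image.
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, poly_x, poly_y, half_width=20):
    """volume indexed as (z, y, x); poly_x(z), poly_y(z) give the curve."""
    depth = volume.shape[0]
    z = np.arange(depth)
    cx, cy = poly_x(z), poly_y(z)
    offsets = np.arange(-half_width, half_width + 1)
    # For every z, sample a line of voxels centred on the curve along x.
    zz = np.repeat(z, offsets.size)
    yy = np.repeat(cy, offsets.size)
    xx = (cx[:, None] + offsets[None, :]).ravel()
    samples = map_coordinates(volume, np.vstack([zz, yy, xx]), order=1)
    return samples.reshape(depth, offsets.size)

if __name__ == "__main__":
    vol = np.random.default_rng(3).normal(size=(60, 128, 128)).astype(np.float32)
    poly_x = np.poly1d([0.01, -0.5, 64.0])   # gently curved centreline x(z)
    poly_y = np.poly1d([64.0])               # constant y(z)
    cpr = curved_planar_reformation(vol, poly_x, poly_y)
    print("reformatted image shape:", cpr.shape)
```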

  11. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. Imaging- and Flow Cytometry-based Analysis of Cell Position and the Cell Cycle in 3D Melanoma Spheroids.

    Beaumont, Kimberley A; Anfosso, Andrea; Ahmed, Farzana; Weninger, Wolfgang; Haass, Nikolas K

    2015-01-01

    Three-dimensional (3D) tumor spheroids are utilized in cancer research as a more accurate model of the in vivo tumor microenvironment, compared to traditional two-dimensional (2D) cell culture. The spheroid model is able to mimic the effects of cell-cell interaction, hypoxia and nutrient deprivation, and drug penetration. One characteristic of this model is the development of a necrotic core, surrounded by a ring of G1 arrested cells, with proliferating cells on the outer layers of the spheroid. Of interest in the cancer field is how different regions of the spheroid respond to drug therapies as well as genetic or environmental manipulation. We describe here the use of the fluorescence ubiquitination cell cycle indicator (FUCCI) system along with cytometry and image analysis using commercial software to characterize the cell cycle status of cells with respect to their position inside melanoma spheroids. These methods may be used to track changes in cell cycle status, gene/protein expression or cell viability in different sub-regions of tumor spheroids over time and under different conditions. PMID:26779761

  13. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  14. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the reconstructed 3D positions using the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
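
    The triangulation step described above can be illustrated by intersecting, in the least-squares sense, the two rays defined by a pair of calibrated views; the source positions and noise level in the sketch below are made-up toy values, not an actual C-arm calibration.

```python
# Minimal sketch of triangulating a 3D point from two views: each detected 2D
# point defines a ray from its source position; the 3D estimate is the
# midpoint of the shortest segment connecting the two rays.
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Rays: x = p + t*d.  Returns the midpoint of the closest approach."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

if __name__ == "__main__":
    truth = np.array([10.0, -5.0, 30.0])
    src_a = np.array([0.0, 0.0, -600.0])        # AP-like source position (mm)
    src_b = np.array([-600.0, 0.0, 0.0])        # lateral-like source position
    # Rays through the true point, perturbed to mimic 2D localization noise.
    ray_a = (truth - src_a) + np.random.normal(0, 0.5, 3)
    ray_b = (truth - src_b) + np.random.normal(0, 0.5, 3)
    est = closest_point_between_rays(src_a, ray_a, src_b, ray_b)
    print("3D error (mm):", np.linalg.norm(est - truth))
```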

  15. Perception of detail in 3D images

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  16. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction.

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-03-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density is low in each frame, emitters can be located with nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three-dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
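
    The final 3D weighted-centroid step mentioned above is straightforward to illustrate; in the sketch below the cluster coordinates and recovered intensities are illustrative assumptions.

```python
# Minimal sketch of the 3D weighted-centroid step: after sparse recovery and
# debiasing, the emitter position is the intensity-weighted centroid of the
# non-zero voxels of a candidate cluster.
import numpy as np

def weighted_centroid_3d(coords, weights):
    """coords: (n, 3) voxel centres; weights: (n,) recovered intensities."""
    weights = np.asarray(weights, dtype=float)
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

if __name__ == "__main__":
    # A small cluster of recovered voxels around a true emitter at (5, 5, 5).
    coords = np.array([[5, 5, 5], [6, 5, 5], [5, 6, 5], [5, 5, 6]], dtype=float)
    weights = np.array([10.0, 3.0, 2.0, 1.0])
    print("estimated emitter position:", weighted_centroid_3d(coords, weights))
```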

  17. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    Indoor Gothic apses provide a complex environment for virtualization using imaging techniques due to their light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs under conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  18. A neural network-based 2D/3D image registration quality evaluator for pediatric patient setup in external beam radiotherapy.

    Wu, Jian; Su, Zhong; Li, Zuofeng

    2016-01-01

    Our purpose was to develop a neural network-based registration quality evaluator (RQE) that can improve the 2D/3D image registration robustness for pediatric patient setup in external beam radiotherapy. Orthogonal daily setup X-ray images of six pediatric patients with brain tumors receiving proton therapy treatments were retrospectively registered with their treatment planning computed tomography (CT) images. A neural network-based pattern classifier was used to determine whether a registration solution was successful based on geometric features of the similarity measure values near the point-of-solution. Supervised training and test datasets were generated by rigidly registering a pair of orthogonal daily setup X-ray images to the treatment planning CT. The best solution for each registration task was selected from 50 optimizing attempts that differed only by the randomly generated initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametrical space was compared to a user-defined error tolerance to determine whether that solution was acceptable. A supervised training was then used to train the RQE. Performance of the RQE was evaluated using a test dataset consisting of registration results that were not used in training. The RQE was integrated with our in-house 2D/3D registration system and its performance was evaluated using the same patient dataset. With an optimized sampling step size (i.e., 5 mm) in the feature space, the RQE has a sensitivity and a specificity in the ranges of 0.865-0.964 and 0.797-0.990, respectively, when used to detect registration errors with mean voxel displacement (MVD) greater than 1 mm. The trial-to-acceptance ratio of the integrated 2D/3D registration system, for all patients, is equal to 1.48. The final acceptance ratio is 92.4%. The proposed RQE can potentially be used in a 2D/3D rigid image registration system to improve the overall robustness by rejecting

  19. An automated 3D reconstruction method of UAV images

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially suitable for rapid response and precise modelling in disaster emergencies.
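
    The image-topology idea described above, limiting feature matching to promising image pairs, can be illustrated by selecting pairs whose flight-control (camera) positions are close; the use of a k-d tree, the positions and the distance threshold in the sketch below are assumptions for illustration, not the paper's actual strategy.

```python
# Minimal sketch: keep only image pairs whose camera centres are within a
# given distance, so feature matching is attempted on far fewer combinations.
import numpy as np
from scipy.spatial import cKDTree

def candidate_pairs(camera_positions, max_distance=30.0):
    """camera_positions: (n, 3) array of per-image GPS/flight positions."""
    tree = cKDTree(camera_positions)
    pairs = tree.query_pairs(r=max_distance)
    return sorted(pairs)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    positions = rng.uniform(0, 200, size=(50, 3))     # 50 images over a 200 m block
    pairs = candidate_pairs(positions, max_distance=30.0)
    print("pairs to match:", len(pairs), "of", 50 * 49 // 2)
```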

  20. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  1. A new image reconstruction method for 3-D PET based upon pairs of near-missing lines of response

    We formerly introduced a new image reconstruction method for three-dimensional positron emission tomography, which is based upon pairs of near-missing lines of response. This method uses an elementary geometric property of lines of response, namely that two lines of response which originate from radioactive isotopes located within a sufficiently small voxel, will lie within a few millimeters of each other. The effectiveness of this method was verified by performing a simulation using GATE software and a digital Hoffman phantom
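
    The geometric property described above reduces to the distance between two (generally skew) lines; the sketch below computes it with the standard formula, with an illustrative threshold for deciding that a pair of lines of response is "near-missing".

```python
# Minimal sketch: minimum distance between two lines of response, each given
# by a point and a direction.  Pairs closer than a few millimetres can be
# treated as "near-missing" and vote for a common voxel.
import numpy as np

def lor_distance(p1, d1, p2, d2):
    """Minimum distance between two lines x = p + t*d."""
    n = np.cross(d1, d2)
    norm_n = np.linalg.norm(n)
    if norm_n < 1e-12:                       # parallel LORs
        return np.linalg.norm(np.cross(p2 - p1, d1)) / np.linalg.norm(d1)
    return abs(np.dot(p2 - p1, n)) / norm_n

if __name__ == "__main__":
    p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    p2, d2 = np.array([0.0, 1.5, 0.0]), np.array([0.0, 0.0, 1.0])  # 1.5 mm apart
    near_missing = lor_distance(p1, d1, p2, d2) < 3.0               # 3 mm threshold
    print("near-missing pair:", bool(near_missing))
```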

  2. 3D wavefront image formation for NIITEK GPR

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  3. Thermomechanical behaviour of two heterogeneous tungsten materials via 2D and 3D image-based FEM

    An advanced numerical procedure based on imaging of the material microstructure (Image-Based Finite Element Method or Image-Based FEM) was extended and applied to model the thermomechanical behaviour of novel materials for fusion applications. Two tungsten based heterogeneous materials with different random morphologies have been chosen as challenging case studies: (1) a two-phase mixed ductile-brittle W/CuCr1Zr composite and (2) vacuum plasma-sprayed tungsten (VPS-W 75 vol.%), a porous coating system with complex dual-scale microstructure. Both materials are designed for the future fusion reactor DEMO: W/CuCr1Zr as main constituent of a layered functionally graded joint between plasma-facing armor and heat sink whereas VPS-W for covering the first wall of the reactor vessel in direct contact with the plasma. The primary focus of this work was to investigate the mesoscopic material behaviour and the linkage to the macroscopic response in modeling failure and heat-transfer. Particular care was taken in validating and integrating simulation findings with experimental inputs. The solution of the local thermomechanical behaviour directly on the real material microstructure enabled meaningful insights into the complex failure mechanism of both materials. For W/CuCr1Zr full macroscopic stress-strain curves including the softening and failure part could be simulated and compared with experimental ones at different temperatures, finding an overall good agreement. The comparison of simulated and experimental macroscopic behaviour of plastic deformation and rupture also showed the possibility to indirectly estimate micro- and mesoscale material parameters. Both heat conduction and elastic behaviour of VPS-W have been extensively investigated. New capabilities of the Image-Based FEM could be shown: decomposition of the heat transfer reduction as due to the individual morphological phases and back-fitting of the reduced stiffness at interlamellar boundaries. The

  4. Practical pseudo-3D registration for large tomographic images

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
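
    One 2D step of the pseudo-3D scheme, registering a single orthogonal view by minimizing the SSD over two shifts and one rotation with Powell's method, might look like the sketch below; the synthetic images and the use of scipy for the transform and optimizer are assumptions about tooling, not the authors' implementation.

```python
# Minimal sketch of one 2D step of the pseudo-3D registration: optimize
# (rotation, shift_x, shift_y) of one orthogonal view by minimizing the
# sum of squared differences with Powell's conjugate direction method.
import numpy as np
from scipy import ndimage, optimize

def transform_2d(image, params):
    """Apply a rotation (degrees) about the image centre followed by shifts."""
    angle, tx, ty = params
    rotated = ndimage.rotate(image, angle, reshape=False, order=1)
    return ndimage.shift(rotated, (ty, tx), order=1)

def ssd(params, fixed, moving):
    diff = fixed - transform_2d(moving, params)
    return float(np.sum(diff * diff))

def register_view(fixed, moving):
    result = optimize.minimize(ssd, x0=np.zeros(3), args=(fixed, moving),
                               method="Powell")
    return result.x   # (rotation, shift_x, shift_y) aligning moving to fixed

if __name__ == "__main__":
    fixed = np.zeros((64, 64)); fixed[20:44, 24:40] = 1.0
    moving = ndimage.shift(ndimage.rotate(fixed, -4, reshape=False, order=1),
                           (2.0, -3.0), order=1)
    print("recovered (angle, tx, ty):", register_view(fixed, moving))
```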

  5. Gothic Churches in Paris St Gervais et St Protais: Image Matching 3D Reconstruction to Understand the Vaults System Geometry

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view, this is our workflow: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the theoretical 3D model and the 3D model based on image matching.

  6. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
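
    The disparity-combination step described above is easy to illustrate: take the per-pixel median of the candidate disparity fields and forward-warp the query image to synthesize the right view; the toy disparities and the hole-free warping below are illustrative simplifications (the paper handles occlusions and newly exposed areas explicitly).

```python
# Minimal sketch: combine disparity fields of retrieved stereopairs with a
# per-pixel median, then synthesize the right view by shifting each pixel of
# the 2D query horizontally (simple forward warp, holes left as zero).
import numpy as np

def median_disparity(disparity_stack):
    """disparity_stack: (k, h, w) disparities from k retrieved stereopairs."""
    return np.median(disparity_stack, axis=0)

def synthesize_right_view(left, disparity):
    h, w = left.shape
    right = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols - np.round(disparity[y]).astype(int), 0, w - 1)
        right[y, target] = left[y, cols]
    return right

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    left = rng.random((120, 160))
    stack = np.stack([np.full((120, 160), d) for d in (3.0, 4.0, 5.0)])
    disp = median_disparity(stack)
    right = synthesize_right_view(left, disp)
    print("median disparity:", disp[0, 0], "right view shape:", right.shape)
```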

  7. A Legendre orthogonal moment based 3D edge operator

    ZHANG Hui; SHU Huazhong; LUO Limin; J. L. Dillenseger

    2005-01-01

    This paper presents a new 3D edge operator based on Legendre orthogonal moments. The operator can be used to extract the edges of a 3D object with any window size, with more accurate surface orientation and more precise surface location, and it has a clear geometric meaning. The calculation process of the moment-based method is considered; the computation can be greatly sped up by calculating the masks in advance. We integrate this operator into our rendering of medical image data based on a ray-casting algorithm. Experimental results show that it is an effective 3D edge operator that is more accurate in position and orientation.

  8. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  9. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  10. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the intermediate frequency (IF) at each pixel yields the range information. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
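
    The range-from-IF idea described above follows the usual linear-chirp relation R = c·f_IF·T/(2B); the chirp duration, bandwidth and per-pixel beat frequencies in the sketch below are illustrative assumptions, not the system's actual parameters.

```python
# Minimal sketch: map the intermediate (beat) frequency measured at each GDD
# pixel to range for a linear chirp, R = c * f_if * T / (2 * B).
import numpy as np

C = 3.0e8  # speed of light, m/s

def range_from_beat(f_if_hz, chirp_duration_s, bandwidth_hz):
    """Target range for a linear chirp, one value per pixel."""
    return C * np.asarray(f_if_hz) * chirp_duration_s / (2.0 * bandwidth_hz)

if __name__ == "__main__":
    chirp_t = 1e-3           # 1 ms chirp (assumed)
    bandwidth = 10e9         # 10 GHz sweep (assumed)
    beat = np.array([[0.5e6, 0.7e6], [0.6e6, 0.8e6]])  # IF frequency per pixel (Hz)
    print(range_from_beat(beat, chirp_t, bandwidth))    # range map in metres
```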