WorldWideScience

Sample records for 3d optical imaging

  1. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
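
    As a rough illustration of the cumulant computation at the heart of SOFI, the Python sketch below evaluates second-order auto- and cross-cumulants (temporal variance and nearest-neighbor covariance) of a (T, Y, X) image stack. It is a minimal, assumed example with illustrative function names and a toy blinking emitter; the published method uses higher orders, 3D cross-cumulants across the simultaneously acquired planes, and further processing not shown here.

    import numpy as np

    def sofi2(stack):
        """Second-order SOFI auto-cumulant of a (T, Y, X) stack: at zero time lag
        this is simply the temporal variance of each pixel, which squares the PSF
        and thus sharpens it by roughly sqrt(2)."""
        fluct = stack.astype(np.float64) - stack.mean(axis=0)   # zero-mean fluctuations
        return (fluct ** 2).mean(axis=0)

    def sofi2_cross(stack):
        """Second-order cross-cumulant of horizontally adjacent pixels.  Shot noise
        in two different pixels does not co-fluctuate, so it drops out; the same idea
        extends to cross-cumulants between planes in the multiplane 3D scheme."""
        fluct = stack.astype(np.float64) - stack.mean(axis=0)
        return (fluct[:, :, :-1] * fluct[:, :, 1:]).mean(axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        psf = np.exp(-((np.arange(33) - 16) ** 2) / 8.0)
        emitter = np.outer(psf, psf)                      # toy PSF of a single emitter
        on = rng.random(500) < 0.3                        # stochastic blinking trace
        frames = on[:, None, None] * emitter + rng.normal(0, 0.05, (500, 33, 33))
        img = sofi2(frames)
        print(np.unravel_index(img.argmax(), img.shape))  # peaks at the emitter, (16, 16)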

  2. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  3. Diffractive optical element for creating visual 3D images.

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D/3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to verify visually, is well protected against counterfeiting, and is designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  4. Large Scale 3D Image Reconstruction in Optical Interferometry

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...
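
    The phase closures mentioned above are the arguments of bispectrum triple products, in which the per-telescope atmospheric phase errors cancel. A short Python sketch of that cancellation is given below; it illustrates the observable such reconstruction algorithms consume, not the PAINTER algorithm itself, and all variable names are illustrative.

    import numpy as np

    def closure_phase(v12, v23, v31):
        """Closure phase (radians) of a telescope triangle: arg(V12 * V23 * V31).
        Per-telescope atmospheric phases phi_i enter each baseline as
        exp(1j*(phi_i - phi_j)) and cancel in the triple product."""
        return np.angle(v12 * v23 * v31)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        v12, v23, v31 = np.exp(1j * rng.uniform(-np.pi, np.pi, 3))  # true visibilities
        p1, p2, p3 = rng.uniform(-np.pi, np.pi, 3)                  # atmospheric pistons
        c12 = v12 * np.exp(1j * (p1 - p2))                          # corrupted baselines
        c23 = v23 * np.exp(1j * (p2 - p3))
        c31 = v31 * np.exp(1j * (p3 - p1))
        # the closure phase is unchanged by the atmospheric corruption
        assert np.isclose(closure_phase(c12, c23, c31), closure_phase(v12, v23, v31))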

  5. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components, including a diffuser, band pass filter, registration mount and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
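
    The benchmarking criterion quoted above (3%/3 mm gamma with a 5% dose threshold) can be illustrated with a small Python sketch. This is a deliberately simplified, brute-force global gamma pass-rate computation on toy dose grids, not the DLOS analysis pipeline; function and parameter names are assumptions made only for illustration.

    import numpy as np

    def gamma_pass_rate(ref, ev, voxel_mm, dd_frac=0.03, dta_mm=3.0, dose_cut=0.05):
        """Brute-force global gamma pass rate between a reference and an evaluated
        3D dose grid.  For every reference voxel above the dose threshold, the
        minimum combined dose-difference / distance-to-agreement metric is searched
        in a local neighborhood; the voxel passes if gamma <= 1."""
        dd = dd_frac * ref.max()
        reach = int(np.ceil(dta_mm / min(voxel_mm)))      # search half-width in voxels
        offs = [(i, j, k) for i in range(-reach, reach + 1)
                for j in range(-reach, reach + 1)
                for k in range(-reach, reach + 1)]
        nz, ny, nx = ref.shape
        passed = total = 0
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    if ref[z, y, x] < dose_cut * ref.max():
                        continue                           # skip low-dose voxels
                    total += 1
                    best = np.inf
                    for dz, dy, dx in offs:
                        zz, yy, xx = z + dz, y + dy, x + dx
                        if not (0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx):
                            continue
                        dist2 = ((dz * voxel_mm[0]) ** 2 + (dy * voxel_mm[1]) ** 2
                                 + (dx * voxel_mm[2]) ** 2) / dta_mm ** 2
                        dose2 = (ev[zz, yy, xx] - ref[z, y, x]) ** 2 / dd ** 2
                        best = min(best, dist2 + dose2)
                    passed += best <= 1.0
        return passed / max(total, 1)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ref = rng.uniform(0, 1, (12, 12, 12))                      # toy dose grids
        ev = ref + rng.normal(0, 0.01, ref.shape)
        print(gamma_pass_rate(ref, ev, voxel_mm=(1.0, 1.0, 1.0)))  # close to 1.0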

  6. Optical 3D watermark based digital image watermarking for telemedicine

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    Region of interest (ROI) of a medical image is an area including important diagnostic information and must be stored without any distortion. This work applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D watermark based medical image watermarking scheme. In this paper, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is an inverse process of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to get many channels for embedding. Thus our method overcomes the weak point of traditional watermarking methods, which have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
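
    The reconstruction step the abstract relies on, computational integral imaging reconstruction (CIIR), is essentially an overlap-and-average back-projection of the elemental images. The following Python sketch is a toy, assumed version of that idea (uniform integer shift per lenslet, no magnification model, no CAT-based embedding); it is not the authors' implementation, and its names and parameters are illustrative only.

    import numpy as np

    def ciir_plane(eia, shift_px):
        """Toy computational integral imaging reconstruction of one depth plane.
        `eia` holds the elemental image array as (rows, cols, h, w); each elemental
        image is shifted in proportion to its lenslet index (the per-lens shift
        depends on the chosen depth) and the shifted copies are averaged."""
        rows, cols, h, w = eia.shape
        out = np.zeros((h + shift_px * (rows - 1), w + shift_px * (cols - 1)))
        cnt = np.zeros_like(out)
        for r in range(rows):
            for c in range(cols):
                y, x = r * shift_px, c * shift_px
                out[y:y + h, x:x + w] += eia[r, c]
                cnt[y:y + h, x:x + w] += 1.0
        return out / np.maximum(cnt, 1.0)                 # normalize by overlap count

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        eia = rng.uniform(0, 1, (5, 5, 32, 32))           # toy 5x5 elemental image array
        print(ciir_plane(eia, shift_px=4).shape)          # (32 + 4*4, 32 + 4*4)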

  7. Joint Applied Optics and Chinese Optics Letters Feature Introduction: Digital Holography and 3D Imaging

    Ting-Chung Poon; Changhe Zhou; Toyohiko Yatagai; Byoungho Lee; Hongchen Zhai

    2011-01-01

    This feature issue is the fifth installment on digital holography since its inception four years ago. The last four issues have been published after the conclusion of each Topical Meeting "Digital Holography and 3D Imaging (DH)". However, this feature issue includes a new key feature: a Joint Applied Optics and Chinese Optics Letters Feature Issue. The DH Topical Meeting is the world's premier forum for disseminating the science and technology geared towards digital holography and 3D information processing. Since the meeting's inception in 2007, it has steadily and healthily grown to 130 presentations this year, held in Tokyo, Japan, in May 2011.

  8. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  9. Lensfree Optical Tomography for High-Throughput 3D Imaging on a Chip

    ISIKMAN, SERHAN OMER

    2012-01-01

    Light microscopes provide us with the key to observe objects that are orders of magnitude smaller than what the unaided eye can see. Therefore, microscopy has been the cornerstone of science and medicine for centuries. Recently, optical microscopy has seen a growing interest in developing three-dimensional (3D) imaging techniques that enable sectional imaging of biological specimen. These imaging techniques, however, are generally quite complex, bulky and expensive in addition to having a lim...

  10. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
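
    The multivariate curve resolution (MCR) analysis mentioned above factors the hyperspectral data matrix into per-pixel concentrations and pure emission spectra. A bare-bones alternating-least-squares sketch in Python, with non-negativity enforced by simple clipping, is shown below as an assumed illustration; the authors' extended MCR methods are considerably more sophisticated, and all names here are illustrative.

    import numpy as np

    def mcr_als(D, n_components, n_iter=200, seed=0):
        """Bare-bones multivariate curve resolution by alternating least squares.
        D (pixels x wavelengths) is factored as D ~ C @ S.T, where C holds per-pixel
        concentrations of each emitting species and S their pure spectra.  Here
        non-negativity is enforced by clipping, the crudest possible constraint."""
        rng = np.random.default_rng(seed)
        S = rng.uniform(0, 1, (D.shape[1], n_components))
        for _ in range(n_iter):
            C = np.clip(D @ np.linalg.pinv(S.T), 0, None)    # update concentrations
            S = np.clip(D.T @ np.linalg.pinv(C.T), 0, None)  # update spectra
        return C, S

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        true_S = rng.uniform(0, 1, (64, 3))                  # 3 overlapping spectra
        true_C = rng.uniform(0, 1, (1000, 3))                # 1000 pixels
        D = true_C @ true_S.T + rng.normal(0, 0.01, (1000, 64))
        C, S = mcr_als(D, 3)
        print(np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))  # small relative residual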

  11. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    Guerrero, Thomas [Division of Radiation Oncology, University of Texas M D Anderson Cancer Center, Houston, TX 77030 (United States); Zhang, Geoffrey [Division of Radiation Oncology, University of Texas M D Anderson Cancer Center, Houston, TX 77030 (United States); Huang Tzungchi [Division of Radiation Oncology, University of Texas M D Anderson Cancer Center, Houston, TX 77030 (United States); Lin Kaping [Department of Electrical Engineering, Chung-Yuan University, Taipei, Taiwan (China)

    2004-09-07

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm, which was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with greatest motion occurring at the gastro-oesophageal junction.
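
    For readers unfamiliar with the optical flow formulation, the sketch below is a much simpler stand-in for the modified 3D OFM used in the paper: a local least-squares (Lucas-Kanade style) solution of the 3D brightness-constancy equation per voxel. The function name, window size and Gaussian-blob test are illustrative assumptions, not the authors' code.

    import numpy as np

    def optical_flow_3d(vol0, vol1, win=2, eps=1e-6):
        """Local least-squares 3D optical flow.  For each voxel the brightness
        constancy constraint Ix*u + Iy*v + Iz*w + It = 0 is solved over a
        (2*win+1)^3 neighborhood, giving one displacement vector per voxel."""
        Iz, Iy, Ix = np.gradient(vol0.astype(np.float64))
        It = vol1.astype(np.float64) - vol0.astype(np.float64)
        flow = np.zeros(vol0.shape + (3,))
        nz, ny, nx = vol0.shape
        for z in range(win, nz - win):
            for y in range(win, ny - win):
                for x in range(win, nx - win):
                    sl = (slice(z - win, z + win + 1),
                          slice(y - win, y + win + 1),
                          slice(x - win, x + win + 1))
                    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel(), Iz[sl].ravel()], axis=1)
                    b = -It[sl].ravel()
                    flow[z, y, x] = np.linalg.solve(A.T @ A + eps * np.eye(3), A.T @ b)
        return flow                      # (..., 3): x, y, z displacement in voxels

    if __name__ == "__main__":
        zz, yy, xx = np.meshgrid(np.arange(24), np.arange(24), np.arange(24), indexing="ij")
        vol0 = np.exp(-((xx - 11.0) ** 2 + (yy - 12.0) ** 2 + (zz - 12.0) ** 2) / 18.0)
        vol1 = np.exp(-((xx - 12.0) ** 2 + (yy - 12.0) ** 2 + (zz - 12.0) ** 2) / 18.0)
        print(optical_flow_3d(vol0, vol1)[12, 12, 11])   # roughly a one-voxel shift along x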

  12. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm, which was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.

  13. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm, which was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with greatest motion occurring at the gastro-oesophageal junction

  14. Quantification of smoothing requirement for 3D optic flow calculation of volumetric images

    Bab-Hadiashar, Alireza; Tennakoon, Ruwan B.; de Bruijne, Marleen

    2013-01-01

    Complexities of dynamic volumetric imaging challenge the available computer vision techniques on a number of different fronts. This paper examines the relationship between the estimation accuracy and required amount of smoothness for a general solution from a robust statistics perspective. We show that a (surprisingly) small amount of local smoothing is required to satisfy both the necessary and sufficient conditions for accurate optic flow estimation. This notion is called 'just enough' smoothing, and its proper implementation has a profound effect on the preservation of local information in processing 3D...

  15. Large area 3-D optical coherence tomography imaging of lumpectomy specimens for radiation treatment planning

    Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.

    2016-02-01

    Our long term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with HE, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports initial qualitative comparison between large area en face OCT images and HE stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.

  16. A 3D approach to reconstruct continuous optical images using lidar and MODIS

    Huang, HuaGuo; Lian, Jun

    2015-01-01

    Background: Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs. Methods: To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations by the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, Da Xing’An Ling Mountain in Inner Mongolia, China. The canopy height model (CHM) from lidar data was used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter at breast height (DBH), lowest branch height and leaf area index (LAI). Series of Landsat images were used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated with available satellite images. Results: Evaluations on spatial texture, spectral values and directional reflectance were conducted to show comparable results. Conclusions: The study provides a proof-of-concept approach to link lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolutions of image reconstruction in forest dominated areas.

  17. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Paoli Alessandro

    2011-02-01

    Background: A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure relies on the lack of accuracy in transferring CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods: In this work, a novel methodology is proposed for monitoring loss of accuracy in transferring CT dental information into the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results: A clinical case, relating to a fully edentulous jaw patient, has been used as a test case to assess the accuracy of the various steps concurring in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors which have occurred, step by step, in manufacturing the physical templates. Conclusions: The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has been demonstrated to be a valid support to control the precision of the various physical models adopted and to point out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology.

  18. Exact surface registration of retinal surfaces from 3-D optical coherence tomography images.

    Lee, Sieun; Lebed, Evgeniy; Sarunic, Marinko V; Beg, Mirza Faisal

    2015-02-01

    Nonrigid registration of optical coherence tomography (OCT) images is an important problem in studying eye diseases, evaluating the effect of pharmaceuticals in treating vision loss, and performing group-wise cross-sectional analysis. High dimensional nonrigid registration algorithms required for cross-sectional and longitudinal analysis are still being developed for accurate registration of OCT image volumes, with the speckle noise in images presenting a challenge for registration. Development of algorithms for segmentation of OCT images to generate surface models of retinal layers has advanced considerably and several algorithms are now available that can segment retinal OCT images into constituent retinal surfaces. Important morphometric measurements can be extracted if an accurate surface registration algorithm for registering retinal surfaces onto corresponding template surfaces were available. In this paper, we present a novel method to perform multiple and simultaneous retinal surface registration, targeted to registering surfaces extracted from ocular volumetric OCT images. This enables a point-to-point correspondence (homology) between template and subject surfaces, allowing for a direct, vertex-wise comparison of morphometric measurements across subject groups. We demonstrate that this approach can be used to localize and analyze regional changes in choroidal and nerve fiber layer thickness among healthy and glaucomatous subjects, allowing for cross-sectional population-wise analysis. We also demonstrate the method's ability to track longitudinal changes in optic nerve head morphometry, allowing for within-individual tracking of morphometric changes. This method can also, in the future, be used as a precursor to 3-D OCT image registration to better initialize nonrigid image registration algorithms closer to the desired solution. PMID:25312906

  19. Analytical models of icosahedral shells for 3D optical imaging of viruses

    Jafarpour, Aliakbar

    2014-01-01

    A modulated icosahedral shell with an inclusion is a concise description of many viruses, including recently-discovered large double-stranded DNA ones. Many X-ray scattering patterns of such viruses show major polygonal fringes, which can be reproduced in image reconstruction with a homogeneous icosahedral shell. A key question regarding a low-resolution reconstruction is how to introduce further changes to the 3D profile in an efficient way with only a few parameters. Here, we derive and compile different analytical models of such an object with consideration of practical optical setups and typical structures of such viruses. The benefits of such models include 1) inherent filtering and suppressing different numerical errors of a discrete grid, 2) providing a concise and meaningful set of descriptors for feature extraction in high-throughput classification/sorting and higher-resolution cumulative reconstructions, 3) disentangling (physical) resolution from (numerical) discretization step and having a vector ...

  20. Analytic 3D Imaging of Mammalian Nucleus at Nanoscale Using Coherent X-Rays and Optical Fluorescence Microscopy

    Song, Changyong; Takagi, Masatoshi; Park, Jaehyun; Xu, Rui; Gallagher-Jones, Marcus; Imamoto, Naoko; Ishikawa, Tetsuya

    2014-01-01

    Despite the notable progress that has been made with nano-bio imaging probes, quantitative nanoscale imaging of multistructured specimens such as mammalian cells remains challenging due to their inherent structural complexity. Here, we successfully performed three-dimensional (3D) imaging of mammalian nuclei by combining coherent x-ray diffraction microscopy, explicitly visualizing nuclear substructures at several tens of nanometer resolution, and optical fluorescence microscopy, cross confir...

  1. Full-color holographic 3D imaging system using color optical scanning holography

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are transmitted to the reconstruction stage after conversion to off-axis RGB holograms. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  2. Heterodyne 3D ghost imaging

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three dimensional (3D) ghost imaging measures the range of a target based on the pulse flight time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
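
    The spatial-correlation part of ghost imaging can be summarized in a few lines: the image is recovered as the covariance between the structured illumination patterns and the single-pixel (bucket) signal. The Python sketch below illustrates only that classical step on simulated speckle-like patterns; the heterodyne temporal correlation that gives HGI its range resolution is not modeled, and all names are illustrative assumptions.

    import numpy as np

    def ghost_image(patterns, bucket):
        """Ghost image from structured illumination patterns (N, H, W) and a bucket
        detector signal (N,) via intensity correlation: G(x) = <I(x)B> - <I(x)><B>."""
        B = bucket - bucket.mean()
        P = patterns - patterns.mean(axis=0)
        return np.tensordot(B, P, axes=(0, 0)) / len(bucket)

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        obj = np.zeros((32, 32))
        obj[10:22, 14:18] = 1.0                           # simple bar target
        patterns = rng.uniform(0, 1, (4000, 32, 32))      # speckle-like patterns
        bucket = (patterns * obj).sum(axis=(1, 2))        # single-pixel measurement
        img = ghost_image(patterns, bucket)
        print(np.unravel_index(img.argmax(), img.shape))  # lands inside the bar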

  3. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-01

    Diffuse optical tomography (DOT) is a relatively low cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.

  4. Phase-retrieved optical projection tomography for 3D imaging through scattering layers

    Ancora, Daniele; Di Battista, Diego; Giasafaki, Georgia; Psycharakis, Stylianos; Liapis, Evangelos; Zacharopoulos, Athanasios; Zacharakis, Giannis

    2016-03-01

    Recently, great progress has been made in biological and biomedical imaging by combining non-invasive optical methods, novel adaptive light manipulation and computational techniques for intensity-based phase recovery and three dimensional image reconstruction. In particular, and in relation to the work presented here, Optical Projection Tomography (OPT) is a well-established technique for imaging mostly transparent absorbing biological models such as C. elegans and Danio rerio. On the contrary, scattering layers like the cocoon surrounding Drosophila during the pupal stage constitute a challenge for three dimensional imaging through such a complex structure. However, recent studies enabled image reconstruction through scattering curtains up to a few transport mean free paths via phase retrieval iterative algorithms, allowing objects hidden behind complex layers to be uncovered. By combining these two techniques we explore the possibility of performing a three dimensional image reconstruction of fluorescent objects embedded between scattering layers without compromising their structural integrity. Dynamical cross correlation registration was implemented for the registration process, due to the translational and flipping ambiguity of the phase retrieval problem, in order to provide a correctly aligned set of data for the back-projection reconstruction. We have thus managed to reconstruct a hidden complex object between static scattering curtains and compared it with the effective reconstruction to fully understand the process before the in-vivo biological implementation.
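
    As background for the phase-retrieval step described above, the sketch below shows the classic error-reduction loop (alternating Fourier-magnitude and support/positivity projections). It is a generic, assumed illustration rather than the specific iterative algorithm used by the authors, and it exhibits the same translation and flipping ambiguity the abstract mentions.

    import numpy as np

    def error_reduction(measured_mag, support, n_iter=500, seed=0):
        """Classic error-reduction phase retrieval: alternate between enforcing the
        measured Fourier magnitude and enforcing a real-space support mask plus
        non-negativity."""
        rng = np.random.default_rng(seed)
        obj = rng.uniform(0, 1, measured_mag.shape) * support
        for _ in range(n_iter):
            F = np.fft.fft2(obj)
            F = measured_mag * np.exp(1j * np.angle(F))   # keep phase, reset magnitude
            obj = np.clip(np.real(np.fft.ifft2(F)), 0, None) * support
        return obj

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        truth = np.zeros((64, 64))
        truth[24:40, 28:36] = rng.uniform(0.5, 1.0, (16, 8))
        support = np.zeros((64, 64))
        support[20:44, 24:40] = 1.0
        mag = np.abs(np.fft.fft2(truth))
        rec = error_reduction(mag, support)
        print(np.abs(np.abs(np.fft.fft2(rec)) - mag).mean())   # Fourier-magnitude misfit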

  5. Flattop beam illumination for 3D imaging ladar with simple optical devices in the wide distance range

    Tsuji, Hidenobu; Nakano, Takayuki; Matsumoto, Yoshihiro; Kameyama, Shumpei

    2016-04-01

    We have developed an illumination optical system for 3D imaging ladar (laser detection and ranging) which forms a flattop beam shape by transformation of a Gaussian beam over a wide distance range. The illumination is achieved by beam division and recombination using a prism and a negatively powered lens. The optimum condition for the transformation by the optical system is derived. It is confirmed that the flattop distribution can be formed over a wide range of propagation distances, from 1 to 1000 m. The experimental result with the prototype is in good agreement with the calculation result.

  6. 3D registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Wen, Di; Brandt, Eric; van Ditzhuijzen, Nienke S.; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Farmazilian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    High resolution, 100 frames/sec intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and 3D registration methods, to provide validation of IVOCT pullback volumes using microscopic, brightfield and fluorescent cryoimage volumes, with optional, exactly registered cryo-histology. The innovation was a method to match IVOCT pullback images, acquired in the catheter reference frame, to a true 3D cryo-image volume. Briefly, an 11-parameter, polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Local minima were possible, but when we started within reasonable ranges, every one of 24 digital phantom cases converged to a good solution with a registration error of only +1.34+/-2.65μm (signed distance). Registration was applied to 10 ex-vivo cadaver coronary arteries (LADs), resulting in 10 registered cryo and IVOCT volumes yielding a total of 421 registered 2D-image pairs. Image overlays demonstrated high continuity between vascular and plaque features. Bland-Altman analysis comparing cryo and IVOCT lumen area showed mean and standard deviation of differences as 0.01+/-0.43 mm2. DICE coefficients were 0.91+/-0.04. Finally, visual assessment on 20 representative cases with easily identifiable features suggested registration accuracy within one frame of IVOCT (+/-200μm), eliminating significant misinterpretations introduced by 1 mm errors in the literature. The method will provide 3D data for training of IVOCT plaque algorithms and can be used for validation of other intravascular imaging modalities.
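
    The two agreement metrics reported above are straightforward to compute. The Python sketch below shows a generic DICE overlap and the Bland-Altman mean and standard deviation of paired differences on toy masks and areas; the values, shapes and names are illustrative, not the study data.

    import numpy as np

    def dice(mask_a, mask_b):
        """DICE overlap coefficient between two binary lumen masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def bland_altman(x, y):
        """Mean and standard deviation of paired differences (Bland-Altman)."""
        d = np.asarray(x, float) - np.asarray(y, float)
        return d.mean(), d.std(ddof=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        a = np.zeros((128, 128), bool)
        a[30:90, 40:100] = True                        # toy "cryo" lumen mask
        b = np.roll(a, (2, -1), axis=(0, 1))           # toy "IVOCT" lumen mask
        print("DICE:", round(dice(a, b), 3))
        cryo_area = rng.uniform(2, 8, 20)              # toy lumen areas in mm^2
        ivoct_area = cryo_area + rng.normal(0.01, 0.4, 20)
        print("Bland-Altman mean/SD:", bland_altman(cryo_area, ivoct_area))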

  7. 3D Imager and Method for 3D imaging

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the re

  8. Optically clearing tissue as an initial step for 3D imaging of core biopsies to diagnose pancreatic cancer

    Das, Ronnie; Agrawal, Aishwarya; Upton, Melissa P.; Seibel, Eric J.

    2014-02-01

    The pancreas is a deeply seated organ requiring endoscopically, or radiologically guided biopsies for tissue diagnosis. Current approaches include either fine needle aspiration biopsy (FNA) for cytologic evaluation, or core needle biopsies (CBs), which comprise tissue cores (L = 1-2 cm, D = 0.4-2.0 mm) for examination by brightfield microscopy. Between procurement and visualization, biospecimens must be processed, sectioned and mounted on glass slides for 2D visualization. Optical information about the native tissue state can be lost with each procedural step and a pathologist cannot appreciate 3D organization from 2D observations of tissue sections 1-8 μm in thickness. Therefore, how might histological disease assessment improve if entire, intact CBs could be imaged in both brightfield and 3D? CBs are mechanically delicate; therefore, a simple device was made to cut intact, simulated CBs (L = 1-2 cm, D = 0.2-0.8 mm) from porcine pancreas. After CBs were laid flat in a chamber, z-stack images at 20x and 40x were acquired through the sample with and without the application of an optical clearing agent (FocusClear®). Intensity of transmitted light increased by 5-15x and islet structures unique to pancreas were clearly visualized 250-300 μm beneath the tissue surface. CBs were then placed in index matching square capillary tubes filled with FocusClear® and a standard optical clearing agent. Brightfield z-stack images were then acquired to present 3D visualization of the CB to the pathologist.

  9. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

    The shape of the optic nerve head (ONH) is reconstructed automatically using stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss for patients with glaucoma. Compared to natural scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors which are robust to noise are formulated using a combination of both pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT-based shape, and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.

  10. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet ... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse ... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  11. Miniaturized 3D microscope imaging system

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the light field raw data and program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm to precisely distinguish depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance the pixel usage efficiency and reduce the crosstalk between each microlens to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different color fluorescence particles separated by a cover glass in a 600 μm range, show its focal stacks, and determine their 3-D positions.

  12. Two-photon imaging of a magneto-fluorescent indicator for 3D optical magnetometry.

    Lee, Hohjai; Brinks, Daan; Cohen, Adam E

    2015-10-19

    We developed an optical method to visualize the three-dimensional distribution of magnetic field strength around magnetic microstructures. We show that the two-photon-excited fluorescence of a chained donor-bridge-acceptor compound, phenanthrene-(CH2)12-O-(CH2)2-N,N-dimethylaniline, is sensitive to ambient magnetic field strength. A test structure is immersed in a solution of the magneto-fluorescent indicator and a custom two-photon microscope maps the fluorescence of this compound. The decay kinetics of the electronic excited state provide a measure of magnetic field that is insensitive to photobleaching, indicator concentration, or local variations in optical excitation or collection efficiency. PMID:26480460

  13. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    Soto, Marcelo A.; Jaime A. Ramírez; Thévenaz, Luc

    2016-01-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities that can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progresses in the field, they do not take full advantage of all information present in the measured data, still giving room for substantial improvement over the state-of-the-art. Here we propose and experimentally demonstrat...

  14. Gabor-domain optical coherence microscopy with integrated dual-axis MEMS scanner for fast 3D imaging and metrology

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Santhanam, Anand P.; Tankam, Patrice; Rolland, Jannick P.

    2015-10-01

    Fast, robust, nondestructive 3D imaging is needed for characterization of microscopic structures in industrial and clinical applications. A custom micro-electromechanical system (MEMS)-based 2D scanner system was developed to achieve 55 kHz A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) instrument with a novel multilevel GPU architecture for high-speed imaging. GD-OCM yields high-definition volumetric imaging with dynamic depth of focusing through a bio-inspired liquid lens-based microscope design, which has no moving parts and is suitable for use in a manufacturing setting or in a medical environment. A dual-axis MEMS mirror was chosen to replace two single-axis galvanometer mirrors; as a result, the astigmatism caused by the mismatch between the optical pupil and the scanning location was eliminated and a 12x reduction in volume of the scanning system was achieved. Imaging at an invariant resolution of 2 μm was demonstrated throughout a volume of 1 × 1 × 0.6 mm3, acquired in less than 2 minutes. The MEMS-based scanner resulted in improved image quality, increased robustness and lighter weight of the system - all factors that are critical for on-field deployment. A custom integrated feedback system consisting of a laser diode and a position-sensing detector was developed to investigate the impact of the resonant frequency of the MEMS and the driving signal of the scanner on the movement of the mirror. Results on the metrology of manufactured materials and characterization of tissue samples with GD-OCM are presented.

  15. Comparison of 3D double inversion recovery and 2D STIR FLAIR MR sequences for the imaging of optic neuritis: pilot study

    Hodel, Jerome; Bocher, Anne-Laure; Pruvo, Jean-Pierre; Leclerc, Xavier [Hopital Roger Salengro, Department of Neuroradiology, Lille (France); Outteryck, Olivier; Zephir, Helene; Vermersch, Patrick [Hopital Roger Salengro, Department of Neurology, Lille (France); Lambert, Oriane [Fondation Ophtalmologique Rothschild, Department of Neuroradiology, Paris (France); Benadjaoud, Mohamed Amine [Radiation Epidemiology Team, Inserm, CESP Centre for Research in Epidemiology and Population Health, U1018, Villejuif (France); Chechin, David [Philips Medical Systems, Suresnes (France)

    2014-12-15

    We compared the three-dimensional (3D) double inversion recovery (DIR) magnetic resonance imaging (MRI) sequence with the coronal two-dimensional (2D) short tau inversion recovery (STIR) fluid-attenuated inversion recovery (FLAIR) for the detection of optic nerve signal abnormality in patients with optic neuritis (ON). The study group consisted of 31 patients with ON (44 pathological nerves) confirmed by visual-evoked potentials used as the reference. MRI examinations included 2D coronal STIR FLAIR and 3D DIR with 3-mm coronal reformats to match with STIR FLAIR. Image artefacts were graded for each portion of the optic nerves. Each set of MR images (2D STIR FLAIR, DIR reformats and multiplanar 3D DIR) was examined independently and separately for the detection of signal abnormality. Cisternal portion of optic nerves was better delineated with DIR (p < 0.001), while artefacts impaired analysis in four patients with STIR FLAIR. Inter-observer agreement was significantly improved (p < 0.001) on 3D DIR (κ = 0.96) compared with STIR FLAIR images (κ = 0.60). Multiplanar DIR images reached the best performance for the diagnosis of ON (95 % sensitive and 94 % specific). Our study showed a high sensitivity and specificity of 3D DIR compared with STIR FLAIR for the detection of ON. These findings suggest that the 3D DIR sequence may be more useful in patients suspected of ON. (orig.)

  16. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using ... and removes the need to integrate custom-made electronics into the probe. A downside of row-column addressing 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays ... was investigated. An analysis of how the ghost echoes can be attenuated was presented. Attenuating the ghost echoes was shown to be achieved by minimizing the first derivative of the apodization function. In the literature, a circular symmetric apodization function was proposed. A new apodization layout...

  17. Fabrication and characterization of a 3-D non-homogeneous tissue-like mouse phantom for optical imaging

    Avtzi, Stella; Zacharopoulos, Athanasios; Psycharakis, Stylianos; Zacharakis, Giannis

    2013-11-01

    In vivo optical imaging of biological tissue not only requires the development of new theoretical models and experimental procedures, but also the design and construction of realistic tissue-mimicking phantoms. However, most of the phantoms available currently in literature or the market, have either simple geometrical shapes (cubes, slabs, cylinders) or when realistic in shape they use homogeneous approximations of the tissue or animal under investigation. The goal of this study is to develop a non-homogeneous realistic phantom that matches the anatomical geometry and optical characteristics of the mouse head in the visible and near-infrared spectral range. The fabrication of the phantom consisted of three stages. Initially, anatomical information extracted from either mouse head atlases or structural imaging modalities (MRI, XCT) was used to design a digital phantom comprising of the three main layers of the mouse head; the brain, skull and skin. Based on that, initial prototypes were manufactured by using accurate 3D printing, allowing complex objects to be built layer by layer with sub-millimeter resolution. During the second stage the fabrication of individual molds was performed by embedding the prototypes into a rubber-like silicone mixture. In the final stage the detailed phantom was constructed by loading the molds with epoxy resin of controlled optical properties. The optical properties of the resin were regulated by using appropriate quantities of India ink and intralipid. The final phantom consisted of 3 layers, each one with different absorption and scattering coefficient (μa,μs) to simulate the region of the mouse brain, skull and skin.

  18. Backhoe 3D "gold standard" image

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  19. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and dramatically...

  20. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes to develop a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and...

  1. Light field display and 3D image reconstruction

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
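
    The "refocusing" operation described above can be illustrated with a shift-and-add sketch over sub-aperture views of a 4D light field. The following Python code is an assumed, minimal version (integer lens grid, bilinear shifts, arbitrary refocus parameter), not the authors' real-domain processing method; names and shapes are illustrative.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(lightfield, alpha):
        """Shift-and-add digital refocusing of a 4D light field of shape (U, V, H, W):
        each sub-aperture view is translated in proportion to its distance from the
        central view and to the refocus parameter alpha, then all views are averaged."""
        U, V, H, W = lightfield.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy, dx = alpha * (u - uc), alpha * (v - vc)
                out += nd_shift(lightfield[u, v], (dy, dx), order=1)
        return out / (U * V)

    if __name__ == "__main__":
        rng = np.random.default_rng(9)
        lf = rng.uniform(0, 1, (5, 5, 64, 64))                 # toy light field
        stack = [refocus(lf, a) for a in (-1.0, 0.0, 1.0)]     # three focal planes
        print([plane.shape for plane in stack])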

  2. 3D Chaotic Functions for Image Encryption

    Pawan N. Khade

    2012-05-01

    This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold's cat maps for color image encryption. Here the 2D Arnold's cat map is used for image pixel scrambling and the 3D Arnold's cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security by applying shuffling and substitution to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
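
    As an illustration of two building blocks named in the abstract, the sketch below applies a 2D Arnold's-cat-map pixel permutation followed by an XOR with a logistic-map keystream. The parameters (x0, r, iteration count), the grayscale reduction and the names are arbitrary assumptions; this is not the proposed 3D algorithm itself.

    import numpy as np

    def arnold_cat(img, iterations=1):
        """Pixel scrambling of a square N x N image: each pass replaces the pixel at
        (i, j) with the pixel at ((i + j) mod N, (j + 2i) mod N), an invertible
        cat-map style permutation."""
        n = img.shape[0]
        y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = img.copy()
        for _ in range(iterations):
            out = out[(x + y) % n, (x + 2 * y) % n]
        return out

    def logistic_keystream(length, x0=0.37, r=3.99):
        """Byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
        x, ks = x0, np.empty(length, np.uint8)
        for i in range(length):
            x = r * x * (1.0 - x)
            ks[i] = int(x * 256) % 256
        return ks

    if __name__ == "__main__":
        rng = np.random.default_rng(10)
        img = rng.integers(0, 256, (64, 64), dtype=np.uint8)      # toy grayscale image
        scrambled = arnold_cat(img, iterations=5)                  # permutation stage
        cipher = scrambled ^ logistic_keystream(img.size).reshape(img.shape)  # substitution
        print(cipher.dtype, cipher.shape)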

  3. 3D Reconstruction of NMR Images

    Peter Izak; Milan Smetana; Libor Hargas; Miroslav Hrianka; Pavol Spanik

    2007-01-01

    This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a sophisticated method using the Vision Assistant program, which is a part of LabVIEW, was chosen.
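
    Since the reconstruction is based on the marching cubes algorithm, a minimal Python example using scikit-image's implementation on a toy volume (standing in for a stack of NMR slices) is given below; the original work used LabVIEW's Vision Assistant instead, so this is only an assumed illustration of the same idea.

    import numpy as np
    from skimage import measure

    # Toy volume (a solid sphere) standing in for a stack of NMR slices.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x ** 2 + y ** 2 + z ** 2) < 20).astype(np.float32)

    # Marching cubes extracts a triangulated isosurface from the volume:
    # verts is (N, 3) vertex coordinates, faces is (M, 3) indices into verts.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)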

  4. 3D ultrafast ultrasound imaging in vivo

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  5. Optical characterization and measurements of autostereoscopic 3D displays

    Salmimaa, Marja; Järvenpää, Toni

    2008-04-01

    3D or autostereoscopic display technologies offer attractive solutions for enriching the multimedia experience. However, both characterization and comparison of 3D displays have been challenging when the definitions for the consistent measurement methods have been lacking and displays with similar specifications may appear quite different. Earlier we have investigated how the optical properties of autostereoscopic (3D) displays can be objectively measured and what are the main characteristics defining the perceived image quality. In this paper the discussion is extended to cover the viewing freedom (VF) and the definition for the optimum viewing distance (OVD) is elaborated. VF is the volume inside which the eyes have to be to see an acceptable 3D image. Characteristics limiting the VF space are proposed to be 3D crosstalk, luminance difference and color difference. Since the 3D crosstalk can be presumed to be dominating the quality of the end user experience and in our approach is forming the basis for the calculations of the other optical parameters, the reliability of the 3D crosstalk measurements is investigated. Furthermore the effect on the derived VF definition is evaluated. We have performed comparison 3D crosstalk measurements with different measurement device apertures and the effect of different measurement geometry on the results on actual 3D displays is reported.

  6. High-resolution 3D phase imaging using a partitioned detection aperture: a wave-optic analysis

    Barankov, Roman; Baritaux, Jean-Charles; Mertz, Jerome

    2015-01-01

    Quantitative phase imaging has become a topic of considerable interest in the microscopy community. We have recently described one such technique based on the use of a partitioned detection aperture, which can be operated in a single shot with an extended source [Opt. Lett. 37, 4062 (2012)]. We follow up on this work by providing a rigorous theory of our technique using paraxial wave optics, where we derive fully three-dimensional spread functions for both phase and intensity. Using these fun...

  7. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method based on the Vision Assistant program, which is part of LabVIEW, was chosen.

  8. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
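
    As a hedged illustration of the stereo-vision route (not the LLNL system itself), the following sketch computes a dense disparity map from a rectified CCD image pair with OpenCV's semi-global block matching and converts disparity to depth; the focal length, baseline and matcher settings are assumptions.

```python
# Hedged sketch of the stereo-vision route (not the LLNL system): dense disparity
# from a rectified CCD image pair via OpenCV semi-global block matching, then
# depth from disparity. Focal length, baseline and matcher settings are assumptions.
import numpy as np
import cv2

left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)    # rectified pair (placeholders)
right = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

focal_px, baseline_m = 700.0, 0.12                               # assumed camera geometry
valid = disparity > 0
with np.errstate(divide="ignore"):
    depth_m = focal_px * baseline_m / disparity                  # depth from disparity
print(np.median(depth_m[valid]) if valid.any() else "no valid disparities")
```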

  9. Flexydos3D: A new deformable anthropomorphic 3D dosimeter readout with optical CT scanning

    A new deformable polydimethylsiloxane (PDMS) based dosimeter is proposed that can be cast in an anthropomorphic shape and that can be used for 3D radiation dosimetry of deformable targets. The new material has additional favorable characteristics as it is tissue equivalent for high-energy photons, easy to make and is non-toxic. In combination with dual wavelength optical scanning, it is a powerful dosimeter for dose verification of image gated or organ tracked radiotherapy with moving and deforming targets

  10. Automatic respiration tracking for radiotherapy using optical 3D camera

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with neither patient contact nor radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary and involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. At present, no viable solution exists for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these marks or areas may change over time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking with O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, principal component analysis (PCA). The optical 3D image sequence is decomposed with PCA into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors). New
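
    A minimal sketch of the PCA decomposition described above, assuming the optical 3D surface sequence has been resampled into an array of T frames with P points each; the array names, shapes and the use of scikit-learn are illustrative, not the authors' implementation.

```python
# Minimal sketch of the PCA decomposition described above. Assumptions: the O3D
# sequence has been resampled to T frames of P surface points each; names,
# shapes and the use of scikit-learn are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

frames = np.random.rand(200, 5000, 3)            # placeholder: T x P x (x, y, z)
X = frames.reshape(frames.shape[0], -1)          # flatten each frame into one row

pca = PCA(n_components=3)                        # PCA centres the data internally
scores = pca.fit_transform(X)                    # projection onto the eigen-space

respiration_trace = scores[:, 0]                 # dominant motion pattern over time
print(pca.explained_variance_ratio_)
```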

  11. 3D-LSI technology for image sensor

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  12. Manufacturing: 3D printed micro-optics

    Juodkazis, Saulius

    2016-08-01

    Uncompromised performance of micro-optical compound lenses has been achieved by high-fidelity shape definition during two-photon absorption microfabrication. The lenses have been made directly onto image sensors and even onto the tip of an optic fibre.

  13. ICER-3D Hyperspectral Image Compression Software

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
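
    The transform stage can be illustrated with a 3D multilevel wavelet decomposition of a hyperspectral cube using PyWavelets, as sketched below; this is only an analogy to ICER-3D's wavelet front end and omits its context modeling, entropy coding and error-containment partitions, with the wavelet choice and cube size being assumptions.

```python
# 3D multilevel wavelet decomposition of a hyperspectral cube with PyWavelets,
# as an analogy to the transform stage only; ICER-3D's context modeling, entropy
# coding and error-containment partitions are not reproduced, and the wavelet
# choice and cube size are assumptions.
import numpy as np
import pywt

cube = np.random.rand(64, 128, 128)                    # (bands, rows, cols) placeholder
coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)   # 3D discrete wavelet transform

approx = coeffs[0]                    # coarsest approximation subband
detail_keys = list(coeffs[1].keys())  # detail subbands, e.g. 'aad', 'ada', ...
print(approx.shape, detail_keys)
```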

  14. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  15. Acquisition and applications of 3D images

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable objects by laser engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  16. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and reconstruction of objects' surfaces. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions, each containing a single 3D object. Each region is inscribed in a convex, smooth, closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the direction of the force. The penalty function is defined so as to stop the evolution of surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability (convergence) condition, truncation error, and computational complexity of the algorithm are determined. The results obtained, along with the advantages and disadvantages of the method, are discussed at the end of the paper.
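
    A minimal forward-difference sketch in the spirit of the algorithm described above is given below; it applies explicit heat-equation smoothing to a 3D array and omits the paper's centripetal force and penalty terms, with the grid spacing and time step chosen only to satisfy the usual stability condition.

```python
# Explicit forward-difference sketch of heat-equation smoothing on a 3D image.
# The paper's centripetal force and penalty terms are omitted; unit grid spacing
# and the time step are illustrative (dt <= 1/6 keeps the explicit scheme stable).
import numpy as np


def heat_step(u: np.ndarray, dt: float = 1.0 / 12.0) -> np.ndarray:
    """One explicit Euler step of du/dt = laplacian(u) on a periodic 3D grid."""
    lap = (
        np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
        + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)
        + np.roll(u, 1, axis=2) + np.roll(u, -1, axis=2)
        - 6.0 * u
    )
    return u + dt * lap


if __name__ == "__main__":
    u = np.random.rand(32, 32, 32)   # toy 3D image
    for _ in range(50):
        u = heat_step(u)
    print(u.min(), u.max())          # values contract toward the mean
```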

  17. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with the matched features, we obtain disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix obtained with the normalized 8-point algorithm in a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion. This is required because the camera motion obtained from the Essential matrix is determined only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between the lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems that need not only depth information but also camera motion parameters in real time.
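
    The pose-recovery chain outlined above can be sketched with OpenCV as follows; the intrinsic matrix, the synthetic point sets and the RANSAC thresholds are placeholders, and the d-motion scale estimation and multi-lens optimization of the paper are not reproduced.

```python
# Sketch of the pose-recovery chain with OpenCV: fundamental matrix by RANSAC,
# essential matrix from the assumed intrinsics, then relative rotation and
# up-to-scale translation. Point sets, K and thresholds are placeholders; the
# d-motion scale estimation and multi-lens optimization are not reproduced.
import numpy as np
import cv2

# stand-ins for feature points tracked between two lens positions
pts1 = (np.random.rand(100, 2) * [640, 480]).astype(np.float32)
pts2 = pts1 + np.random.randn(100, 2).astype(np.float32)
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                        # assumed intrinsics

F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
if F is not None:
    E = K.T @ F[:3, :3] @ K                            # essential from fundamental
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # rotation, unit translation
    print("R =\n", R, "\nt (up to scale) =", t.ravel())
```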

  18. Full-field optical deformation measurement in biomechanics: digital speckle pattern interferometry and 3D digital image correlation applied to bird beaks.

    Soons, Joris; Lava, Pascal; Debruyne, Dimitri; Dirckx, Joris

    2012-10-01

    In this paper two easy-to-use optical setups for the validation of biomechanical finite element (FE) models are presented. First, we show an easy-to-build Michelson digital speckle pattern interferometer (DSPI) setup, yielding the out-of-plane displacement. We also introduce three-dimensional digital image correlation (3D-DIC), a stereo photogrammetric technique. Both techniques are non-contact and full field, but they differ in nature and have different magnitudes of sensitivity. In this paper we successfully apply both techniques to validate a multi-layered FE model of a small bird beak, a strong but very light biological composite. DSPI can measure very small deformations, with potentially high signal-to-noise ratios. Its high sensitivity, however, results in high stability requirements and makes it hard to use it outside an optical laboratory and on living samples. In addition, large loads have to be divided into small incremental load steps to avoid phase unwrapping errors and speckle de-correlation. 3D-DIC needs much larger displacements, but automatically yields the strains. It is more flexible, does not have stability requirements, and can easily be used as an optical strain gage. PMID:23026697

  19. Visualization of 3D optical lattices

    Lee, Hoseong; Clemens, James

    2016-05-01

    We describe the visualization of 3D optical lattices based on Sisyphus cooling implemented with open source software. We plot the adiabatic light shift potentials found by diagonalizing the effective Hamiltonian for the light shift operator. Our program incorporates a variety of atomic ground state configurations with total angular momentum ranging from j = 1 / 2 to j = 4 and a variety of laser beam configurations including the two-beam lin ⊥ lin configuration, the four-beam umbrella configuration, and four beams propagating in two orthogonal planes. In addition to visualizing the lattice the program also evaluates lattice parameters such as the oscillation frequency for atoms trapped deep in the wells. The program is intended to help guide experimental implementations of optical lattices.

  20. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    Cambridge : The Electromagnetics Academy, 2010, s. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  1. Feasibility of 3D harmonic contrast imaging

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suit

  2. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
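
    A hedged sketch of the layer-wise porosity evaluation, assuming the reconstructed stack has already been binarized into pores versus material by a simple global threshold; the threshold, the axis convention and the toy data are assumptions, not the authors' algorithm.

```python
# Hedged sketch of layer-wise porosity evaluation: binarize the reconstructed
# stack into pores vs. material with a simple global threshold and compute the
# pore fraction per layer. The threshold, axis convention and toy data are
# assumptions, not the authors' algorithm.
import numpy as np

volume = np.random.rand(200, 256, 256)          # placeholder for a FIB/SEM reconstruction
pores = volume < volume.mean()                   # crude global threshold (assumption)

# axis 0 assumed perpendicular to the membrane surface
porosity_per_layer = pores.mean(axis=(1, 2))     # pore fraction in each depth layer
print(porosity_per_layer[:5])
```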

  3. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of ... treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems; this subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed.

  4. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced technologies in scanning and 3-D imaging in current ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
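
    A minimal SimpleITK sketch of demons-type registration between two grayscale fundus frames is shown below; it uses the library's DiffeomorphicDemonsRegistrationFilter on synthetic images, the iteration count and field smoothing are placeholders, and the paper's composite multi-view scheme and cup segmentation are not reproduced.

```python
# Minimal SimpleITK sketch of demons-type registration between two grayscale
# frames (synthetic here); iteration count and field smoothing are placeholders,
# and the paper's composite multi-view scheme and cup segmentation are not shown.
import numpy as np
import SimpleITK as sitk

fixed = sitk.GetImageFromArray(np.random.rand(128, 128).astype(np.float32))
moving = sitk.GetImageFromArray(np.random.rand(128, 128).astype(np.float32))

demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)                # Gaussian smoothing of the field

displacement = demons.Execute(fixed, moving)     # dense displacement field
transform = sitk.DisplacementFieldTransform(displacement)
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
print(sitk.GetArrayFromImage(registered).mean())
```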

  5. Metrological characterization of 3D imaging devices

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways for the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason several national and international organizations in the last ten years have been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art about the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of spheres rigidly connected, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.

  6. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Peter Cloetens

    2011-05-01

    Full Text Available Three-dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic beam micro-tomography provides several new features. Along with enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  7. 3D Model Assisted Image Segmentation

    Jayawardena, Srimal; Hutter, Marcus

    2012-01-01

    The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation...

  8. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultrasound imaging result in expensive systems, which limits the more widespread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost ... capable of producing 62+62-element row-column addressed CMUT arrays with negligible charging issues. The arrays include an integrated apodization, which reduces the ghost echoes produced by the edge waves in such arrays by 15.8 dB. The acoustical cross-talk is measured on fabricated arrays, showing a 24 d...

  9. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    2001-01-01

    An airborne 3D image system which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner has been developed successfully. The spectral scanner and SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of the 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with appropriate software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real time; therefore, the efficiency of the 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing the DSM and mosaicing strips. The principle of the 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirements of quasi-real-time applications.

  10. 3D Human cartilage surface characterization by optical coherence tomography

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman’s rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D

  11. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  12. 3D Buildings Extraction from Aerial Images

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach used is that it allows each building to be processed individually, finding the parameters for building feature extraction more precisely for each area. In the early stage, the presented technique extracts line segments only inside manually specified areas. The rooftop hypothesis is then used to determine a subset of quadrangles that could form building roofs from the set of lines and corners extracted in the previous stage. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This allows an accurate selection of building roofs, removing false positives, and identification of their global 3D coordinates given the camera's internal parameters and coordinates. The last step of the image matching is based on geometric constraints rather than traditional correlation; correlation is applied only in highly restricted areas in order to find coordinates more precisely, significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of Milan's aerial images and shows highly accurate results.

  13. Precision 3-D microscopy with intensity modulated fibre optic scanners

    Olmos, P.

    2016-01-01

    Optical 3-D imagers constitute a family of precise and useful instruments, easily available on the market in a wide variety of configurations and performance levels. However, besides their cost, they usually provide an image of the object (i.e. a more or less faithful representation of reality) rather than a true reconstruction of the object. Depending on the detailed working principles of the equipment, this reconstruction may become a challenging task. Here a very simple yet reliable device is described; it is able to form images of opaque objects by illuminating them with an optical fibre and collecting the reflected light with another fibre. Its 3-D capability comes from the spatial filtering imposed by the fibres together with their movement (scanning) along the three directions: transversal (surface) and vertical. This unsophisticated approach allows the entire optical process to be modelled accurately and the desired reconstruction to be performed, extracting the information about the surface that is of interest: its profile and its reflectance, ultimately related to the type of material.

  14. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  15. The Atlas-3D project - IX. The merger origin of a fast and a slow rotating Early-Type Galaxy revealed with deep optical imaging: first results

    Duc, Pierre-Alain; Serra, Paolo; Michel-Dansac, Leo; Ferriere, Etienne; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frederic; Bureau, Martin; Cappellari, Michele; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; McDermid, Richard M; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Scott, Nicholas; Weijmans, Anne-Marie; Young, Lisa M

    2011-01-01

    The mass assembly of galaxies leaves imprints in their outskirts, such as shells and tidal tails. The frequency and properties of such fine structures depend on the main acting mechanisms - secular evolution, minor or major mergers - and on the age of the last substantial accretion event. We use this to constrain the mass assembly history of two apparently relaxed nearby Early-Type Galaxies (ETGs) selected from the Atlas-3D sample, NGC 680 and NGC 5557. Our ultra deep optical images obtained with MegaCam on the Canada-France-Hawaii Telescope reach 29 mag/arcsec^2 in the g-band. They reveal very low-surface brightness (LSB) filamentary structures around these ellipticals. Among them, a gigantic 160 kpc long tail East of NGC 5557 hosts gas-rich star-forming objects. NGC 680 exhibits two major diffuse plumes apparently connected to extended HI tails, as well as a series of arcs and shells. Comparing the outer stellar and gaseous morphology of the two ellipticals with that predicted from models of colliding galax...

  16. 3D Image Reconstruction from Compton camera data

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on $\mathbb{R}^3$ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  17. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. This proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum- length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image are mapped inversely through the lenslet array according the ray tracing theory. Next, the 2-D EIA is encrypted by LC-...

  18. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  19. Optical 3-D-measurement techniques : a survey

    Tiziani, Hans J.

    1989-01-01

    Close range photogrammetry will be more frequently applied in industry for 3-D-sensing when real time processing can be applied. Computer vision, machine vision, robot vision are in fact synonymous with real time photogrammetry. This overview paper concentrates on optical methods for 3-D-measurements. Incoherent and coherent methods for 3-D-sensing will be presented. Particular emphasis is put on high precision 3-D-measurements. Some of the work of our laboratory will be reported.

  20. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As a key feature, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since Integral Imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have been aimed at overcoming some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  1. Perception of detail in 3D images

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  2. Advanced 3D-reconstruction of biological specimen monitored by non-invasive optical tomography

    Imaging of intricate and delicate subcellular structures along with reliable 3D-reconstruction of cells and tissues may be achieved on the basis of confocal laser scanning microscopy (optical tomography) provided that certain criteria such as proper loading of fluorescent dyes, image acquisition under defined electro-optical conditions, suitable image pre- and postprocessing, etc., are taken into account prior to volume- or surface-rendering for 3D-visualization. (author)

  3. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Liang-Chia Chen; Manh-Trung Le; Xuan-Loc Nguyen

    2012-01-01

    This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object’s surface characteristics in robot pick and place manipulation. In the method, 3-D geometry of industrial parts can be rapidly acquired by the developed one-shot imaging optical probe based on Fourier Transform Profilometry (FTP) by using digital-fringe projection at a camera’s maximum sensing speed. Following this, the acquired range image can be effe...

  4. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to realize a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  5. Combining different modalities for 3D imaging of biological objects

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if the x-ray tomography is used as presented in the paper

  6. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown in this paper, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  7. Optical experiments on 3D photonic crystals

    Koenderink, F.; Vos, W.

    2003-01-01

    Photonic crystals are optical materials that have an intricate structure with length scales of the order of the wavelength of light. The flow of photons is controlled in a manner analogous to how electrons propagate through semiconductor crystals, i.e., by Bragg diffraction and the formation of band

  8. 3D Image Synthesis for B-Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space into a 3D image in 3D discrete space.

  9. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry [J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765] enables simultaneous acquisition of spectral information and 3D spatial information of an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method regarding the 3D spatial resolution of digital holography. PMID:27139648

  10. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
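
    The combination step described above can be sketched as follows, assuming disparity maps for the retrieved stereopairs are already available; the per-pixel median fusion and the naive forward warp are simplified stand-ins (no occlusion or hole handling), and all array names are placeholders.

```python
# Sketch of the combination step: fuse the disparity maps of the retrieved
# stereopairs with a per-pixel median and warp the 2D query into a right view.
# Occlusion/hole handling is omitted and all arrays are placeholders.
import numpy as np

query = np.random.rand(240, 320)                    # 2D input image (placeholder)
disparities = np.random.rand(10, 240, 320) * 16     # disparity maps of matched stereopairs

fused = np.median(disparities, axis=0)              # per-pixel median disparity

# naive forward warp: shift each pixel by its rounded disparity
h, w = query.shape
right = np.zeros_like(query)
cols = np.arange(w)
for row in range(h):
    x_new = np.clip(cols - np.round(fused[row]).astype(int), 0, w - 1)
    right[row, x_new] = query[row, cols]
print(right.shape)
```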

  11. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    X. W. Li

    2013-08-01

    Full Text Available A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3-D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA. Using the computational integral imaging reconstruction (CIIR) technique, a 3-D object is subsequently reconstructed on the output plane from the recovered 2-D EIA. Because the 2-D EIA is composed of a number of elemental images, each having its own perspective of the 3-D image, the 3-D image can be successfully reconstructed from partial data even if the encrypted image is seriously damaged. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.

  12. Performance assessment of 3D surface imaging technique for medical imaging applications

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, a systematic approach for evaluating the performance of different 3D surface imaging systems has been lacking. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  13. Automatic structural matching of 3D image data

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  14. The Atlas3D project -- XXIX. The new look of early-type galaxies and surrounding fields disclosed by extremely deep optical images

    Duc, Pierre-Alain; Karabal, Emin; Cappellari, Michele; Alatalo, Katherine; Blitz, Leo; Bournaud, Frederic; Bureau, Martin; Crocker, Alison F; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; McDermid, Richard M; Michel-Dansac, Leo; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Paudel, Sanjaya; Sarzi, Marc; Scott, Nicholas; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M

    2014-01-01

    Galactic archeology based on star counts is instrumental to reconstruct the past mass assembly of Local Group galaxies. The development of new observing techniques and data reduction, coupled with the use of sensitive large field of view cameras, now allows us to pursue this technique in more distant galaxies exploiting their diffuse low surface brightness (LSB) light. As part of the Atlas3D project, we have obtained with the MegaCam camera at the Canada-France-Hawaii Telescope extremely deep, multi-band images of nearby early-type galaxies. We present here a catalog of 92 galaxies from the Atlas3D sample, that are located in low to medium density environments. The observing strategy and data reduction pipeline, that achieve a gain of several magnitudes in the limiting surface brightness with respect to classical imaging surveys, are presented. The size and depth of the survey is compared to other recent deep imaging projects. The paper highlights the capability of LSB-optimized surveys at detecting new pr...

  15. Parallel Processor for 3D Recovery from Optical Flow

    Jose Hugo Barron-Zambrano

    2009-01-01

    Full Text Available 3D recovery from motion has received major research effort in computer vision in recent years. The main problem lies in the number of operations and memory accesses that the majority of existing techniques must perform when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main features are the maximal reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture, as well as those from processor synthesis, are presented.
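
    As a hedged illustration of the optical-flow front end (not the proposed parallel processor), the sketch below computes dense flow between two frames with OpenCV's Farneback method; the frames and parameters are placeholders, and the 3D-recovery stage itself is not reproduced.

```python
# Hedged illustration of the optical-flow front end (not the proposed parallel
# processor): dense flow between two frames with OpenCV's Farneback method.
# Frames and parameters are placeholders; the 3D-recovery stage is not shown.
import numpy as np
import cv2

prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # placeholder frames
curr = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

u, v = flow[..., 0], flow[..., 1]   # per-pixel horizontal / vertical motion
print(float(u.mean()), float(v.mean()))
```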

  16. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  17. A 3D Optical Metamaterial Made by Self-Assembly

    Vignolini, Silvia

    2011-10-24

    Optical metamaterials have unusual optical characteristics that arise from their periodic nanostructure. Their manufacture requires the assembly of 3D architectures with structure control on the 10-nm length scale. Such a 3D optical metamaterial, based on the replication of a self-assembled block copolymer into gold, is demonstrated. The resulting gold replica has a feature size that is two orders of magnitude smaller than the wavelength of visible light. Its optical signature reveals an archetypal Pendry wire metamaterial with linear and circular dichroism. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A 3D image analysis tool for SPECT imaging

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
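
    The record mentions intensity-based thresholding and quantitative volume calculation from the segmented 3D SPECT data. A minimal sketch of that thresholding-plus-volume step (not the fuzzy-connectedness pipeline of the paper) could look like the following; the voxel size, threshold, and all names are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_and_measure(volume, threshold, voxel_size_mm=(4.0, 4.0, 4.0)):
    """Threshold a 3D SPECT-like volume, keep the largest connected
    component, and return its binary mask plus its volume in millilitres."""
    mask = volume >= threshold
    labels, n = ndimage.label(mask)                       # 3D connected components
    if n == 0:
        return mask, 0.0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)            # biggest structure only
    voxel_ml = np.prod(voxel_size_mm) / 1000.0            # mm^3 -> mL
    return largest, largest.sum() * voxel_ml

# Toy usage: a bright 10x10x10 cube in a noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(32, 32, 32))
vol[10:20, 10:20, 10:20] += 1.0
mask, vol_ml = segment_and_measure(vol, threshold=0.5)
print(mask.sum(), round(vol_ml, 1))   # ~1000 voxels, ~64.0 mL
```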

  19. Towards magnetic 3D x-ray imaging

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve speed, size and energy efficiency of spin driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of large interest to understand e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals, nanowires or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy combining X-MCD as element specific magnetic contrast mechanism, high spatial and temporal resolution due to the Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley CA) a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films and compare to other 3D imaging approaches e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).

  20. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  1. Development of 3D microwave imaging reflectometry in LHD (invited).

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  2. 3D Imaging with Structured Illumination for Advanced Security Applications

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  3. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas; Bai, Li

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized...

  4. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  5. 3D Interpolation Method for CT Images of the Lung

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. There are discontinuities among neighboring CT images due to the beating of the heart, if no special techniques are used in taking CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are captured. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed into optimal CT images that best fit the standard heart. Since correct transformation of images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.
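
    The record's central step is interpolating between geometrically transformed CT slices. The sketch below shows only plain linear inter-slice interpolation as a stand-in for the area-oriented method the authors propose; the shapes and parameters are illustrative.

```python
import numpy as np

def interpolate_slices(slices, n_between=3):
    """Build a denser stack by linearly interpolating between adjacent 2D
    slices. This is plain intensity interpolation, not the area-oriented
    method of the record, but it shows where such an interpolator plugs
    into 3D reconstruction from CT slices."""
    slices = np.asarray(slices, dtype=float)
    out = []
    for a, b in zip(slices[:-1], slices[1:]):
        for t in np.linspace(0.0, 1.0, n_between + 1, endpoint=False):
            out.append((1.0 - t) * a + t * b)   # blend neighbouring slices
    out.append(slices[-1])
    return np.stack(out)

# Toy usage: 5 slices of 64x64 become a stack of 4*(3+1)+1 = 17 slices.
stack = interpolate_slices(np.random.rand(5, 64, 64), n_between=3)
print(stack.shape)   # (17, 64, 64)
```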

  6. Fiber optic coherent laser radar 3D vision system

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R. [Coleman Research Corp., Springfield, VA (United States); Wagner, K.; Weaver, S.; Xu, Jieping [Colorado Univ., Boulder, CO (United States)

    1996-12-31

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  7. Fiber optic coherent laser radar 3D vision system

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution

  8. Monolens 3-D Imaging and Measurement System

    Hošek, Jan

    Praha : Czech Technical University, 2006 - (Říha, B.), s. 464-465 ISBN 80-01-03439-9. - (CTU reports. vol. 10). [CTU Reports Workshop 2006. Praha (CZ), 20.02.2006-24.02.2006] Institutional research plan: CEZ:AV0Z20760514 Keywords : measurement * anamorphot * sphere Subject RIV: BH - Optics, Masers, Lasers

  9. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume; Dufait, Remi; Jensen, Jørgen Arendt

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60 in both the azimuth and elevation direction and 150mm in depth. ...

  10. Preliminary examples of 3D vector flow imaging

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev;

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental...... ultrasound scanner SARUS on a flow rig system with steady flow. The vessel of the flow-rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal...... acquisition as opposed to magnetic resonance imaging (MRI). The results demonstrate that the 3D TO method is capable of performing 3D vector flow imaging....

  11. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant objects (such as signs and building fronts) in the roadside for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  12. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  13. Holographic Image Plane Projection Integral 3D Display

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  14. 3-D capacitance density imaging system

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  15. 3D imaging of neutron tracks using confocal microscopy

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  16. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
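
    As a rough, hedged sketch of the two-stage idea (match a model over orientations at every pixel, then cue the strongest unambiguous local maxima), the following uses a simple zero-mean template correlation rather than the paper's actual matcher; all names, sizes, and thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import fftconvolve

def degree_of_match(image, template, angles_deg):
    """Template-correlation degree of match, maximized over template
    orientations, evaluated at every pixel ('same'-size output)."""
    img = (image - image.mean()) / (image.std() + 1e-9)
    best = np.full(image.shape, -np.inf)
    for ang in angles_deg:
        t = ndimage.rotate(template, ang, reshape=True, order=1)
        t = (t - t.mean()) / (t.std() + 1e-9) / t.size        # zero-mean, scaled kernel
        score = fftconvolve(img, t[::-1, ::-1], mode="same")   # correlation via convolution
        best = np.maximum(best, score)
    return best

def cue_locations(match, n_cues=5, neighborhood=9):
    """Keep local maxima of the match surface and return the n_cues
    strongest pixel locations, strongest first."""
    local_max = match == ndimage.maximum_filter(match, size=neighborhood)
    ys, xs = np.nonzero(local_max)
    order = np.argsort(match[ys, xs])[::-1][:n_cues]
    return [(int(r), int(c)) for r, c in zip(ys[order], xs[order])]

# Toy usage: plant a small rotated bar at a known spot and recover it as the top cue.
rng = np.random.default_rng(1)
scene = rng.normal(0, 0.1, (128, 128))
bar = np.zeros((9, 21))
bar[3:6, :] = 1.0
scene[60:69, 40:61] += ndimage.rotate(bar, 30, reshape=False, order=1)
match = degree_of_match(scene, bar, angles_deg=range(0, 180, 15))
print(cue_locations(match, n_cues=1))   # near (64, 50)
```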

  17. Modification of a Colliculo-thalamocortical Mouse Brain Slice, Incorporating 3-D printing of Chamber Components and Multi-scale Optical Imaging.

    Slater, Bernard J; Fan, Anthony Y; Stebbings, Kevin A; Saif, M Taher A; Llano, Daniel A

    2015-01-01

    The ability of the brain to process sensory information relies on both ascending and descending sets of projections. Until recently, the only way to study these two systems and how they interact has been with the use of in vivo preparations. Major advances have been made with acute brain slices containing the thalamocortical and cortico-thalamic pathways in the somatosensory, visual, and auditory systems. With key refinements to our recent modification of the auditory thalamocortical slice(1), we are able to more reliably capture the projections between most of the major auditory midbrain and forebrain structures: the inferior colliculus (IC), medial geniculate body (MGB), thalamic reticular nucleus (TRN), and the auditory cortex (AC). With portions of all these connections retained, we are able to answer detailed questions that complement the questions that can be answered with in vivo preparations. The use of flavoprotein autofluorescence imaging enables us to rapidly assess connectivity in any given slice and guide the ensuing experiment. Using this slice in conjunction with recording and imaging techniques, we are now better equipped to understand how information processing occurs at each point in the auditory forebrain as information ascends to the cortex, and the impact of descending cortical modulation. 3-D printing to build slice chamber components permits double-sided perfusion and broad access to networks within the slice and maintains the widespread connections key to fully utilizing this preparation. PMID:26437382

  18. An optical real-time 3D measurement for analysis of facial shape and movement

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system that allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement should help achieve better outcomes in plastic surgery.
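
    The record refers to real-time fringe capture and 3-D phase unwrapping. As a hedged illustration of that processing chain (not the authors' specific algorithm), the sketch below computes a wrapped phase map with the standard four-step phase-shifting formula and unwraps it row by row; the synthetic phase pattern and all parameters are illustrative.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2
    (standard four-step phase-shifting formula)."""
    return np.arctan2(i4 - i2, i1 - i3)

def unwrap_rows(phi):
    """Simple row-wise 1D phase unwrapping; real systems use more robust
    2D/temporal unwrapping, as the record discusses."""
    return np.unwrap(phi, axis=1)

# Toy usage: a smooth phase ramp plus a bump, re-encoded as four fringe images.
x = np.linspace(0, 8 * np.pi, 256)
y = np.linspace(0, 1, 128)[:, None]
true_phase = x[None, :] + 3.0 * np.exp(-((x[None, :] - 12) ** 2 + (8 * y - 4) ** 2))
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [1 + 0.8 * np.cos(true_phase + s) for s in shifts]
phi = unwrap_rows(wrapped_phase(*frames))
# Unwrapped phase matches the truth up to a constant offset per row.
print(np.allclose(phi - phi[:, :1], true_phase - true_phase[:, :1], atol=1e-6))
```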

  19. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    2010-01-01

    Roč. 6, č. 7 (2010), s. 617-620. ISSN 1931-7360 R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : reconstruction methods * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  20. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  1. Acoustic 3D imaging of dental structures

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array material to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  2. Imaging 3D strain field monitoring during hydraulic fracturing processes

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during hydraulic fracturing process. Optical fibers embedded in concrete were used to monitor 3D strain field build-up with external hydraulic pressures. High spatial resolution strain fields were interrogated by the in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optics sensor scheme presented in this paper provides scientists and engineers a unique laboratory tool to understand the hydraulic fracturing processes in various rock formations and its impacts to environments.

  3. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  4. 3D optical manipulation of a single electron spin

    Geiselmann, Michael; Renger, Jan; Say, Jana M; Brown, Louise J; de Abajo, F Javier García; Koppens, Frank; Quidant, Romain

    2013-01-01

    Nitrogen vacancy (NV) centers in diamond are promising building blocks for quantum optics [1, 2], spin-based quantum information processing [3, 4], and high-resolution sensing [5-13]. Yet, fully exploiting these capabilities of single NV centers requires strategies to accurately manipulate them. Here, we use optical tweezers as a tool to achieve deterministic trapping and 3D spatial manipulation of individual nano-diamonds hosting a single NV spin. Remarkably, we find the NV axis is nearly fixed inside the trap and can be controlled in situ by adjusting the polarization of the trapping light. By combining this unique spatial and angular control with coherent manipulation of the NV spin and fluorescence lifetime measurements near an integrated photonic system, we establish the optically trapped NV center as a novel route for both 3D vectorial magnetometry and sensing of the local density of optical states.

  5. A 3D Model Reconstruction Method Using Slice Images

    LI Hong-an; KANG Bao-sheng

    2013-01-01

    Aiming at achieving a high-accuracy 3D model from slice images, a new model reconstruction method using slice images is proposed. To extract the outermost contours from the slice images, an improved GVF-Snake model with an optimized force field, combined with a ray method, is employed. The 3D model is then reconstructed by contour connection using an improved shortest-diagonal method and a judgment function for contour fracture. The results show that the accuracy of the reconstructed 3D model is improved.

  6. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of its most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular image sequences are introduced. The main steps include camera calibration, the matching of motion and stereo images, establishing 3D feature point correspondences, and resolving the motion parameters. Finally, experimental results of acquiring, with the described method, the motion parameters of objects moving in a straight line with uniform velocity and with uniform acceleration from real binocular image sequences are presented.
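
    Resolving motion parameters from binocular sequences rests on recovering 3D feature points from the two calibrated views. A minimal sketch of that triangulation step (linear DLT triangulation, not the paper's full pipeline) is shown below; the camera parameters and the test point are made-up assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from its projections x1, x2
    (pixel coordinates) in two calibrated views with 3x4 projection
    matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                     # de-homogenize

# Toy usage: two cameras with a 0.2 m baseline along x, identical intrinsics.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.3, -0.1, 2.0])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))   # True
```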

  7. Morphometrics, 3D Imaging, and Craniofacial Development.

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  8. ROIC for gated 3D imaging LADAR receiver

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low-noise optical receivers to achieve detection of fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit for a hybrid e-APD focal plane array (FPA) with 100 um pitch for 3D LADAR was designed as a gated optical receiver. The ROIC works at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), a bias circuit and a timing control module. In particular, the preamplifier uses a capacitive feedback transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The main part of the column-level circuit is a precision multiply-by-two circuit implemented as a switched-capacitor circuit, which is well suited to ROIC signal processing. The output driver uses a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer amplifier is rail-to-rail. In active imaging mode, the integration time is 80 ns; for integrated currents from 200 nA to 4 uA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.

  9. Software for 3D diagnostic image reconstruction and analysis

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software prepared in our laboratories devoted to 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. The results of the anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at
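
    The record describes coupling two independently acquired 3D images through four pairs of user-selected markers. A hedged sketch of how such a rigid transform can be computed from paired markers (a standard SVD-based Kabsch/Procrustes solution, not necessarily the program's internal method) follows; the marker coordinates are invented for the example.

```python
import numpy as np

def rigid_transform_from_markers(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping the
    marker points `src` onto `dst` (both Nx3, N >= 3), via the SVD-based
    Kabsch/Procrustes solution."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: four CT markers and the same markers seen in a 3D ultrasound volume.
rng = np.random.default_rng(2)
markers_ct = rng.uniform(-50, 50, size=(4, 3))     # mm, hypothetical positions
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 12.0])
markers_us = markers_ct @ R_true.T + t_true
R, t = rigid_transform_from_markers(markers_ct, markers_us)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```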

  10. BM3D Frames and Variational Image Deblurring

    Danielyan, Aram; Egiazarian, Karen

    2011-01-01

    A family of the Block Matching 3-D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of the single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming a valuable potential of BM3D-frames as an advanced image modeling tool.
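
    For orientation only, the two problem statements can be written in a generic variational form. These are illustrative textbook-style formulations, not the exact BM3D-frame functionals of the paper: a single objective with a sparsity penalty on frame coefficients, and a decoupled scheme that alternates a deblurring step and a denoising step.

```latex
% Single-objective (analysis) formulation: blur operator A, observation y,
% frame analysis operator \Phi, sparsity weight \lambda.
\min_{x}\ \tfrac{1}{2}\,\lVert y - A x\rVert_2^2 \;+\; \lambda\,\lVert \Phi x\rVert_1

% Decoupled scheme in the spirit of the Nash-equilibrium formulation:
% alternate a deblurring step in x and a denoising (shrinkage) step in w.
x^{k+1} = \arg\min_{x}\ \tfrac{1}{2}\,\lVert y - A x\rVert_2^2
          + \tfrac{\mu}{2}\,\lVert \Phi x - w^{k}\rVert_2^2,
\qquad
w^{k+1} = \arg\min_{w}\ \tfrac{1}{2}\,\lVert \Phi x^{k+1} - w\rVert_2^2
          + \lambda\,\lVert w\rVert_1
```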

  11. Image based 3D city modeling : Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, each with different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study of creating complete 3D city models from images is available. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each software package. Finally, the study concludes that every software package has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  12. 3D imaging of aortic aneurysma using spiral CT

    The use of 3D reconstructions (3D display technique and maximum intensity projection) in spiral CT for the diagnostic evaluation of aortic aneurysms is explained. Available data covering 12 aneurysms of the abdominal and thoracic aorta (10 cases of aneurysma verum, 2 cases of aneurysma dissecans) were selected to verify the value of 3D images in comparison to transversal CT displays. The 3D reconstructions from spiral CT, unlike projection angiography, give insight into the vessel from various points of view. Such information is helpful for quickly gathering a picture of the volume and contours of a pathological process in the vessel. 3D post-processing of the data is advisable if the comparison of tomograms and projection images produces findings of unclear definition which need clarification prior to surgery. (orig.)

  13. 3D optical measuring technologies for dimensional inspection

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method, and the development of a hole inspection method based on diffractive optical elements. Ensuring the safety and high operational reliability of nuclear reactors and running trains requires noncontact inspection of the geometrical parameters of their components. For these tasks we have developed methods and produced the technical vision measuring systems LMM, CONTROL and PROFILE, and technologies for non-contact 3D dimensional inspection of grid spacers and fuel elements for the VVER-1000 and VVER-440 nuclear reactors, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometrical parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing at atomic and railway companies are presented

  14. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    Full Text Available In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla based on an adaptive triangulation that can manage contours with complex topologies. Secondly, we present two specific treatments applied directly to the obtained 3D model: automatic correction of the occlusion between the mandible and the maxilla, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are made available via a client/server application with the aim of allowing telediagnosis and treatment.

  15. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
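
    The reference-slice selection described in the record weighs image entropy together with the registration error. A much-reduced sketch using only the entropy criterion is given below; the bin count, array shapes and the toy stack are assumptions.

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of a 2D slice's intensity histogram."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_reference_slice(stack):
    """Pick the slice with the highest intensity entropy as the registration
    reference. Only the entropy criterion is used here; the record's method
    also folds in the registration mean-square error iteratively."""
    scores = [slice_entropy(s) for s in stack]
    return int(np.argmax(scores)), scores

# Toy usage: a mostly flat stack in which one slice carries the most structure.
rng = np.random.default_rng(3)
stack = rng.normal(0.0, 0.01, size=(10, 64, 64))
stack[6] += rng.uniform(0.0, 1.0, size=(64, 64))   # richest content
idx, _ = pick_reference_slice(stack)
print(idx)   # 6
```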

  16. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented, which combines both color and structural information, relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined, structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  17. Implementation of 3D Optical Scanning Technology for Automotive Applications.

    Kuş, Abdil

    2009-01-01

    Reverse engineering (RE) is a powerful tool for generating a CAD model from the 3D scan data of a physical part that lacks documentation or has changed from the original CAD design of the part. The process of digitizing a part and creating a CAD model from 3D scan data is less time consuming and provides greater accuracy than manually measuring the part and designing the part from scratch in CAD. 3D optical scanning technology is one of the measurement methods which have evolved over the last few years and it is used in a wide range of areas from industrial applications to art and cultural heritage. It is also used extensively in the automotive industry for applications such as part inspections, scanning of tools without CAD definition, scanning the casting for definition of the stock (i.e. the amount of material to be removed from the surface of the castings) model for CAM programs and reverse engineering. In this study two scanning experiments of automotive applications are illustrated. The first one examines the processes from scanning to re-manufacturing the damaged sheet metal cutting die, using a 3D scanning technique and the second study compares the scanned point clouds data to 3D CAD data for inspection purposes. Furthermore, the deviations of the part holes are determined by using different lenses and scanning parameters. PMID:22573995

  18. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume;

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...... phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60 in both the azimuth and elevation direction and 150mm in depth. This results for both techniques in a frame rate of 18 Hz. The implemented synthetic aperture technique...... cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering media, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels...

  19. Advanced 3-D Ultrasound Imaging.:3-D Synthetic Aperture Imaging and Row-column Addressing of 2-D Transducer Arrays

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinic...

  20. Constructing 3D microtubule networks using holographic optical trapping

    Bergman, J.; Osunbayo, O.; Vershinin, M.

    2015-01-01

    Developing abilities to assemble nanoscale structures is a major scientific and engineering challenge. We report a technique which allows precise positioning and manipulation of individual rigid filaments, enabling construction of custom-designed 3D filament networks. This approach uses holographic optical trapping (HOT) for nano-positioning and microtubules (MTs) as network building blocks. MTs are desirable engineering components due to their high aspect ratio, rigidity, and their ability t...

  1. Open-source 3D-printable optics equipment.

    Chenlong Zhang

    Full Text Available Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as the control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  2. Fiber optic coherent laser radar 3d vision system

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  3. Fiber optic coherent laser radar 3d vision system

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L. [and others

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  4. Recovering 3D human pose from monocular images

    Agarwal, Ankur; Triggs, Bill

    2006-01-01

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We eva...

  5. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of all of the shapes expected from each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
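
    A hedged toy sketch of the three-way split the abstract describes (positive, negative, and boundary regions from several expert masks, with the refined ROI as their intersection) might look like the following; the sphere masks are invented stand-ins for the different types of expert knowledge.

```python
import numpy as np

def rough_roi(masks):
    """Rough-set style split of a region of interest given several binary
    masks, one per type of expert knowledge:
      positive region = voxels inside every mask   (lower approximation),
      negative region = voxels inside no mask,
      boundary region = everything else (uncertain voxels)."""
    masks = np.asarray(masks, dtype=bool)
    positive = masks.all(axis=0)          # refined ROI: intersection of all shapes
    negative = ~masks.any(axis=0)
    boundary = ~(positive | negative)
    return positive, negative, boundary

# Toy usage: two overlapping spheres standing in for two kinds of knowledge.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
sphere = lambda c, r: (z - c[0]) ** 2 + (y - c[1]) ** 2 + (x - c[2]) ** 2 <= r ** 2
pos, neg, bnd = rough_roi([sphere((16, 16, 14), 8), sphere((16, 16, 18), 8)])
print(pos.sum(), bnd.sum())   # intersection voxels, uncertain voxels
```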

  6. 3D Image Display Courses for Information Media Students.

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  7. A near field 3D radar imaging technique

    Broquetas Ibars, Antoni

    1993-01-01

    The paper presents an algorithm which recovers a 3D reflectivity image of a target from near-field scattering measurements. Spherical wave nearfield illumination is used, in order to avoid a costly compact range installation to produce a plane wave illumination. The system is described and some simulated 3D reconstructions are included. The paper also presents a first experimental validation of this technique. Peer Reviewed

  8. 3-D Imaging Systems for Agricultural Applications—A Review

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their application in agriculture are reviewed. The main focus lays on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  10. 3-D Imaging Systems for Agricultural Applications—A Review

    Manuel Vázquez-Arellano

    2016-04-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  11. Investigation of the feasibility for 3D synthetic aperture imaging

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    This paper investigates the feasibility of implementing real-time synthetic aperture 3D imaging on the experimental system developed at the Center for Fast Ultrasound Imaging using a 2D transducer array. The target array is a fully populated 32 × 32 3 MHz array with a half wavelength pitch. The...

  12. Hybrid segmentation framework for 3D medical image analysis

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. Then we create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can then be used to update the parameters of the Gibbs prior models. These methods work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  13. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetimes in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  14. 3D Tongue Motion from Tagged and Cine MR Images

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z.; Lee, Junghoon; Stone, Maureen; Prince, Jerry L.

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information...

  15. New methods for optical distance indicator and gantry angle quality control tests in medical linear accelerators: image processing by using a 3D phantom

    Shandiz, Mahdi Heravian; Khalilzadeh, Mohammadmahdi; Anvari, Kazem [Mashhad Branch, Islamic Azad University, Mashhad (Iran, Islamic Republic of); Layen, Ghorban Safaeian [Mashhad University of Medical Science, Mashhad (Iran, Islamic Republic of)

    2015-03-15

    In order to maintain an acceptable performance level of radiation oncology linear accelerators, it is necessary to apply a reliable quality assurance (QA) program. The QA protocols published by authoritative organizations, such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests which should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of the selected QC tests in order to improve the quality of treatment, and also to speed up the tests so as to encourage busy centers to run a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test, and the gantry angle QC test as a monthly test. This method uses an image processing approach, utilizing snapshots taken by a CCD camera, to measure the source-to-surface distance (SSD) and the gantry angle. The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43 degrees. The proposed automated method, used for both the ODI and gantry angle QC tests, yields highly accurate and precise results which are objective, so human-caused errors have no effect on the results. The results show that both QC tests are within the acceptable range according to AAPM Task Group 142.

  17. Imaging system for creating 3D block-face cryo-images of whole mice

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier for them to interpret image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy, and validation of other imaging modalities.

  18. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, and architectural and archeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera poses (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching over large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore multi-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, a more general and useful approach, namely bundle adjustment, is used. Finally, two real cases (an excavation and a tower) are reconstructed.
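
    The pipeline above (feature matching, relative pose estimation, triangulation) can be illustrated for a single image pair with OpenCV. This is only a hedged two-view sketch; the file names and the intrinsic matrix K are placeholders, and the paper's projective-to-metric upgrade and bundle adjustment steps are not shown.

        # Two-view structure-from-motion sketch (OpenCV >= 4.4 with SIFT available).
        import cv2
        import numpy as np

        img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder images
        img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
        K = np.array([[1500., 0, 960], [0, 1500., 540], [0, 0, 1]])  # assumed intrinsics

        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        # Lowe ratio test keeps only distinctive matches
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])

        # Essential matrix with RANSAC, then relative camera pose (up to scale)
        E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)

        # Triangulate correspondences into a sparse 3D point cloud
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        points_3d = (X[:3] / X[3]).T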

  19. Characterization of a parallel-beam CCD optical-CT apparatus for 3D radiation dosimetry

    Krstajic, Nikola; Doran, Simon J.

    2007-07-01

    3D measurement of optical attenuation is of interest in a variety of fields of biomedical importance, including spectrophotometry, optical projection tomography (OPT) and analysis of 3D radiation dosimeters. Accurate, precise and economical 3D measurements of optical density (OD) are a crucial step in enabling 3D radiation dosimeters to enter wider use in clinics. Polymer gels and Fricke gels, as well as dosimeters not based around gels, have been characterized for 3D dosimetry over the last two decades. A separate problem is the verification of the best readout method. A number of different imaging modalities (magnetic resonance imaging (MRI), optical CT, x-ray CT and ultrasound) have been suggested for the readout of information from 3D dosimeters. To date only MRI and laser-based optical CT have been characterized in detail. This paper describes some initial steps we have taken in establishing charge coupled device (CCD)-based optical CT as a viable alternative to MRI for readout of 3D radiation dosimeters. The main advantage of CCD-based optical CT over traditional laser-based optical CT is a speed increase of at least an order of magnitude, while the simplicity of its architecture would lend itself to cheaper implementation than both MRI and laser-based optical CT if the camera itself were inexpensive enough. Specifically, we study the following aspects of optical metrology, using high quality test targets: (i) calibration and quality of absorbance measurements and the camera requirements for 3D dosimetry; (ii) the modulation transfer function (MTF) of individual projections; (iii) signal-to-noise ratio (SNR) in the projection and reconstruction domains; (iv) distortion in the projection domain, depth-of-field (DOF) and telecentricity. The principal results for our current apparatus are as follows: (i) SNR of optical absorbance in projections is better than 120:1 for uniform phantoms in absorbance range 0.3 to 1.6 (and better than 200:1 for absorbances 1.0 to
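
    As a rough illustration of the readout chain (not the authors' software), each CCD projection can be converted to optical absorbance against a flat-field reference and the dosimeter volume reconstructed slice by slice with filtered back-projection; the file names below are placeholders.

        # Absorbance computation and filtered back-projection for one slice (sketch).
        import numpy as np
        from skimage.transform import iradon

        projections = np.load("projections.npy")   # (n_angles, n_pixels) raw intensities
        flat = np.load("flat_field.npy")            # reference projection without the sample
        angles = np.linspace(0., 180., projections.shape[0], endpoint=False)

        # Base-10 optical density per ray; clip to avoid log of zero
        absorbance = -np.log10(np.clip(projections / flat, 1e-6, None))

        # skimage expects the sinogram as (n_pixels, n_angles)
        reconstruction = iradon(absorbance.T, theta=angles, circle=True)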

  20. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges with important applications in many research fields, ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communication. Although various approaches for imaging through a turbid layer have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging to reconstruct the complex amplitude of objects situated at different depths.

  1. 3D OPTICAL AND IR SPECTROSCOPY OF EXCEPTIONAL HII GALAXIES

    E. Telles

    2009-01-01

    In this contribution I will very briefly summarize some recent results obtained applying 3D spectroscopy to observations of the well known HII galaxy II Zw 40, both in the optical and near-IR. I have studied the distribution of the dust in the starburst region, the velocity and velocity dispersion, and the geometry of the molecular hydrogen and ionized gas. I found a clear correlation between the components of the ISM and the velocity field, suggesting that the latter has a fundamental role in defining the modes of the star formation process.

  2. Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution 'cranial nerve imaging', which visualizes the whole length of the cranial nerves, including the parts in the blood flow as in the cavernous sinus region. (author)

  3. DART : a 3D model for remote sensing images and radiative budget of earth surfaces

    Gastellu-Etchegorry, J.P.; Grau, E.; Lauret, N.

    2012-01-01

    Modeling the radiative behavior and the energy budget of land surfaces is relevant for many scientific domains, such as the study of vegetation functioning with remotely acquired information. The DART model (Discrete Anisotropic Radiative Transfer) has been developed since 1992. It is one of the most complete 3D models in this domain. It simulates radiative transfer (R.T.) in the optical domain: 3D radiative budget and remote sensing images (i.e., radiance, reflectance, brightness temperature) of vegeta...

  4. 3D interfractional patient position verification using 2D-3D registration of orthogonal images

    Reproducible positioning of the patient during fractionated external beam radiation therapy is imperative to ensure that the delivered dose distribution matches the planned one. In this paper, we expand on a 2D-3D image registration method to verify a patient's setup in three dimensions (rotations and translations) using orthogonal portal images and megavoltage digitally reconstructed radiographs (MDRRs) derived from CT data. The accuracy of 2D-3D registration was improved by employing additional image preprocessing steps and a parabolic fit to interpolate the parameter space of the cost function utilized for registration. Using a humanoid phantom, precision for registration of three-dimensional translations was found to be better than 0.5 mm (1 s.d.) for any axis when no rotations were present. Three-dimensional rotations about any axis were registered with a precision of better than 0.2 deg. (1 s.d.) when no translations were present. Combined rotations and translations of up to 4 deg. and 15 mm were registered with 0.4 deg. and 0.7 mm accuracy for each axis. The influence of setup translations on registration of rotations and vice versa was also investigated and mostly agrees with a simple geometric model. Additionally, the dependence of registration accuracy on three cost functions, angular spacing between MDRRs, pixel size, and field-of-view, was examined. Best results were achieved by mutual information using 0.5 deg. angular spacing and a 10x10 cm2 field-of-view with 140x140 pixels. Approximating patient motion as rigid transformation, the registration method is applied to two treatment plans and the patients' setup errors are determined. Their magnitude was found to be ≤6.1 mm and ≤2.7 deg. for any axis in all of the six fractions measured for each treatment plan
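
    Since mutual information gave the best results as the registration cost, a small histogram-based sketch of that similarity measure may be helpful; this is a generic formulation, not the authors' implementation, and the bin count is an arbitrary choice.

        # Histogram-based mutual information between a portal image and an MDRR (sketch).
        import numpy as np

        def mutual_information(a, b, bins=64):
            """Mutual information between two equally sized images a and b."""
            hist2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist2d / hist2d.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                         # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        # A registration loop would maximize this value over the six rigid parameters.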

  5. Automated curved planar reformation of 3D spine images

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
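
    The core reformation step (resampling the volume along a curve describing the vertebral column) can be sketched with scipy's map_coordinates; the polynomial coefficients below are hypothetical stand-ins for the optimized curve parameters described in the paper.

        # Curved planar reformation sketch: sample the volume along a polynomial curve.
        import numpy as np
        from scipy import ndimage

        volume = np.load("ct_volume.npy")            # placeholder (Z, Y, X) CT volume
        z = np.arange(volume.shape[0], dtype=float)

        # Hypothetical polynomial models of the spine curve position per axial slice
        y_curve = np.polyval([1e-4, -0.05, 200.0], z)
        x_curve = np.polyval([0.0, 0.02, 250.0], z)

        # For each z, sample a row of voxels centred on the curve along x
        offsets = np.arange(-40, 41, dtype=float)
        zz, oo = np.meshgrid(z, offsets, indexing="ij")
        yy = np.tile(y_curve[:, None], (1, offsets.size))
        coords = np.stack([zz, yy, x_curve[:, None] + oo])
        cpr_image = ndimage.map_coordinates(volume, coords, order=1)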

  6. DICOM for quantitative imaging research in 3D Slicer

    Fedorov, Andrey; Kikinis, Ron

    2014-01-01

    These are the slides presented by Andrey Fedorov at the 3D Slicer workshop and meeting of the Quantitative Image Informatics for Cancer Research (QIICR) project that took place November 18-19, 2014, at the University of Iowa.

  7. Practical pseudo-3D registration for large tomographic images

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
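
    One of the per-view 2D rigid registrations (SSD cost, Powell search) might look like the following sketch; the interpolation order and starting point are assumptions rather than the authors' exact settings.

        # 2D rigid registration of one orthogonal view: SSD cost + Powell's method.
        import numpy as np
        from scipy import ndimage, optimize

        def ssd_cost(params, fixed, moving):
            dx, dy, angle = params
            rotated = ndimage.rotate(moving, angle, reshape=False, order=1)
            moved = ndimage.shift(rotated, (dy, dx), order=1)
            return float(np.sum((fixed - moved) ** 2))   # Sum of Squared Differences

        def register_view(fixed, moving):
            res = optimize.minimize(ssd_cost, x0=np.zeros(3), args=(fixed, moving),
                                    method="Powell")
            return res.x   # (dx, dy, rotation in degrees) for this view

        # In the pseudo-3D scheme this step is applied to the transaxial, sagittal and
        # coronal views in turn, updating the 3D transform after each view.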

  8. 3D wavefront image formation for NIITEK GPR

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  9. Design and Characterization of a Current Assisted Photo Mixing Demodulator for Tof Based 3d Cmos Image Sensor

    Hossain, Quazi Delwar

    2010-01-01

    Due to the increasing demand for 3D vision systems, many efforts have recently been concentrated on achieving complete 3D information analogous to human vision. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian security, biomedical appliances, robotics and industrial control. For this, several studies have reported producing 3D images by approaches including stereovision, object distance...

  10. Extracting 3D Layout From a Single Image Using Global Image Structures

    Z. Lou; T. Gevers; N. Hu

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  11. Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection

    This work proposes a novel approach to segmenting randomly stacked objects in unstructured 3D point clouds, which are acquired by a random-speckle 3D imaging system for the purpose of automated object detection and reconstruction. An innovative algorithm is proposed; it is based on a novel concept of 3D watershed segmentation and the strategies for resolving over-segmentation and under-segmentation problems. Acquired 3D point clouds are first transformed into a corresponding orthogonally projected depth map along the optical imaging axis of the 3D sensor. A 3D watershed algorithm based on the process of distance transformation is then performed to detect the boundary, called the edge dam, between stacked objects and thereby to segment point clouds individually belonging to two stacked objects. Most importantly, an object-matching algorithm is developed to solve the over- and under-segmentation problems that may arise during the watershed segmentation. The feasibility and effectiveness of the method are confirmed experimentally. The results reveal that the proposed method is a fast and effective scheme for the detection and reconstruction of a 3D object in a random stack of such objects. In the experiments, the precision of the segmentation exceeds 95% and the recall exceeds 80%. (paper)
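
    The watershed-on-depth-map idea can be illustrated with standard tools: a distance transform of the projected depth map gives peaks at object centres and valleys at the "edge dams", which seed the watershed. The snippet below is a generic sketch (window sizes and thresholds are arbitrary), not the proposed algorithm with its over-/under-segmentation handling.

        # Watershed segmentation of a projected depth map (illustrative parameters).
        import numpy as np
        from scipy import ndimage
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        depth = np.load("depth_map.npy")             # placeholder orthographic depth map
        foreground = depth > 0                        # valid measured points

        distance = ndimage.distance_transform_edt(foreground)
        peaks = peak_local_max(distance, labels=foreground.astype(int),
                               footprint=np.ones((15, 15)))
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

        labels = watershed(-distance, markers, mask=foreground)  # one label per object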

  12. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  13. Efficient reconfigurable architectures for 3D medical image compression

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) have generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  14. 3D refractive index measurements of special optical fibers

    Yan, Cheng; Huang, Su-Juan; Miao, Zhuang; Chang, Zheng; Zeng, Jun-Zhang; Wang, Ting-Yun

    2016-09-01

    A digital holographic microscopy-based tomographic approach with considerably improved accuracy, simplified configuration and performance stability is proposed to measure the three-dimensional refractive index of special optical fibers. Based on this approach, a measurement system is established incorporating a modified Mach-Zehnder interferometer and lab-developed supporting software for data processing. In the system, the phase projection distribution of an optical fiber is utilized to obtain an optimal digital hologram recorded by a CCD, and then an angular spectrum theory-based algorithm is adopted to extract the phase distribution of the object wave. Rotation of the optical fiber enables experimental measurement of multi-angle phase information. Based on the filtered back projection algorithm, the 3D refractive index of the optical fiber is thus obtained with high accuracy. To evaluate the proposed approach, both PANDA fibers and a special elliptical optical fiber are considered in the system. The results measured in PANDA fibers agree well with those measured using the S14 Refractive Index Profiler, which is, however, not suitable for measuring the properties of a special elliptical fiber.

  15. An automated 3D reconstruction method of UAV images

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle system (UAV) images is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially suitable for rapid response and precise modelling in disaster emergencies.

  16. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    P. Faltin

    2010-01-01

    The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.

  17. 1024 pixels single photon imaging array for 3D ranging

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect Time of Flight (iTOF), starting from the phase delay measurement of a sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances between 10 cm and 7.5 m.
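
    For reference, the indirect time-of-flight relation between the measured phase delay and distance is d = c·Δφ/(4π·f_mod); with the common four-bucket sampling the phase is Δφ = atan2(A3 − A1, A0 − A2). The snippet below is a generic illustration with an assumed modulation frequency, not the parameters of the sensor described above.

        # Indirect time-of-flight: distance from the phase delay of modulated light.
        import numpy as np

        C = 299_792_458.0        # speed of light, m/s
        F_MOD = 20e6             # assumed modulation frequency, Hz

        def itof_distance(a0, a1, a2, a3):
            """a0..a3: correlation samples taken 90 degrees apart."""
            phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
            return C * phase / (4 * np.pi * F_MOD)   # unambiguous up to c / (2 * F_MOD)

        print(itof_distance(1.0, 0.2, 0.1, 0.9))     # toy values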

  18. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a complete framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
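
    The cubic convolution kernel with a sharpness control parameter a (a = -0.5 is the common Catmull-Rom/Keys choice) is sketched below in one dimension; applied separably along x, y and z it yields the 3D interpolation discussed above. The six specific parameter settings compared in the paper are not reproduced here.

        # One-dimensional cubic convolution kernel with sharpness parameter a (sketch).
        import numpy as np

        def cubic_kernel(x, a=-0.5):
            x = np.abs(np.asarray(x, dtype=float))
            w = np.zeros_like(x)
            near = x <= 1
            far = (x > 1) & (x < 2)
            w[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
            w[far] = a * x[far]**3 - 5 * a * x[far]**2 + 8 * a * x[far] - 4 * a
            return w

        def interp_cubic(samples, t, a=-0.5):
            """Interpolate uniformly spaced 1D samples at fractional position t."""
            samples = np.asarray(samples, dtype=float)
            i = int(np.floor(t))
            idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)
            return float(np.sum(samples[idx] * cubic_kernel(t - np.arange(i - 1, i + 3), a)))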

  19. Helical CT scanner - 3D imaging and CT fluoroscopy

    It has been over twenty years since the introduction of X-ray CT. In recent years, the topic of helical scanning has dominated the area of technical development. With helical scanning now being used routinely, the traditional concept of X-ray CT as a device for obtaining axial images of the body in slices has given way to that of a device for obtaining images in volumes. For instance, the ability of helical scanning to acquire sequential images along the body axis makes it ideal for creating three-dimensional (3-D) images, and has in fact led to the use of 3-D images in clinical practice. In addition, with helical scanning, imaging of organs such as the liver or lung can be performed in several tens of seconds, as opposed to the few minutes it used to take. This has resulted not only in less time spent under constraint by the patient during imaging but also in changes to diagnostic methods. The question of whether it would be possible to perform reconstruction while scanning and to see the resulting images in real time has been answered by CT fluoroscopy, which makes it possible to see CT images in real time during sequential scanning. From this development, applications such as CT-guided biopsy and CT-navigated surgery have been investigated and realized, opening the way to a whole new series of diagnostic methods and results. (author)

  20. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  1. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ~0.2 m (along the beam) and ~1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  2. Dynamic complex optical fields for optical manipulation, 3D microscopy, and photostimulation of neurotransmitters

    Daria, Vincent R.; Stricker, Christian; Bekkers, John; Redman, Steve; Bachor, Hans

    2010-08-01

    We demonstrate a multi-functional system capable of multiple-site two-photon excitation of photo-sensitive compounds as well as transfer of optical mechanical properties on an array of mesoscopic particles. We use holographic projection of a single Ti:Sapphire laser operating in femtosecond pulse mode to show that the projected three-dimensional light patterns have sufficient spatiotemporal photon density for multi-site two-photon excitation of biological fluorescent markers and caged neurotransmitters. Using the same laser operating in continuous-wave mode, we can use the same light patterns for non-invasive transfer of both linear and orbital angular momentum on a variety of mesoscopic particles. The system also incorporates high-speed scanning using acousto-optic modulators to rapidly render 3D images of neuron samples via two-photon microscopy.

  3. 3D CT Imaging Method for Measuring Temporal Bone Aeration

    Objective: 3D volume reconstruction of CT images can be used to measure temporal bone aeration. This study evaluates the technique with respect to reproducibility and acquisition parameters. Material and methods: Helical CT images acquired from patients with radiographically normal temporal bones using standard clinical protocols were retrospectively analyzed. 3D image reconstruction was performed to measure the volume of air within the temporal bone. The appropriate threshold values for air were determined from reconstruction of a phantom with a known air volume imaged using the same clinical protocols. The appropriate air threshold values were applied to the clinical material. Results: Air volume was measured according to an acquisition algorithm. The average volume in the temporal bone CT group was 5.56 ml, compared to 5.19 ml in the head CT group (p = 0.59). The correlation coefficient between examiners was > 0.92. There was a wide range of aeration volumes among individual ears (0.76-18.84 ml); however, paired temporal bones differed by an average of just 1.11 ml. Conclusions: The method of volume measurement from 3D reconstruction reported here is widely available, easy to perform and produces consistent results among examiners. Application of the technique to archival CT data is possible using corrections for air segmentation thresholds according to acquisition parameters
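
    The volume measurement itself reduces to counting voxels classified as air inside the temporal-bone region and multiplying by the voxel volume; a minimal sketch follows. The -400 HU threshold is only a placeholder, since the study derives the appropriate air threshold from the phantom for each acquisition protocol.

        # Air volume from a CT volume in Hounsfield units (sketch, assumed threshold).
        import numpy as np

        def aeration_volume_ml(ct_hu, bone_mask, voxel_mm3, air_threshold_hu=-400):
            air_voxels = np.count_nonzero((ct_hu < air_threshold_hu) & bone_mask)
            return air_voxels * voxel_mm3 / 1000.0   # mm^3 -> millilitres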

  4. Spectroscopy and 3D imaging of the Crab nebula

    Cadez, A; Vidrih, S

    2004-01-01

    Spectroscopy of the Crab nebula along different slit directions reveals the 3 dimensional structure of the optical nebula. On the basis of the linear radial expansion result first discovered by Trimble (1968), we make a 3D model of the optical emission. Results from a limited number of slit directions suggest that optical lines originate from a complicated array of wisps that are located in a rather thin shell, pierced by a jet. The jet is certainly not prominent in optical emission lines, but the direction of the piercing is consistent with the direction of the X-ray and radio jet. The shell's effective radius is ~ 79 seconds of arc, its thickness about a third of the radius and it is moving out with an average velocity 1160 km/s.

  5. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described, generating a set of three different image modalities representing the same anatomy. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  6. Parsing optical scanned 3D data by Bayesian inference

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2015-10-01

    Optical devices are commonly used to digitize complex objects and obtain their shapes in the form of point clouds. The results carry no semantic meaning about the objects, and a tedious process is needed to segment the scanned data into meaningful parts. A person perceives an object correctly by using knowledge, so Bayesian inference is applied toward this goal. A probabilistic And-Or graph is used as a unified framework of representation, learning, and recognition for a large number of object categories, and a probabilistic model defined on this And-Or graph is learned from a relatively small training set per category. Given a set of 3D scanned data, Bayesian inference constructs the most probable interpretation of the object, and a semantic segmentation is obtained from the part decomposition. Some examples are given to explain the method.

  7. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which allows representation of just the surface of 3D objects. By using afterimages, internal structure can be displayed by exchanging intersection images containing the internal structure. Through experiments with CT scan images, the proposed met...

  8. 3D Imaging of a Cavity Vacuum under Dissipation

    Lee, Moonjoo; Seo, Wontaek; Hong, Hyun-Gue; Song, Younghoon; Dasari, Ramachandra R; An, Kyungwon

    2013-01-01

    P. A. M. Dirac first introduced zero-point electromagnetic fields in order to explain the origin of atomic spontaneous emission. Since then, it has long been debated how the zero-point vacuum field is affected by dissipation. Here we report 3D imaging of vacuum fluctuations in a high-Q cavity and rms amplitude measurements of the vacuum field. The 3D imaging was done by the position-dependent emission of single atoms, resulting in dissipation-free rms amplitude of 0.97 ± 0.03 V/cm. The actual rms amplitude of the vacuum field at the antinode was independently determined from the onset of single-atom lasing at 0.86 ± 0.08 V/cm. Within our experimental accuracy and precision, the difference was noticeable, but it is not significant enough to disprove zero-point energy conservation.

  9. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
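
    A common way to implement the kind of auto-focusing step described above is to score each plane of a z-stack with a sharpness measure and keep the best one; the variance-of-Laplacian sketch below is a generic example, not the Cell-IQ implementation.

        # Pick the sharpest plane of a phase-contrast z-stack (generic focus measure).
        import numpy as np
        from scipy import ndimage

        def best_focus_index(z_stack):
            """z_stack: array of shape (n_planes, H, W)."""
            scores = [ndimage.laplace(plane.astype(float)).var() for plane in z_stack]
            return int(np.argmax(scores))            # index of the sharpest plane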

  10. Automated Recognition of 3D Features in GPIR Images

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
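
    The object-linking step can be sketched as chaining feature centres across successive slices whenever the nearest feature in the adjacent slice falls within a threshold radius; the greedy routine below is a simplified illustration of that idea, not the multi-axis data-fusion algorithm itself.

        # Link 2D feature detections across slices into candidate 3D objects (sketch).
        import numpy as np

        def link_features(slices, radius=5.0):
            """slices: list of (N_i, 2) arrays of feature centres, one per slice."""
            chains = [[(0, i)] for i in range(len(slices[0]))]
            for z in range(1, len(slices)):
                for chain in chains:
                    last_z, last_i = chain[-1]
                    if last_z != z - 1:
                        continue                     # chain already terminated
                    d = np.linalg.norm(slices[z] - slices[last_z][last_i], axis=1)
                    if d.size and d.min() <= radius:
                        chain.append((z, int(d.argmin())))
            return chains                            # each chain approximates one 3D object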

  11. 3D imaging of semiconductor components by discrete laminography

    Batenburg, Joost; Palenstijn, W.J.; Sijbers, J.

    2014-01-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the ...

  12. Improvements in quality and quantification of 3D PET images

    Rapisarda,

    2012-01-01

    The spatial resolution of Positron Emission Tomography is conditioned by several physical factors, which can be taken into account by using a global Point Spread Function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting some experimental...

  13. Super pipe lining system for 3-D CT imaging

    A new idea for a 3-D CT image reconstruction system is introduced. Because networks have improved substantially in recent years, network computing can replace traditional serial processing. The CT system's work is carried out in a multi-level fashion, so that the tedious computations are processed simultaneously by many computers linked over a local network, which greatly improves the reconstruction speed.

  14. 3D VSP imaging in the Deepwater GOM

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges, BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high-dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are to reduce risk in well placement, improve reserve calculation, and better understand compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  15. Discrete Method of Images for 3D Radio Propagation Modeling

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  16. 3D reconstruction of multiple stained histology images

    Yi Song

    2013-01-01

    Full Text Available Context: Three dimensional (3D) tissue reconstructions from histology images with different stains allow the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK 7). Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  17. 3D tongue motion from tagged and cine MR images.

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  18. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side by side and optical-path-switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offers good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is thus expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  19. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Liang-Chia Chen

    2012-12-01

    Full Text Available This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object’s surface characteristics in robot pick-and-place manipulation. In the method, the 3-D geometry of industrial parts can be rapidly acquired by the developed one-shot imaging optical probe based on Fourier Transform Profilometry (FTP) by using digital-fringe projection at the camera’s maximum sensing speed. Following this, the acquired range image can be effectively segmented into three surface types by classifying point clouds based on the statistical distribution of the surface normal vector of each detected 3-D point, and then the scene ground is reconstructed by applying least squares fitting and classification algorithms. Also, a recursive search process incorporating the region-growing algorithm for registering homogeneous surface regions has been developed. When the detected parts are randomly overlapped on a workbench, a group of defined 3-D surface features, such as surface areas, statistical values of the surface normal distribution and geometric distances of defined features, can be uniquely recognized for detection of the part’s orientation. Experimental testing was performed to validate the feasibility of the developed method for real robotic manipulation.
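
    As a rough sketch of the surface-classification idea (per-point normal estimation followed by region growing over points with similar normals), the following Python code operates on an unordered 3-D point cloud; the neighbourhood size and angle threshold are hypothetical, and the published method additionally uses FTP range images, ground-plane fitting and statistical surface features.

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=10):
            """Per-point surface normals from a PCA plane fit over k nearest neighbours."""
            tree = cKDTree(points)
            normals = np.zeros_like(points)
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)
                nbrs = points[idx] - points[idx].mean(axis=0)
                # the smallest singular vector of the neighbourhood is the local plane normal
                _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
                normals[i] = vt[-1]
            return normals

        def grow_regions(points, normals, k=10, angle_deg=15.0):
            """Group points into regions whose normals differ by less than angle_deg."""
            tree = cKDTree(points)
            cos_thr = np.cos(np.deg2rad(angle_deg))
            labels = -np.ones(len(points), dtype=int)
            region = 0
            for seed in range(len(points)):
                if labels[seed] >= 0:
                    continue
                stack = [seed]
                labels[seed] = region
                while stack:
                    i = stack.pop()
                    _, idx = tree.query(points[i], k=k)
                    for j in idx:
                        if labels[j] < 0 and abs(np.dot(normals[i], normals[j])) > cos_thr:
                            labels[j] = region
                            stack.append(j)
                region += 1
            return labels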

  20. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  1. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
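
    The FFT-based evaluation of the phase similarity measure can be sketched as follows for a single model orientation: the similarity at every model position is the sum over model-edge pixels of the cosine of the difference between image and model phase angles, which is the real part of a circular cross-correlation of complex exponentials. The array shapes and the normalisation below are assumptions; the full method repeats this over all model orientations and keeps the maximum.

        import numpy as np

        def phase_match_surface(image_phase, model_phase, model_mask):
            """Phase-similarity surface over all translations (one model orientation).

            image_phase : (H, W) phase angles of image directional-derivative vectors
            model_phase : (h, w) phase angles of the projected model edge normals
            model_mask  : (h, w) 1 on projected model edges, 0 elsewhere
            Returns an (H, W) map of mean cos-phase similarity vs. model position
            (circular shifts are assumed).
            """
            H, W = image_phase.shape
            img = np.exp(1j * image_phase)
            tpl = np.zeros((H, W), dtype=complex)
            tpl[:model_phase.shape[0], :model_phase.shape[1]] = model_mask * np.exp(1j * model_phase)
            # circular cross-correlation via the FFT:
            # corr(t) = sum_x mask(x) * cos(phi_img(x + t) - phi_model(x))
            corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tpl)))
            return corr.real / max(model_mask.sum(), 1)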

  2. Autonomous Planetary 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    A common task for many deep space missions is autonomous generation of 3-D representations of planetary surfaces onboard unmanned spacecraft. The basic problem for this class of missions is that the closed loop time is far too long. The closed loop time is defined as the time from when a human...... of seconds to a few minutes, the closed loop time effectively precludes active human control. The only way to circumvent this problem is to build an artificial feature extractor operating autonomously onboard the spacecraft. Different artificial feature extractors are presented and their efficiency...... is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  3. Fully automatic plaque segmentation in 3-D carotid ultrasound images.

    Cheng, Jieyu; Li, He; Xiao, Feng; Fenster, Aaron; Zhang, Xuming; He, Xiaoling; Li, Ling; Ding, Mingyue

    2013-12-01

    Automatic segmentation of the carotid plaques from ultrasound images has been shown to be an important task for monitoring progression and regression of carotid atherosclerosis. Considering the complex structure and heterogeneity of plaques, a fully automatic segmentation method based on media-adventitia and lumen-intima boundary priors is proposed. This method combines image intensity with structure information in both initialization and a level-set evolution process. Algorithm accuracy was examined on the common carotid artery part of 26 3-D carotid ultrasound images (34 plaques ranging in volume from 2.5 to 456 mm³) by comparing the results of our algorithm with manual segmentations of two experts. Evaluation results indicated that the algorithm yielded total plaque volume (TPV) differences of -5.3 ± 12.7 and -8.5 ± 13.8 mm³ and absolute TPV differences of 9.9 ± 9.5 and 11.8 ± 11.1 mm³. Moreover, high correlation coefficients in generating TPV (0.993 and 0.992) between algorithm results and both sets of manual results were obtained. The automatic method provides a reliable way to segment carotid plaque in 3-D ultrasound images and can be used in clinical practice to estimate plaque measurements for management of carotid atherosclerosis. PMID:24063959

  4. 3D-imaging using micro-PIXE

    Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

    2007-02-01

    We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and an X-ray CCD camera of 1 mega pixels (Hamamatsu Photonics C8800X), and has a spatial resolution of 4 μm by using characteristic Ti-K X-rays (4.558 keV) produced by 3 MeV protons with a beam spot size of ~1 μm. We applied this system, namely a micron-CT, to observe the inside of a living small ant's head of ~1 mm diameter. An ant was inserted into a small polyimide tube, the inside diameter and the wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three-dimensional images of the ant's head were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence on atomic number of photo-ionization cross-sections, the mandibular gland of the ant contains heavier elements, and moreover, the CT-image of a living ant anaesthetized by chloroform is quite different from that of a dead ant dipped in formalin.

  5. Lymph node imaging by ultrarapid 3D angiography

    Purpose: A report on observations of lymph node images obtained by gadolinium-enhanced 3D MR angiography (MRA). Methods: Ultrarapid MRA (TR, TE, FA - 5 or 6.4 ms, 1.9 or 2.8 ms, 30-40 degrees) with 0.2 mmol/kg BW Gd-DTPA and 20 ml physiological saline. Start after completion of injection. Single series of the pelvis-thigh as well as head-neck regions by use of a phased array coil with a 1.5 T Magnetom Vision or a 1.0 T Magnetom Harmony (Siemens, Erlangen). We report on lymph node imaging in 4 patients, 2 of whom exhibited benign changes and 2 further metastases. In 1 patient with extensive lymph node metastases of a malignant melanoma, color-Doppler sonography as color-flow angiography (CFA) was used as a comparative method. Results: Lymph node imaging by contrast medium-enhanced ultrarapid 3D MRA apparently resulted from their vessels. Thus, arterially-supplied metastases and inflammatory enlarged lymph nodes were well visualized while those with a.v. shunts or poor vascular supply in tumor necroses were poorly imaged. Conclusions: Further investigations are required with regard to the visualization of lymph nodes in other parts of the body as well as a possible differentiation between benign and malignant lesions. (orig.)

  6. Ice shelf melt rates and 3D imaging

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate-bandwidth, VHF, ice-penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra wide bandwidth (UWB), UHF, ice-penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multi-channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider-swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  7. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
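
    The mean-subtraction idea can be sketched in a few lines of Python: for a spatially-low-pass subband stored as one spatial plane per spectral band, the per-plane means are removed before encoding and transmitted as side information so the decoder can restore them. The axis layout used here is an assumption.

        import numpy as np

        def mean_subtract(subband):
            """Remove per-plane means from a spatially-low-pass subband.

            subband : (n_bands, ny, nx) array, one spatial plane per spectral band.
            Returns the zero-mean subband and the per-plane means (kept as side
            information so the decoder can add them back after decompression).
            """
            means = subband.mean(axis=(1, 2), keepdims=True)
            return subband - means, means.ravel()

        def mean_restore(zero_mean_subband, means):
            """Inverse step performed by the decoder."""
            return zero_mean_subband + means.reshape(-1, 1, 1)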

  8. Geometric Aspects in 3D Biomedical Image Processing

    Thévenaz, P; Unser, M.

    1998-01-01

    We present some issues that arise when a geometric transformation is performed on an image or a volume. In particular, we illustrate the well-known problems of blocking, blurring, aliasing and ringing. Although the solution to these problems is trivial in an analog (optical) image processing system, their solution in a discrete (numeric) context is much more difficult. The modern trend of biomedical image processing is to fight these artifacts by using more sophisticated models that emphasize...

  9. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave imaging algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges arise when using data from multiple frequencies for imaging of biological targets. In this paper, the performance of a multi-frequency algorithm, in which measurement data from several different frequencies are used at once, is compared with a stepped-frequency algorithm, in which images reconstructed at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.

  10. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  11. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system for segmenting blood vessels and obtaining feature points for correspondences. The correspondence problem is solved using correlation. The LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
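
    The final linear-triangulation step can be sketched as a standard DLT solve, assuming the two camera projection matrices have already been estimated (for example with the normalized eight-point algorithm); this is a generic sketch, not the RISA pipeline itself.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point from two views.

            P1, P2 : (3, 4) camera projection matrices
            x1, x2 : (2,) corresponding image points in each view
            Returns the 3D point in inhomogeneous coordinates.
            """
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            # the solution is the right singular vector with the smallest singular value
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]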

  12. Effective classification of 3D image data using partitioning methods

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
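
    A minimal sketch of the recursive dynamic-partitioning idea is given below: a box is kept as an attribute when the ROI voxel counts inside it discriminate the two classes, and is otherwise split along its longest axis while it remains large enough. The two-sample t-test and the splitting rule are simplifications of the statistical tests used in the paper, and the minimum box size and significance level are placeholders.

        import numpy as np
        from scipy import stats

        def recursive_partition(volumes, labels, box=None, min_size=4, alpha=0.01, attrs=None):
            """Recursively partition a 3D volume into discriminative hyper-rectangles.

            volumes : (n_subjects, X, Y, Z) binary ROI masks
            labels  : (n_subjects,) class labels (0/1)
            Returns a list of boxes (tuples of slices) whose ROI voxel counts separate the classes.
            """
            if attrs is None:
                attrs = []
            if box is None:
                box = tuple(slice(0, s) for s in volumes.shape[1:])
            counts = volumes[(slice(None),) + box].reshape(len(volumes), -1).sum(axis=1)
            _, p = stats.ttest_ind(counts[labels == 0], counts[labels == 1])
            if np.isfinite(p) and p < alpha:
                attrs.append(box)                      # discriminative: keep as an attribute
                return attrs
            # otherwise split the longest axis in half, if the box is still large enough
            sizes = [s.stop - s.start for s in box]
            axis = int(np.argmax(sizes))
            if sizes[axis] < 2 * min_size:
                return attrs
            mid = (box[axis].start + box[axis].stop) // 2
            for half in (slice(box[axis].start, mid), slice(mid, box[axis].stop)):
                child = box[:axis] + (half,) + box[axis + 1:]
                recursive_partition(volumes, labels, child, min_size, alpha, attrs)
            return attrs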

  13. Ultra-realistic 3-D imaging based on colour holography

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue (mainly Denisyuk) colour holograms and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend strongly on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  14. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  15. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout since it implies the way how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image-level. Using latent variables, we implicitly model the sublevel semantics of the image, which enrich the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  16. Cordless hand-held optical 3D sensor

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase correlation based fringe projection technique will be presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both of which are battery powered. The data transfer to a base station will be done via WLAN. This gives the possibility to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor will be hand-held by the user, illuminating the object with a sequence of less than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach which combines the epipolar constraint with robust phase correlation utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be utilized to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which will be shown in the paper.

  17. Stereoscopic particle tracking for 3D touch, vision and closed-loop control in optical tweezers

    Force measurement in an interactive 3D micromanipulation system can allow the user to make delicate adjustments, and to explore surfaces with touch as well as vision. We present a system to achieve this on the micron scale using stereoscopic particle tracking combined with holographic optical tweezers, which can track particles with nanometre accuracy. 2D tracking of particles in each of the stereo images gives 3D positions for each particle. This takes less than 200 µs per image pair, using a 1D 'symmetry transform' applied to each row and column of a 2D image, which can maintain tracking of particles throughout the 10 µm axial range. The only parameters required are the geometry of the imaging system, and therefore there is no need to recalibrate for different particle sizes or refractive indices. Consequently, we can calculate the force exerted by the optical trap in real time at 1 kilohertz, allowing us to implement a force-feedback interface (with a loop rate of 400 Hz). In combination with our OpenGL hologram calculation engine, the system has a closed-loop bandwidth of 20 Hz. This allows us to stabilize trapped particles axially through active feedback, cancelling out some Brownian motion. For the weak traps we use here (spring constant k≈2 pN µm−1), this results in a threefold increase in axial stiffness. We demonstrate the 3D interface by probing an oil droplet, mapping out its surface in the y–z plane
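
    The conversion from a matched particle position in the two stereo images to a 3-D coordinate can be sketched, for an idealized parallel-axis stereo geometry, as below; the actual system derives the geometry from the imaging optics of the holographic tweezers, so the focal length and baseline here are placeholder parameters.

        def stereo_to_3d(xl, yl, xr, f_px, baseline_um):
            """3D position from a matched particle in a parallel-axis stereo pair.

            xl, yl      : particle centre in the left image (pixels, relative to the optical axis)
            xr          : particle x-centre in the right image (pixels)
            f_px        : focal length expressed in pixels (assumed known from calibration)
            baseline_um : separation of the two viewpoints in micrometres
            Assumes a nonzero disparity (the particle is at finite depth).
            """
            disparity = xl - xr                      # pixels
            z = f_px * baseline_um / disparity       # depth
            x = xl * z / f_px
            y = yl * z / f_px
            return x, y, z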

  18. A Jones matrix formalism for simulating 3D Polarised Light Imaging of brain tissue

    Menzel, Miriam; De Raedt, Hans; Reckfort, Julia; Amunts, Katrin; Axer, Markus

    2015-01-01

    The neuroimaging technique 3D Polarised Light Imaging (3D-PLI) provides a high-resolution reconstruction of nerve fibres in human post-mortem brains. The orientations of the fibres are derived from birefringence measurements of histological brain sections assuming that the nerve fibres - consisting of an axon and a surrounding myelin sheath - are uniaxial birefringent and that the measured optic axis is oriented in direction of the nerve fibres (macroscopic model). Although experimental studies support this assumption, the molecular structure of the myelin sheath suggests that the birefringence of a nerve fibre can be described more precisely by multiple optic axes oriented radially around the fibre axis (microscopic model). In this paper, we compare the use of the macroscopic and the microscopic model for simulating 3D-PLI by means of the Jones matrix formalism. The simulations show that the macroscopic model ensures a reliable estimation of the fibre orientations as long as the polarimeter does not resolve ...
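
    The Jones-calculus building blocks used in such simulations can be sketched as follows: a linear polarizer and a linear retarder whose fast axis lies along the in-plane fibre direction. The crossed-polarizer arrangement below is a simplification; the actual 3D-PLI polarimeter also contains a quarter-wave retarder and rotates the polarizer/analyzer pair, and the retardance and fibre angle used here are arbitrary example values.

        import numpy as np

        def rot(theta):
            """2x2 rotation matrix."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        def polarizer(theta):
            """Jones matrix of a linear polarizer with transmission axis at angle theta."""
            return rot(theta) @ np.array([[1, 0], [0, 0]]) @ rot(-theta)

        def retarder(delta, phi):
            """Jones matrix of a linear retarder (retardance delta, fast axis at angle phi)."""
            J = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
            return rot(phi) @ J @ rot(-phi)

        # light through a polarizer, a birefringent fibre section, and a crossed analyzer
        E_in = np.array([1.0, 0.0])
        E_out = polarizer(np.pi / 2) @ retarder(0.8, np.deg2rad(30)) @ polarizer(0.0) @ E_in
        intensity = float(np.abs(E_out) @ np.abs(E_out))
        print(intensity)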

  19. 3D parameter reconstruction in hyperspectral diffuse optical tomography

    Saibaba, Arvind K.; Krishnamurthy, Nishanth; Anderson, Pamela G.; Kainerstorfer, Jana M.; Sassaroli, Angelo; Miller, Eric L.; Fantini, Sergio; Kilmer, Misha E.

    2015-03-01

    The imaging of shape perturbation and chromophore concentration using Diffuse Optical Tomography (DOT) data can be mathematically described as an ill-posed and non-linear inverse problem. The reconstruction algorithm for hyperspectral data using a linearized Born model is prohibitively expensive, both in terms of computation and memory. We model the shape of the perturbation using a parametric level-set (PaLS) approach. We discuss novel computational strategies for reducing the computational cost based on a Krylov subspace approach for parametric linear systems and a compression strategy for the parameter-to-observation map. We demonstrate the validity of our approach by comparison with experiments.

  20. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
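
    As a generic example of the class of iterative algorithms discussed (not the authors' specific method), the following Python sketch implements SIRT with a nonnegativity constraint for a few-view scan, assuming a precomputed system matrix A (dense or scipy.sparse):

        import numpy as np

        def sirt(A, y, n_iter=100):
            """Simultaneous Iterative Reconstruction Technique for few-view CT.

            A : (M, N) system matrix (ray sums over N voxels for M measured rays)
            y : (M,) measured projections
            """
            # row/column sums used as preconditioners (guard against division by zero)
            R = 1.0 / np.maximum(np.asarray(A.sum(axis=1)).ravel(), 1e-12)
            C = 1.0 / np.maximum(np.asarray(A.sum(axis=0)).ravel(), 1e-12)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += C * (A.T @ (R * (y - A @ x)))
                np.clip(x, 0, None, out=x)          # nonnegativity prior
            return x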

  1. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias, E-mail: theobold.fuchs@iis.fraunhofer.de; Sukowski, Frank [Fraunhofer Development Center X-ray Technology EZRT, Flugplatzstr. 75, 90768 Fürth (Germany); Dittmann, Jonas; Hanke, Randolf [Chair of X-ray Microscopy, Institute of Physics and Astronomy, Julius-Maximilian-University Würzburg, Josef-Martin-Weg 63, 97074 Würzburg (Germany)

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  2. 3D Reconstruction of virtual colon structures from colonoscopy images.

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  3. X-ray scattering in the elastic regime as source for 3D imaging reconstruction technique

    Kocifaj, Miroslav; Mego, Michal

    2015-11-01

    X-ray beams propagate across a target object before they are projected onto a regularly spaced array of detectors to produce a routine X-ray image. A 3D attenuation coefficient distribution is obtained by tomographic reconstruction where scattering is usually regarded as a source of parasitic signals which increase the level of electromagnetic noise that is difficult to eliminate. However, the elastically scattered radiation could be a valuable source of information, because it can provide a 3D topology of electron densities and thus contribute significantly to the optical characterization of the scanned object. The scattering and attenuation data form a complementary base for concurrent retrieval of both electron density and attenuation coefficient distributions. In this paper we developed the 3D reconstruction method that combines both data inputs and produces better image resolution compared to traditional technology.

  4. 3D electrical tomographic imaging using vertical arrays of electrodes

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  5. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can reduce considerably the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture provide effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.

  6. 3D printing method for freeform fabrication of optical phantoms simulating heterogeneous biological tissue

    Wang, Minjie; Shen, Shuwei; Yang, Jie; Dong, Erbao; Xu, Ronald

    2014-03-01

    The performance of biomedical optical imaging devices relies heavily on appropriate calibration. However, many existing calibration phantoms for biomedical optical devices are based on homogeneous materials, without considering the multi-layer heterogeneous structures observed in biological tissue. Using such a phantom for optical calibration may result in measurement bias. To overcome this problem, we propose a 3D printing method for freeform fabrication of tissue-simulating phantoms with multilayer heterogeneous structure. The phantom simulates not only the morphologic characteristics of biological tissue but also its absorption and scattering properties. The printing system is based on a 3D motion platform with coordinated control of the DC motors. A special jet nozzle is designed to mix base, scattering, and absorption materials at different ratios. 3D tissue structures are fabricated through layer-by-layer printing with selective deposition of phantom materials of different ingredients. Different mixing ratios of base, scattering and absorption materials have been tested in order to optimize the printing outcome. A spectrometer and a tissue spectrophotometer are used for characterizing phantom absorption and scattering properties. The goal of this project is to fabricate skin-tissue-simulating phantoms as a traceable standard for the calibration of biomedical optical spectral devices.

  7. Model based 3D segmentation and OCT image undistortion of percutaneous implants.

    Müller, Oliver; Donner, Sabine; Klinder, Tobias; Dragon, Ralf; Bartsch, Ivonne; Witte, Frank; Krüger, Alexander; Heisterkamp, Alexander; Rosenhahn, Bodo

    2011-01-01

    Optical Coherence Tomography (OCT) is a noninvasive imaging technique which is used here for in vivo biocompatibility studies of percutaneous implants. A prerequisite for a morphometric analysis of the OCT images is the correction of optical distortions caused by the index of refraction in the tissue. We propose a fully automatic approach for 3D segmentation of percutaneous implants using Markov random fields. Refraction correction is done by using the subcutaneous implant base as a prior for model-based estimation of the refractive index using a generalized Hough transform. Experiments show that our algorithm is competitive with manual segmentations done by experts. PMID:22003731

  8. Segmentation of the Optic Disc in 3-D OCT Scans of the Optic Nerve Head

    Lee, Kyungmoo; Niemeijer, Meindert; Garvin, Mona K.; Kwon, Young H.; Sonka, Milan; Abràmoff, Michael D.

    2009-01-01

    Glaucoma is the second leading ocular disease causing blindness due to gradual damage to the optic nerve and resultant visual field loss. Segmentations of the optic disc cup and neuroretinal rim can provide important parameters for detecting and tracking this disease. The purpose of this study is to describe and evaluate a method that can automatically segment the optic disc cup and rim in spectral-domain 3-D OCT (SD-OCT) volumes. Four intraretinal surfaces were segmented using a fast multisc...

  9. 3D Image Sensor based on Parallax Motion

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and the most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  10. Block matching 3D random noise filtering for absorption optical projection tomography

    Fumene Feruglio, P; Vinegoni, C; Weissleder, R [Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, 185 Cambridge Street, Boston, MA 02114 (United States); Gros, J [Department of Genetics, Harvard Medical School, 77 Avenue Louis Pasteur, Boston MA 02115 (United States); Sbarbati, A, E-mail: cvinegoni@mgh.harvard.ed [Department of Morphological and Biomedical Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy)

    2010-09-21

    Absorption and emission optical projection tomography (OPT), alternatively referred to as optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT), are recently developed three-dimensional imaging techniques with value for developmental biology and ex vivo gene expression studies. The techniques' principles are similar to the ones used for x-ray computed tomography and are based on the approximation of negligible light scattering in optically cleared samples. The optical clearing is achieved by a chemical procedure which aims at substituting the cellular fluids within the sample with a solution matching the refractive index of the cell membranes. Once cleared, the sample presents very low scattering and is then illuminated with a collimated light beam whose intensity is captured in transillumination mode by a CCD camera. Different projection images of the sample are subsequently obtained over a full 360° rotation, and a standard backprojection algorithm can be used in a similar fashion as for x-ray tomography in order to obtain absorption maps. Because not all biological samples present significant absorption contrast, it is not always possible to obtain projections with a good signal-to-noise ratio, a condition necessary to achieve high-quality tomographic reconstructions. Such is the case, for example, for early-stage embryos. In this work we demonstrate how, through the use of a random noise removal algorithm, the image quality of the reconstructions can be considerably improved even when the noise is strongly present in the acquired projections. Specifically, we implemented a block matching 3D (BM3D) filter, applying it separately to each acquired transillumination projection before performing a complete three-dimensional tomographic reconstruction. To test the efficiency of the adopted filtering scheme, a phantom and a real biological sample were processed. In both cases, the BM3D filter led to a signal-to-noise ratio
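
    The adopted filtering scheme (denoise every projection, then reconstruct) can be sketched as follows, assuming the third-party bm3d Python package and scikit-image's filtered backprojection as stand-ins for the authors' implementation; the parameter names (sigma_psd, filter_name) follow those packages, and the noise level is a placeholder.

        import numpy as np
        from skimage.transform import iradon
        import bm3d   # third-party PyPI package implementing BM3D (assumed available)

        def reconstruct_with_bm3d(projections, angles_deg, sigma=0.05):
            """Denoise each transillumination projection with BM3D, then backproject.

            projections : (n_angles, n_det_rows, n_det_cols) absorption projections
            angles_deg  : (n_angles,) rotation angles in degrees
            """
            denoised = np.stack([bm3d.bm3d(p, sigma_psd=sigma) for p in projections])
            # reconstruct slice by slice: build a (detector x angle) sinogram per detector row
            n_rows = denoised.shape[1]
            slices = []
            for r in range(n_rows):
                sinogram = denoised[:, r, :].T            # shape (n_det_cols, n_angles)
                slices.append(iradon(sinogram, theta=angles_deg, filter_name='ramp'))
            return np.stack(slices)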

  11. Performance of a commercial optical CT scanner and polymer gel dosimeters for 3-D dose verification

    Performance analysis of a commercial three-dimensional (3-D) dose mapping system based on optical CT scanning of polymer gels is presented. The system consists of BANG®3 polymer gels (MGS Research, Inc., Madison, CT), the OCTOPUS™ laser CT scanner (MGS Research, Inc., Madison, CT), and in-house developed software for optical CT image reconstruction and 3-D dose distribution comparison between the gel, film measurements and the radiation therapy treatment plans. Various sources of image noise (digitization, electronic, optical, and mechanical) generated by the scanner as well as optical uniformity of the polymer gel are analyzed. The performance of the scanner is further evaluated in terms of the reproducibility of the data acquisition process, the uncertainties at different levels of reconstructed optical density per unit length and the effects of scanning parameters. It is demonstrated that for BANG®3 gel phantoms held in cylindrical plastic containers, the relative dose distribution can be reproduced by the scanner with an overall uncertainty of about 3% within approximately 75% of the radius of the container. In regions located closer to the container wall, however, the scanner generates erroneous optical density values that arise from the reflection and refraction of the laser rays at the interface between the gel and the container. The analysis of the accuracy of the polymer gel dosimeter is exemplified by the comparison of the gel/OCT-derived dose distributions with those from film measurements and a commercial treatment planning system (Cadplan, Varian Corporation, Palo Alto, CA) for a 6 cm × 6 cm single field of 6 MV x rays and a 3-D conformal radiotherapy (3DCRT) plan. The gel measurements agree with the treatment plans and the film measurements within the '3%-or-2 mm' criterion throughout the usable, artifact-free central region of the gel volume. Discrepancies among the three data sets are analyzed

  12. Combining supine MRI and 3D optical scanning for improved surgical planning of breast conserving surgeries

    Pallone, Matthew J.; Poplack, Steven P.; Barth, Richard J., Jr.; Paulsen, Keith D.

    2012-02-01

    Image-guided wire localization is the current standard of care for the excision of non-palpable carcinomas during breast conserving surgeries (BCS). The efficacy of this technique depends upon the accuracy of wire placement, maintenance of the fixed wire position (despite patient movement), and the surgeon's understanding of the spatial relationship between the wire and tumor. Notably, breast shape can vary significantly between the imaging and surgical positions. Despite this method of localization, re-excision is needed in approximately 30% of patients due to the proximity of cancer to the specimen margins. These limitations make wire localization an inefficient and imprecise procedure. Alternatively, we investigate a method of image registration and finite element (FE) deformation which correlates preoperative supine MRIs with 3D optical scans of the breast surface. MRI of the breast can accurately define the extents of very small cancers. Furthermore, supine breast MR reduces the amount of tissue deformation between the imaging and surgical positions. At the time of surgery, the surface contour of the breast may be imaged using a handheld 3D laser scanner. With the MR images segmented by tissue type, the two scans are approximately registered using fiducial markers present in both acquisitions. The segmented MRI breast volume is then deformed to match the optical surface using a FE mechanical model of breast tissue. The resulting images provide the surgeon with 3D views and measurements of the tumor shape, volume, and position within the breast as it appears during surgery which may improve surgical guidance and obviate the need for wire localization.

  13. Fast 3-d tomographic microwave imaging for breast cancer detection.

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  14. Fast 3D subsurface imaging with stepped-frequency GPR

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
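
    The sparsity-regularized, linearized inversion described above can be illustrated with a generic iterative soft-thresholding (ISTA) loop. In the sketch below, the forward and adjoint callables stand in for the paper's accelerated NUFFT-based operators, and the step size is assumed to be at most 1/L, where L is the largest eigenvalue of A^H A; this is an illustrative sketch, not the authors' implementation.

        import numpy as np

        def ista(forward, adjoint, data, lam, step, n_iter=100, x0=None):
            """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
            'forward' and 'adjoint' are callables applying A and A^H
            (here placeholders for the NUFFT-based operators)."""
            x = np.zeros_like(adjoint(data)) if x0 is None else x0
            for _ in range(n_iter):
                grad = adjoint(forward(x) - data)            # gradient of data term
                z = x - step * grad                          # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return x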

  15. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    Full Text Available A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the proposed method can display internal structure by exchanging intersection images that contain internal structure. Through experiments with CT scan images, the proposed method is validated. Another applicable area of the proposed method, the design of 3D patterns of Large Scale Integrated Circuits (LSI), is also introduced. Layered patterns of an LSI can be displayed and switched by using the human eyes only. It is confirmed that the time required for displaying a layer pattern and switching to another layer by using the eyes only is much shorter than when using hands and fingers.

  16. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  17. 3D imaging of semiconductor components by discrete laminography

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach

  18. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  19. 3-D MR imaging of ectopia vasa deferentia

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  20. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  1. On a prototype for a new distributed database of volume data obtained by 3D imaging

    Marabini, Roberto; Vaquerizo, C.; Fernandez, Jose J.; Carazo García, José María; Ladjadj, M.; Odesanya, O.; Frank, J.

    1994-01-01

    R. Marabini; C. Vaquerizo; Jose J. Fernandez; Jose Maria Carazo; M. Ladjadj; O. Odesanya; J. Frank, "Prototype for a new distributed database of volume data obtained by 3D imaging", Proc. SPIE 2359, Visualization in Biomedical Computing 1994, 466 (September 9, 1994). Society of Photo-Optical Instrumentation Engineers.

  2. Air-structured optical fibre drawn from a 3D-printed preform

    Cook, Kevin; Leon-Saval, Sergio; Reid, Zane; Hossain, Md Arafat; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    A structured optical fibre is drawn from a 3D-printed structured preform. Preforms containing a single ring of holes around the core are fabricated using filament made from a modified butadiene polymer. More broadly, 3D printers capable of processing soft glasses, silica and other materials are likely to come on line in the not-so-distant future. 3D printing of optical preforms signals a new milestone in optical fibre manufacture.

  3. GPU-accelerated denoising of 3D magnetic resonance images

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
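
    Of the three filters studied, the bilateral filter is the simplest to write down explicitly. Below is a minimal, brute-force NumPy reference for a 3D bilateral filter over a small stencil (far slower than the GPU version discussed above); the parameter names and default values are illustrative assumptions. Denoising quality can then be scored against a reference volume with, e.g., np.mean((denoised - reference)**2) for the MSE metric used in the study.

        import numpy as np

        def bilateral_3d(vol, radius=2, sigma_s=1.5, sigma_r=0.05):
            """Brute-force 3D bilateral filter over a (2*radius+1)^3 stencil.
            sigma_s: spatial scale in voxels; sigma_r: range (intensity) scale."""
            vol = np.asarray(vol, dtype=np.float64)
            pad = np.pad(vol, radius, mode='reflect')
            acc = np.zeros_like(vol)
            wsum = np.zeros_like(vol)
            nz, ny, nx = vol.shape
            for dz in range(-radius, radius + 1):
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        shifted = pad[radius+dz:radius+dz+nz,
                                      radius+dy:radius+dy+ny,
                                      radius+dx:radius+dx+nx]
                        w_spatial = np.exp(-(dz*dz + dy*dy + dx*dx) / (2*sigma_s**2))
                        w_range = np.exp(-((shifted - vol)**2) / (2*sigma_r**2))
                        w = w_spatial * w_range
                        acc += w * shifted
                        wsum += w
            return acc / wsum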

  4. Spectral ladar: towards active 3D multispectral imaging

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  5. Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images

    Gill, Jeremy D.; Ladak, Hanif M.; Steinman, David A.; Fenster, Aaron

    1999-05-01

    In this paper, we report on a semi-automatic approach to segmentation of carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which first is rapidly inflated to approximately find the boundary of the artery, then is further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position in the 3D US image, which is within the carotid vessel. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved, or the 3D US image has poorly defined vessel edges.

  6. High resolution 3D imaging of synchrotron generated microbeams

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  7. High resolution 3D imaging of synchrotron generated microbeams

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery

  8. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  9. 3D modeling of satellite spectral images, radiation budget and energy budget of urban landscapes

    Gastellu-Etchegorry, J. P.

    2008-12-01

    DART EB is a model that is being developed for simulating the 3D (3 dimensional) energy budget of urban and natural scenes, possibly with topography and atmosphere. It simulates all non radiative energy mechanisms (heat conduction, turbulent momentum and heat fluxes, water reservoir evolution, etc.). It uses DART model (Discrete Anisotropic Radiative Transfer) for simulating radiative mechanisms: 3D radiative budget of 3D scenes and their remote sensing images expressed in terms of reflectance or brightness temperature values, for any atmosphere, wavelength, sun/view direction, altitude and spatial resolution. It uses an innovative multispectral approach (ray tracing, exact kernel, discrete ordinate techniques) over the whole optical domain. This paper presents two major and recent improvements of DART for adapting it to urban canopies. (1) Simulation of the geometry and optical characteristics of urban elements (houses, etc.). (2) Modeling of thermal infrared emission by vegetation and urban elements. The new DART version was used in the context of the CAPITOUL project. For that, districts of the Toulouse urban data base (Autocad format) were translated into DART scenes. This allowed us to simulate visible, near infrared and thermal infrared satellite images of Toulouse districts. Moreover, the 3D radiation budget was used by DARTEB for simulating the time evolution of a number of geophysical quantities of various surface elements (roads, walls, roofs). Results were successfully compared with ground measurements of the CAPITOUL project.

  10. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subject to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shape of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBCs surfaces due to adhesion on the glass-substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine volume, superficial area, sphericity index and RBCs refractive index for each osmotic condition.

  11. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  12. Preparing diagnostic 3D images for image registration with planning CT images

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information which is poorly defined or absent in planning CT images. Our two years of clinical experience has shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film, from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before this data is suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As each problem was discovered we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non-contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We identified the specific steps needed to correct these problems and the order in which they must be applied. Also, we found that fast feedback and large image capacity (at least 2000 × 2000 12-bit pixels) are essential for practical application.
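
    One of the problems listed above, irregularly spaced scans, can be corrected by resampling the slice stack onto a uniform spacing. The sketch below assumes that linear interpolation between slices is acceptable and that slice positions are sorted in increasing order; it is an illustration of the idea, not part of the ediScan toolkit.

        import numpy as np
        from scipy.interpolate import interp1d

        def resample_slices(slices, z_positions, dz):
            """Resample an irregularly spaced stack of slices onto uniform spacing dz.
            'slices' is (nz, ny, nx); 'z_positions' gives the acquired slice locations
            (sorted ascending, same length as the first axis of 'slices')."""
            slices = np.asarray(slices, dtype=float)
            z = np.asarray(z_positions, dtype=float)
            z_new = np.arange(z[0], z[-1] + 1e-9, dz)   # stay within the acquired range
            f = interp1d(z, slices, axis=0, kind='linear')
            return f(z_new), z_new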

  13. Architectures and algorithms for all-optical 3D signal processing

    Giglmayr, Josef

    1999-07-01

    All-optical signal processing by >= 2D lightwave circuits (LCs) is (i) aimed to allow the (later) inclusion of the frequency domain and is (ii) subject to photonic integration, and thus the architectural and algorithmic framework has to be prepared carefully. Much work has been done in >= 2D algebraic system theory/modern control theory, which has been applied in the electronic field of signal and image processing. For the application to modeling, analysis and design of the proposed 3D lightwave circuits (LCs), some elements are needed to describe and evaluate the system efficiency, as the number of system states of 3D LCs increases dramatically with regard to the number of i/o. Several problems, arising throughout such an attempt, are made transparent and solutions are proposed.

  14. Fast 3D T1-weighted brain imaging at 3 Tesla with modified 3D FLASH sequence

    Longitudinal relaxation times (T1) of white and gray matter become close at high magnetic field. Therefore, classical T1-sensitive methods, like spoiled FLASH, fail to give sufficient contrast in human brain imaging at 3 Tesla. An excellent T1 contrast can be achieved at high field by gradient echo imaging with a preparatory inversion pulse. The inversion recovery (IR) preparation can be combined with fast 2D gradient echo scans. In this paper we present an application of this technique to rapid 3-dimensional imaging. The new technique, called 3D SIR FLASH, was implemented on a Bruker MSLX system equipped with a 3 T, 90 cm horizontal bore magnet operating at the Centre Hospitalier in Rouffach, France. The new technique was used to acquire images of healthy volunteers, which were compared with those obtained by traditional 3D imaging. White and gray matter are clearly distinguishable when 3D SIR FLASH is used. The total acquisition time for a 128x128x128 image was 5 minutes. Three-dimensional visualization with facet representation of surfaces and oblique sections was done off-line on an INDIGO Extreme workstation. The new technique is widely used at FORENAP, Centre Hospitalier in Rouffach, Alsace. (author)
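
    The T1 contrast exploited here follows from the textbook inversion-recovery behaviour of the longitudinal magnetization: for inversion time TI and repetition time TR, and ignoring the influence of the FLASH readout (on which the actual 3D SIR FLASH signal also depends),

        M_z(\mathrm{TI}) \approx M_0 \left( 1 - 2 e^{-\mathrm{TI}/T_1} + e^{-\mathrm{TR}/T_1} \right),

    so tissues with different T1 recover at different rates, and a tissue can be nulled at approximately TI = T_1 \ln 2 when TR >> T_1. This is a standard relation added for context, not a formula taken from the record.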

  15. Multimodal Registration and Fusion for 3D Thermal Imaging

    Moulay A. Akhloufi; Benjamin Verney

    2015-01-01

    3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we witness an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced manufactured parts inspections and metrology analysis. However, we are not able to detect subsurface defects. This kind ...

  16. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we therefore proposed a new method, which is called the Mask automatic detecting method, to improve the structure results from the motion reconstruction. Note that we define vehicle and guardrail regions as “mask” in this paper since the features on them should be masked out to avoid poor matches. After removing the feature points in the Mask with our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our contrast experiments with the typical pipeline of structure from motion (SfM) reconstruction methods, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features in the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct several copies of a building when there was only one target building.
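
    The key idea of the Mask method, discarding features that fall on vehicles and guardrails before matching, can be illustrated with OpenCV, whose feature detectors accept a detection mask directly. In the sketch below, ORB is only a stand-in detector and the mask-generation step is assumed to exist elsewhere; the paper's own detector, matcher and region-detection method may differ.

        import cv2

        def detect_features_outside_mask(image_bgr, mask_regions):
            """Detect features only outside the 'Mask' (vehicle/guardrail) regions.
            mask_regions: uint8 array, 255 where vehicles/guardrails were detected.
            OpenCV treats its mask argument as 'detect here', so we invert it first."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            keep = cv2.bitwise_not(mask_regions)        # 255 = keep, 0 = masked out
            orb = cv2.ORB_create(nfeatures=4000)
            keypoints, descriptors = orb.detectAndCompute(gray, keep)
            return keypoints, descriptors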

  17. Preclinical, fluorescence and diffuse optical tomography: non-contact instrumentation, modeling and time-resolved 3D reconstruction

    Time-Resolved Diffuse Optical Tomography (TR-DOT) is a new non-invasive imaging technique increasingly used in the clinical and preclinical fields. It yields optical absorption and scattering maps of the explored organs, and related physiological parameters. Time-Resolved Fluorescence Diffuse Optical Tomography (TR-FDOT) is based on the detection of fluorescence photons. It provides spatio-temporal maps of fluorescent probe concentrations and lifetimes, and allows access to metabolic and molecular imaging, which is important for diagnosis and therapeutic monitoring, particularly in oncology. The main goal of this thesis was to reconstruct 3D TR-DOT/TR-FDOT images of small animals using time-resolved optical technology. Data were acquired using optical fibers fixed around the animal without contact with its surface. The work was achieved in four steps: 1) setting up an imaging device to record the 3D coordinates of an animal's surface; 2) modeling the non-contact approach to solve the forward problem; 3) processing of the measured signals taking into account the impulse response of the device; 4) implementation of a new image reconstruction method based on a selection of carefully chosen points. As a result, good-quality 3D optical images were obtained owing to reduced cross-talk between absorption and scattering. Moreover, the computation time was cut down compared to full-time methods using whole temporal profiles. (author)

  18. A 3-D fluorescence imaging system incorporating structured illumination technology

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution, optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies, and supports the use of different lenses. A calibration process was developed for the system to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information through depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
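
    Grid-projection structured illumination of this kind is commonly demodulated from three images taken at grid phases of 0°, 120° and 240°: in-focus (modulated) light survives the square-law combination below while out-of-focus light cancels. This is the classic optical-sectioning formula (Neil et al.), shown here only as an illustration; the student team's depth modulation analysis may have differed in detail.

        import numpy as np

        def si_section(i0, i120, i240):
            """Square-law demodulation of three grid images taken at 0, 120 and 240
            degree grid phases. Returns the optically sectioned (in-focus) image."""
            i0, i120, i240 = (np.asarray(a, dtype=float) for a in (i0, i120, i240))
            return np.sqrt((i0 - i120)**2 + (i120 - i240)**2 + (i240 - i0)**2)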

  19. Computational ghost imaging versus imaging laser radar for 3D imaging

    Hardy, Nicholas D

    2012-01-01

    Ghost imaging has been receiving increasing interest for possible use as a remote-sensing system. There has been little comparison, however, between ghost imaging and the imaging laser radars with which it would be competing. Toward that end, this paper presents a performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar. Both are considered for range-resolving (3D) imaging of a collection of rough-surfaced objects at standoff ranges in the presence of atmospheric turbulence. Their spatial resolutions and signal-to-noise ratios are evaluated as functions of the system parameters, and these results are used to assess each system's performance trade-offs. Scenarios in which a reflective ghost-imaging system has advantages over a laser radar are identified.

  20. 3D imaging of nanomaterials by discrete tomography

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi2 nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.
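
    A DART-style iteration alternates algebraic updates (e.g. SIRT) with a segmentation step that snaps the current reconstruction to the few known grey levels and then frees only the boundary voxels for further updating. The sketch below shows only that segmentation/boundary-selection step (the projector and the algebraic updates are omitted); it is a schematic illustration under those assumptions, not the published DART code.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def dart_segment_and_boundary(recon, grey_levels):
            """Snap a continuous reconstruction to the nearest known grey level and
            mark boundary voxels (those touching a different grey level). In DART,
            only boundary voxels (plus a few random ones) are updated by the next
            algebraic iterations; interior voxels stay fixed at their grey level."""
            recon = np.asarray(recon, dtype=float)
            levels = np.asarray(grey_levels, dtype=float)
            idx = np.argmin(np.abs(recon[..., None] - levels), axis=-1)
            segmented = levels[idx]
            boundary = np.zeros(segmented.shape, dtype=bool)
            for g in levels:
                region = segmented == g
                # a voxel of this level is a boundary voxel if it touches the complement
                boundary |= region & binary_dilation(~region)
            return segmented, boundary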

  1. Orthodontic treatment plan changed by 3D images

    Clinical application of CBCT is most often required for the dental phenomena of impacted teeth, hyperodontia, transposition, ankylosis or root resorption, and other pathologies in the maxillofacial area. The goal we set ourselves is to show how information from 3D images changes the protocol of orthodontic treatment. As material, we present six of our clinical cases and the change in the treatment plan that was made after analyzing the information carried in the three planes of CBCT. These cases are casuistic in orthodontic practice and require an individual approach during their analysis and decision-making. Our discussion concerns the exposure of impacted teeth, where we need to evaluate their vertical depth and mediodistal relations to the bone structures. In patients with hyperodontia, the assessment is of utmost importance to decide which of the teeth are to be extracted and which are to be aligned into the dental arch. The conclusion we draw is that diagnostic information is essential for decisions about the treatment plan. Exact imaging will lead to a better treatment plan and more predictable results. (authors) Key words: CBCT. IMPACTED CANINES. HYPERODONTIA. TRANSPOSITION

  2. Parallel Imaging of 3D Surface Profile with Space-Division Multiplexing

    Hyung Seok Lee

    2016-01-01

    Full Text Available We have developed a modified optical frequency domain imaging (OFDI) system that performs parallel imaging of three-dimensional (3D) surface profiles by using the space division multiplexing (SDM) method with dual-area swept-source beams. We have also demonstrated that 3D surface information for two different areas could be obtained at the same time with only one camera by our method. In this study, dual fields of view (FOVs) of 11.16 mm × 5.92 mm were achieved within 0.5 s. The height range for each FOV was 460 µm, and the axial and transverse resolutions were 3.6 and 5.52 µm, respectively.

  3. Parallel Imaging of 3D Surface Profile with Space-Division Multiplexing

    Lee, Hyung Seok; Cho, Soon-Woo; Kim, Gyeong Hun; Jeong, Myung Yung; Won, Young Jae; Kim, Chang-Seok

    2016-01-01

    We have developed a modified optical frequency domain imaging (OFDI) system that performs parallel imaging of three-dimensional (3D) surface profiles by using the space division multiplexing (SDM) method with dual-area swept-source beams. We have also demonstrated that 3D surface information for two different areas could be obtained at the same time with only one camera by our method. In this study, dual fields of view (FOVs) of 11.16 mm × 5.92 mm were achieved within 0.5 s. The height range for each FOV was 460 µm, and the axial and transverse resolutions were 3.6 and 5.52 µm, respectively. PMID:26805840

  4. A generic synthetic image generator package for the evaluation of 3D Digital Image Correlation and other computer vision-based measurement techniques

    Garcia, Dorian; Orteu, Jean-José; Robert, Laurent; Wattrisse, Bertrand; Bugarin, Florian

    2013-01-01

    Stereo digital image correlation (also called 3D DIC) is a common measurement technique in experimental mechanics for measuring 3D shapes or 3D displacement/strain fields, in research laboratories as well as in industry. Nevertheless, like most of the optical full-field measurement techniques, 3D DIC suffers from a lack of information about its metrological performances. For the 3D DIC technique to be fully accepted as a standard measurement technique it is of key importance to assess its mea...

  5. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuff have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided lateral resolution of 12 μm and 3.0 μm axial resolution in air and 0.27 volume/s imaging speed, which could provide the surgeon with clearly visualized vessel lumen wall and suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize the blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameter less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in decision-making process intra-operatively and avoid post-operative complications.

  6. 3D spectral imaging system for anterior chamber metrology

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data is desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces, however accurate curvature measurements for single point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components including lenslet arrays and a 2D sensor to provide a path towards low cost implementation. We demonstrate first prototypes based on 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on Porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans in excess of 75 frames per second.

  7. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  8. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Vol. 32, No. 6 (2008), pp. 513-520. ISSN 0895-6111. R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: image segmentation * Gaussian mixture model * 3D image analysis. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.192, year: 2008. http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  9. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  10. GammaModeler 3-D gamma-ray imaging technology

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide a 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.

  11. 3-D Reconstruction of Medical Image Using Wavelet Transform and Snake Model

    Jinyong Cheng

    2009-12-01

    Full Text Available Medical image segmentation is an important step in 3-D reconstruction, and 3-D reconstruction from medical images is an important application of computer graphics and biomedical image processing. An improved image segmentation method which is suitable for 3-D reconstruction is presented in this paper, and a 3-D reconstruction algorithm is used to reconstruct the 3-D model from medical images. A rough edge map is first obtained by multi-scale wavelet transform. Starting from this rough edge, an improved gradient vector flow snake model is used to find the object contour in the image. In the experiments, we reconstruct 3-D models of the kidney, liver and brain putamen. The experimental results indicate that the new algorithm can produce accurate 3-D reconstructions.
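
    As a rough illustration of the per-slice segmentation step described above, the sketch below refines a circular initialization toward an organ boundary on a single slice. It uses scikit-image's classical snake as a stand-in for the paper's improved gradient vector flow snake, and Gaussian smoothing as a stand-in for the multi-scale wavelet rough-edge step, so it only approximates the method; stacking the per-slice contours would then provide the surface for 3-D reconstruction.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def refine_contour(slice2d, center, radius, n_pts=200):
            """Refine a rough circular initialization to an object boundary with a
            classical snake (a stand-in for the improved GVF snake of the paper)."""
            s = np.linspace(0, 2 * np.pi, n_pts)
            init = np.column_stack([center[0] + radius * np.sin(s),
                                    center[1] + radius * np.cos(s)])  # (row, col) points
            smoothed = gaussian(slice2d, sigma=3, preserve_range=True)
            return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)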

  12. Superimposing of virtual graphics and real image based on 3D CAD information

    2000-01-01

    Proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by features, and superimposing a computer-built virtual environment (VE) onto real images taken by a CCD camera, and presents computer simulation results.

  13. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Optimizations in display driver, panel timing firmware, backlight hardware, eyewear optical stack, and synch mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action:- 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100uS latency control (via BT Sig) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for Monitors and TVs to monetize via PC attach.

  14. NDE of spacecraft materials using 3D Compton backscatter x-ray imaging

    Burke, E. R.; Grubsky, V.; Romanov, V.; Shoemaker, K.

    2016-02-01

    We present the results of testing of the NDE performance of a Compton Imaging Tomography (CIT) system for single-sided, penetrating 3D inspection. The system was recently developed by Physical Optics Corporation (POC) and delivered to NASA for testing and evaluation. The CIT technology is based on 3D structure mapping by collecting the information on density profiles in multiple object cross sections through hard x-ray Compton backscatter imaging. The individual cross sections are processed and fused together in software, generating a 3D map of the density profile of the object which can then be analyzed slice-by-slice in x, y, or z directions. The developed CIT scanner is based on a 200-kV x-ray source, flat-panel x-ray detector (FPD), and apodized x-ray imaging optics. The CIT technology is particularly well suited to the NDE of lightweight aerospace materials, such as the thermal protection system (TPS) ceramic and composite materials, micrometeoroid and orbital debris (MMOD) shielding, spacecraft pressure walls, inflatable habitat structures, composite overwrapped pressure vessels (COPVs), and aluminum honeycomb materials. The current system provides 3D localization of defects and features with field of view 20 × 12 × 8 cm³ and spatial resolution ~2 mm. In this paper, we review several aerospace NDE applications of the CIT technology, with particular emphasis on TPS. Based on the analysis of the testing results, we provide recommendations for continued development on TPS applications that can benefit the most from the unique capabilities of this new NDE technology.

  15. Step-index optical fibre drawn from 3D printed preforms

    Cook, Kevin; Canning, John; Chartier, Loic; Athanaze, Tristan; Hossain, Md Arafat; Han, Chunyang; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    Optical fibre is drawn from a dual-head 3D printer fabricated preform made of two optically transparent plastics with a high index core (NA ~ 0.25, V > 60). The asymmetry observed in the fibre arises from asymmetry in the 3D printing process. The highly multi-mode optical fibre has losses measured by cut-back as low as α ~ 0.44 dB/cm in the near IR.

  16. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In the case of image space-based matching, we observed some blanks in the 3D point clouds. In the case of object space-based matching, we observed more blunders than with image space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
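
    The stereo coverage network described above can be sketched directly with a graph library: images are nodes, shared tiepoint counts are edge weights, and a maximum spanning tree selects a well-connected set of pairs. The snippet below assumes the tiepoint counts are already available and uses networkx as a convenient stand-in for the authors' own implementation.

        import networkx as nx

        def stereo_coverage_network(tiepoint_counts):
            """Build the stereo coverage network: nodes are images, edge weights are
            the number of tiepoints shared by an image pair, and the maximum spanning
            tree picks the pairs to feed to the local dense matcher.
            tiepoint_counts: dict mapping (image_i, image_j) -> number of tiepoints."""
            g = nx.Graph()
            for (i, j), n in tiepoint_counts.items():
                g.add_edge(i, j, weight=n)
            return nx.maximum_spanning_tree(g, weight="weight")

        # Usage (hypothetical counts):
        # mst = stereo_coverage_network({("img0", "img1"): 412, ("img1", "img2"): 388,
        #                                ("img0", "img2"): 75})
        # list(mst.edges())  # -> image pairs selected for matching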

  17. Dual wavelength optical CT scanning of anthropomorphic shaped 3D dosimeters

    To create an optical density map of a 3D dosimeter phantom, the ratio of the transmission profile (either line or planar) acquired after irradiation of the dosimeter and a pre-irradiation reference scan of the same dosimeter phantom is taken. Any uncertainty in repositioning of the phantom may result in an uncertainty in the optical density map and finally also in the derived dose maps. Correct repositioning is paramount when scanning non-cylindrical dosimeter phantoms, as any repositioning error will give rise to severe imaging artifacts. We hereby propose a different scanning technique that does not require any repositioning of the dosimeter phantom. In this method, no pre-irradiation scan is recorded; instead, the dosimeter phantom is scanned twice with light at two different wavelengths. It is demonstrated that this method is accurate in scanning non-cylindrical, anthropomorphically shaped phantoms.
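
    The conventional two-scan protocol referred to above rests on the usual Beer-Lambert relation: for each projection ray, the radiation-induced change in optical density is

        \Delta \mathrm{OD}(x, y) = \log_{10} \frac{I_{\text{pre}}(x, y)}{I_{\text{post}}(x, y)},

    which the CT reconstruction then converts into optical density per unit length, and which is why any misregistration between the pre- and post-irradiation scans propagates directly into the dose map. This is the standard single-wavelength relation given for context only; the dual-wavelength reconstruction proposed in the record is not reproduced here.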

  18. Segmented images and 3D images for studying the anatomical structures in MRIs

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying the pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was composed. This educational tool, which includes horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  19. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost and small-size 3D imaging system is needed in clinical veterinary medicine, for example, for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized at lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  20. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, the PSF broadens rapidly, however, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of form mθ on the light wave, where θ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
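
    A minimal sketch of the zone construction described above follows: M annular Fresnel zones with outer radii growing as √m, the m-th zone carrying a spiral phase mθ. The grid size and zone count are illustrative assumptions.

```python
import numpy as np

def rotating_psf_pupil(n=512, zones=7):
    """Pupil phase mask for a defocus-rotating PSF (sketch of the zone construction
    described above): the pupil is split into M annular Fresnel zones whose outer
    radii grow as sqrt(m), and the m-th zone carries a spiral phase m*theta."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    phase = np.zeros((n, n))
    for m in range(1, zones + 1):
        r_in = np.sqrt((m - 1) / zones)      # zone radii scale as sqrt(m)
        r_out = np.sqrt(m / zones)
        zone = (r >= r_in) & (r < r_out)
        phase[zone] = m * theta[zone]
    pupil = (r <= 1.0) * np.exp(1j * phase)  # clear circular aperture with spiral zones
    return pupil

# PSF at a given defocus: multiply the pupil by exp(1j * defocus * r**2) and take |FFT|^2.
```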

  1. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T

    2014-12-01

    Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal and include MRI, CT, PET and SPECT images. Of these, MRI and CT images of an organ are acquired as a stack of 2D images taken at different angles. This 2D stack of images is used to get a 3D view of the organ of interest, to aid doctors in easier diagnosis. Existing 3D reconstruction techniques are voxel-based techniques, which try to reconstruct the 3D view based on the intensity value stored at each voxel location. These techniques don't make use of the shape/depth information available in the 2D image stack. In this work, a 3D reconstruction technique for an MRI/CT 2D image stack, based on Shapelets, is proposed. Here, the shape/depth information available in each 2D image in the stack is used to obtain a 3D reconstruction, which gives a more accurate 3D view of the organ of interest. Experimental results demonstrate the efficiency of the proposed technique.

  2. Optical tomographic in-air scanner for external radiation beam 3D gel dosimetry

    Full text: Optical CT scanners are used to measure 3D radiation dose distributions in radiosensitive gels. For radiotherapy dose verification, 3D dose measurements are useful for verification of complex linear accelerator treatment planning and delivery techniques. Presently optical CTs require the use of a liquid bath to match the refractive index of the gel to minimise refraction of the light rays leading to distortion and artifacts. This work aims to develop a technique for scanning gel samples in free-air, without the requirement for a matching liquid bath. The scanner uses a He-Ne laser beam, fanned across the acrylic cylindrical gel container by a rotating mirror. The gel container was designed to produce parallel light ray paths through the gel. A pin phantom was used to quantify geometrical distortion of the reconstructed image, while uniform field exposures were used to consider noise, uniformity and artifacts. Small diameter wires provided an indication of the spatial resolution of the scanner. Pin phantom scans show geometrical distortion comparable to scanners using matching fluid baths. Noise, uniformity and artifacts were not found to be major limitations for this scanner approach. Spatial resolution was limited by laser beam spot size, typically 0.4 mm full width half maximum. A free-air optical CT scanner has been developed with the advantage of scanning without a matching fluid bath. Test results show it has potential to provide suitable quality 3D dosimetry measurements for external beam dose verification, while offering significant advantages in convenience and efficiency for routine use.

  3. Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid.

    Steven Bache

    Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS-Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10-30K to $1-3K) and the use of a 'solid tank' (which reduces noise, and the volume of refractively matched fluid from 1 liter to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated including a simple four-field box treatment, a multiple small field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1 degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line-profiles, and 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3 mm dose difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to from 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. The typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regards to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system.
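
    For readers unfamiliar with the 3%/3 mm gamma criterion quoted in these pass rates, a brute-force 2D gamma-analysis sketch is given below. The search window, wrap-around edge handling and global dose normalisation are simplifying assumptions, not the analysis actually used in the study.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """2D gamma-analysis sketch for a dose-difference/distance-to-agreement criterion
    such as 3%/3 mm.  dose_ref and dose_eval are dose maps on the same grid; the search
    is a brute-force scan over a small pixel neighbourhood with wrap-around edges."""
    ref = np.asarray(dose_ref, float)
    ev = np.asarray(dose_eval, float)
    dmax = ref.max()                                      # global normalisation
    win = int(np.ceil(2 * dta_mm / spacing_mm))           # search half-window in pixels
    gammas = np.full(ref.shape, np.inf)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            shifted = np.roll(np.roll(ev, dy, axis=0), dx, axis=1)
            dist2 = (dy * spacing_mm) ** 2 + (dx * spacing_mm) ** 2
            dd2 = ((shifted - ref) / (dose_tol * dmax)) ** 2
            gammas = np.minimum(gammas, np.sqrt(dist2 / dta_mm ** 2 + dd2))
    return 100.0 * np.mean(gammas <= 1.0)                 # percentage of points with gamma <= 1
```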

  4. Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid.

    Bache, Steven; Malcolm, Javian; Adamovics, John; Oldham, Mark

    2016-01-01

    Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS-Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10-30K to $1-3K) and the use of a 'solid tank' (which reduces noise, and the volume of refractively matched fluid from 1 liter to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated including a simple four-field box treatment, a multiple small field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1 degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line-profiles, and 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3 mm dose difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to from 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. The typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regards to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system. PMID:27019460

  5. Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid

    Bache, Steven; Malcolm, Javian; Adamovics, John; Oldham, Mark

    2016-01-01

    Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS-Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10-30K to $1-3K) and the use of a 'solid tank' (which reduces noise, and the volume of refractively matched fluid from 1 liter to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated including a simple four-field box treatment, a multiple small field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1 degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line-profiles, and 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3 mm dose difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to from 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. The typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regards to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system. PMID:27019460

  6. 3D MODELLING FROM UNCALIBRATED IMAGES – A COMPARATIVE STUDY

    Limi V L

    2014-03-01

    3D modeling is a demanding area of research. Creating a 3D world from a sequence of images captured with different mobile cameras poses an additional challenge in this field. We plan to explore this area of computer vision to model a 3D world of Indian heritage sites for virtual tourism. In this paper, a comparative study of the existing methods used for 3D reconstruction from uncalibrated image sequences was carried out. The study covers different scenarios of modeling 3D objects from uncalibrated images, including community photo collections, images taken with unknown cameras, 3D modeling using two uncalibrated images, etc. The different methods available were studied and an overall view of the techniques used in each step of 3D reconstruction was explored. The merits and demerits of each method were also compared.

  7. Label-free 3D imaging of microstructure, blood and lymphatic vessels within tissue beds in vivo

    Zhi, Zhongwei; Jung, Yeongri; Wang, Ruikang K.

    2012-01-01

    This letter reports the use of an ultrahigh resolution optical microangiography (OMAG) system for simultaneous 3D imaging of microstructure, lymphatic and blood vessels without the use of exogenous contrast agent. An automatic algorithm is developed to segment the lymphatic vessels from the microstructural images, based on the fact that the lymph fluid is optically transparent. The OMAG system is developed that utilizes a broadband supercontinuum light source, providing an axial resolution of...

  8. 3D optical phase reconstruction within PMMA samples using a spectral OCT system

    Briones-R., Manuel d. J.; De La Torre-Ibarra, Manuel H.; Mendoza Santoyo, Fernando

    2015-08-01

    The optical coherence tomography (OCT) technique has proved to be a useful method in biomedical areas such as ophthalmology, dentistry and dermatology, among many others. In all these applications the main target is to reconstruct the internal structure of the sample, from which the physician's expertise may recognize and diagnose the existence of a disease. Nowadays OCT has been taken one step further and is used to study the mechanics of particular types of materials, where the resulting information involves more than just their internal structure and includes the measurement of parameters such as displacements, stress and strain. Here we report on a spectral OCT system used to image the internal 3D microstructure and displacement maps of a PMMA (poly-methyl-methacrylate) sample subjected to deformation by controlled three-point bending and tilting. The internal mechanical response of the polymer is shown as consecutive 2D images.

  9. Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices

    Wang, Guo-Zhen; Huang, Yi-Pai; Hu, Kuo-Jui

    2012-06-01

    We propose a virtual 3D-touch system operated by a bare finger, which can detect the 3-axis (x, y, z) information of the finger. This system has a multi-wavelength optical sensor array embedded in the backplane of the TFT panel and sequential devices on the border of the panel. We developed a reflecting mode that works with a bare finger for 3D interaction. A 4-inch mobile 3D LCD with the proposed system has already been successfully demonstrated.

  10. DETERMINATION OF INTERNAL STRAIN IN 3-D BRAIDED COMPOSITES USING OPTIC FIBER STRAIN SENSORS

    Yuan, Shenfang; Huang, Rui; Li, Xianghua; Liu, Xiaohui

    2004-01-01

    A reliable understanding of the properties of 3-D braided composites is of primary importance for proper utilization of these materials. A new method is introduced to study the mechanical performance of braided composite materials using embedded optic fiber sensors. Experimental research is performed to devise a method of incorporating optic fibers into a 3-D braided composite structure. The efficacy of this new testing method is evaluated on two counts. First, the optical performance of the optic fibers is studied before and after incorporation into 3-D braided composites, as well as after completion of the manufacturing process, to validate the ability of the optic fiber to survive the manufacturing process. The influence of the incorporated optic fiber on the original braided composite is also investigated by tension and compression experiments. Second, two kinds of optic fiber sensors are co-embedded into 3-D braided composites to evaluate their respective ability to measure the internal strain. Experimental results show that multiple optic fiber sensors can be co-braided into 3-D braided composites to determine their internal strain, which is difficult to achieve with other existing methods.

  11. Four-view stereoscopic imaging and display system for web-based 3D image communication

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using 4 digital cameras, an Intel Xeon server computer system, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rates, displayed image resolution, possible color depth and number of views. From the experimental results, it is found that the proposed system can display 4-view VGA images with a full color depth of 16 bits and a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can be easily controlled in the proposed system using the developed software, so that considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in the practical application of web-based 3D image communication.

  12. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees, and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  13. Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    Niclass, Cristiano; Rochas, Alexis; Besse, Pierre-André; Charbon, Edoardo

    2005-01-01

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment are needed. Millimetric depth accuracies can b...
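
    The rangefinding relation behind such time-of-flight pixels is simply range = c·t/2 for the round trip of the reflected light pulse; a trivial sketch:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_to_depth_m(time_of_flight_s: float) -> float:
    """Round-trip time-of-flight to target distance: the light travels out and back,
    so the range is c*t/2 (the basic rangefinding relation used by such pixels)."""
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s / 2.0

# e.g. a 10 ns round trip corresponds to roughly 1.5 m of range
print(tof_to_depth_m(10e-9))
```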

  14. 340-GHz 3D radar imaging test bed with 10-Hz frame rate

    Robertson, Duncan A.; Marsh, Paul N.; Bolton, David R.; Middleton, Robert J. C.; Hunter, Robert I.; Speirs, Peter J.; Macfarlane, David G.; Cassidy, Scott L.; Smith, Graham M.

    2012-06-01

    We present a 340 GHz 3D radar imaging test bed with 10 Hz frame rate which enables the investigation of strategies for the detection of concealed threats in high risk public areas. The radar uses a wideband heterodyne scheme and fast-scanning optics to achieve moderate resolution volumetric data sets, over a limited field of view, of targets at moderate stand-off ranges. The high frame rate is achieved through the use of DDS chirp generation, fast galvanometer scanners and efficient processing which combines CPU multi-threading and GPU-based techniques, and is sufficiently fast to follow smoothly the natural motion of people.

  15. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T; Latha Parameswaran

    2014-01-01

    Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal and include MRI, CT, PET and SPECT images. Of these, MRI and CT images of an organ are acquired as a stack of 2D images, taken at different a...

  16. Improved Uav-Borne 3d Mapping by Fusing Optical and Laserscanner Data

    Jutzi, B.; Weinmann, M.; Meidow, J.

    2013-08-01

    In this paper, a new method for fusing optical and laserscanner data is presented for improved UAV-borne 3D mapping. We propose to equip an unmanned aerial vehicle (UAV) with a small platform which includes two sensors: a standard low-cost digital camera and a lightweight Hokuyo UTM-30LX-EW laserscanning device (210 g without cable). Initially, a calibration is carried out for the utilized devices. This involves a geometric camera calibration and the estimation of the position and orientation offset between the two sensors by lever-arm and bore-sight calibration. Subsequently, a feature tracking is performed through the image sequence by considering extracted interest points as well as the projected 3D laser points. These 2D results are fused with the measured laser distances and fed into a bundle adjustment in order to obtain a Simultaneous Localization and Mapping (SLAM). It is demonstrated that an improvement in terms of precision for the pose estimation is derived by fusing optical and laserscanner data.
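
    A minimal sketch of how the calibrated offsets could be used to project laser points into the camera image for the joint feature tracking is given below; the pinhole model without lens distortion and the matrix conventions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def project_laser_points(points_laser, R_cam_laser, t_cam_laser, K):
    """Project 3D laserscanner points into the camera image using the bore-sight
    rotation R and lever-arm offset t from the laser frame to the camera frame,
    plus the camera matrix K from the geometric calibration (pinhole model, no
    lens distortion)."""
    pts = np.asarray(points_laser, float)                             # (N, 3) in the laser frame
    cam = pts @ np.asarray(R_cam_laser).T + np.asarray(t_cam_laser)   # laser -> camera frame
    uvw = cam @ np.asarray(K).T                                       # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]                                   # pixel coordinates (u, v)
```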

  17. Novel implementations of optical switch control module and 3D-CSP for 10 Gbps active optical access system

    Wakayama, Koji; Okuno, Michitaka; Matsuoka, Yasunobu; Hosomi, Kazuhiko; Sagawa, Misuzu; Sugawara, Toshiki

    2009-11-01

    We propose an optical switch control procedure for a high-performance and cost-effective 10 Gbps Active Optical Access System (AOAS) in which optical switches are used instead of optical splitters in a PON (Passive Optical Network). We demonstrate that the implemented optical switch control module on the Optical Switching Unit (OSW) with logic circuits works effectively. We also propose a compact optical 3D-CSP (Chip Scale Package) to achieve the high performance of AOAS without losing the cost advantage of PON. We demonstrate that the implemented 3D-CSP works effectively.

  18. Statistical skull models from 3D X-ray images

    Berar, Maxime; Bailly, G.; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present 2 statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes, extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
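
    A compact sketch of the statistical shape model step (PCA on registered, vectorised meshes) follows; the mesh handling and the synthesis convention in standard deviations are assumptions made for illustration.

```python
import numpy as np

def build_shape_model(meshes):
    """Statistical shape model sketch: each registered mesh is a (V, 3) vertex array in a
    shared reference frame; stacking them as flat vectors and running PCA gives a mean
    shape plus principal modes of shape variation."""
    X = np.stack([m.reshape(-1) for m in meshes])     # (n_subjects, 3*V)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal modes of shape variation
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    variances = s ** 2 / (len(meshes) - 1)
    return mean, Vt, variances

def synthesize(mean, modes, variances, coeffs):
    """New shape = mean + sum_k b_k * sqrt(var_k) * mode_k (coeffs given in standard deviations)."""
    k = len(coeffs)
    return mean + (np.asarray(coeffs) * np.sqrt(variances[:k])) @ modes[:k]
```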

  19. Multi Length Scale Imaging of Flocculated Estuarine Sediments; Insights into their Complex 3D Structure

    Wheatland, Jonathan; Bushby, Andy; Droppo, Ian; Carr, Simon; Spencer, Kate

    2015-04-01

    Suspended estuarine sediments form flocs that are compositionally complex, fragile and irregularly shaped. The fate and transport of suspended particulate matter (SPM) is determined by the size, shape, density, porosity and stability of these flocs, and prediction of SPM transport requires accurate measurements of these three-dimensional (3D) physical properties. However, the multi-scaled nature of flocs, in addition to their fragility, makes their characterisation in 3D problematic. Correlative microscopy is a strategy involving the spatial registration of information collected at different scales using several imaging modalities. Previously, conventional optical microscopy (COM) and transmission electron microscopy (TEM) have enabled 2-dimensional (2D) floc characterisation at the gross (> 1 µm) and sub-micron scales respectively. Whilst this has proven insightful, there remains a critical spatial and dimensional gap preventing the accurate measurement of geometric properties and an understanding of how structures at different scales are related. Within the life sciences, volumetric imaging techniques such as 3D micro-computed tomography (3D µCT) and focused ion beam scanning electron microscopy [FIB-SEM (or FIB-tomography)] have been combined to characterise materials at the centimetre to micron scale. Combining these techniques with TEM enables an advanced correlative study, allowing material properties across multiple spatial and dimensional scales to be visualised. The aims of this study are: 1) to formulate an advanced correlative imaging strategy combining 3D µCT, FIB-tomography and TEM; 2) to acquire 3D datasets; 3) to produce a model allowing their co-visualisation; 4) to interpret 3D floc structure. To reduce the chance of structural alterations during analysis, samples were first 'fixed' in 2.5% glutaraldehyde/2% formaldehyde before being embedded in Durcupan resin. Intermediate steps were implemented to improve contrast and remove pore water, achieved by the

  20. Long-lived dipolar molecules and Feshbach molecules in a 3D optical lattice

    Chotia, Amodsen; Moses, Steven A; Yan, Bo; Covey, Jacob P; Foss-Feig, Michael; Rey, Ana Maria; Jin, Deborah S; Ye, Jun

    2011-01-01

    We have realized long-lived ground-state polar molecules in a 3D optical lattice, with a lifetime of up to 25 s, which is limited only by off-resonant scattering of the trapping light. Starting from a 2D optical lattice, we observe that the lifetime increases dramatically as a small lattice potential is added along the tube-shaped lattice traps. The 3D optical lattice also dramatically increases the lifetime for weakly bound Feshbach molecules. For a pure gas of Feshbach molecules, we observe a lifetime of >20 s in a 3D optical lattice; this represents a 100-fold improvement over previous results. This lifetime is also limited by off-resonant scattering, the rate of which is related to the size of the Feshbach molecule. Individually trapped Feshbach molecules in the 3D lattice can be converted to pairs of K and Rb atoms and back with nearly 100% efficiency.

  1. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats

    Tang, Jianbo; Coleman, Jason E.; Dai, Xianjin; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments rev...

  2. Performance Evaluating of some Methods in 3D Depth Reconstruction from a Single Image

    Wen, Wei

    2009-01-01

    We studied the problem of 3D reconstruction from a single image. 3D reconstruction is one of the basic problems in computer vision and is usually achieved by using two or multiple images of a scene. However, recent research in the computer vision field has enabled us to recover 3D information even from a single image. The methods used in such reconstructions are based on depth information, projection geometry, image content, human psychology and so on. Each met...

  3. A prototype fan-beam optical CT scanner for 3D dosimetry

    Purpose: The objective of this work is to introduce a prototype fan-beam optical computed tomography scanner for three-dimensional (3D) radiation dosimetry. Methods: Two techniques of fan-beam creation were evaluated: a helium-neon laser (HeNe, λ = 543 nm) with line-generating lens, and a laser diode module (LDM, λ = 635 nm) with line-creating head module. Two physical collimator designs were assessed: a single-slot collimator and a multihole collimator. Optimal collimator depth was determined by observing the signal of a single photodiode with varying collimator depths. A method of extending the dynamic range of the system is presented. Two sample types were used for evaluations: nondosimetric absorbent solutions and irradiated polymer gel dosimeters, each housed in 1 liter cylindrical plastic flasks. Imaging protocol investigations were performed to address ring artefacts and image noise. Two image artefact removal techniques were performed in sinogram space. Collimator efficacy was evaluated by imaging highly opaque samples of scatter-based and absorption-based solutions. A noise-based flask registration technique was developed. Two protocols for gel manufacture were examined. Results: The LDM proved advantageous over the HeNe laser due to its reduced noise. Also, the LDM uses a wavelength more suitable for the PRESAGETM dosimeter. Collimator depth of 1.5 cm was found to be an optimal balance between scatter rejection, signal strength, and manufacture ease. The multihole collimator is capable of maintaining accurate scatter-rejection to high levels of opacity with scatter-based solutions (T < 0.015%). Imaging protocol investigations support the need for preirradiation and postirradiation scanning to reduce reflection-based ring artefacts and to accommodate flask imperfections and gel inhomogeneities. Artefact removal techniques in sinogram space eliminate streaking artefacts and reduce ring artefacts of up to ∼40% in magnitude. The flask registration
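
    As a rough illustration of turning pre- and post-irradiation transmission sinograms into a reconstructed change in optical density, a short sketch using filtered back-projection is given below. The parallel-ray assumption after rebinning, the ramp filter and the use of scikit-image are choices made for the example, not details of the prototype scanner.

```python
import numpy as np
from skimage.transform import iradon   # standard filtered back-projection (scikit-image)

def reconstruct_delta_od(sino_pre, sino_post, angles_deg):
    """Optical-CT reconstruction sketch: convert pre- and post-irradiation transmission
    sinograms into a change-in-optical-density sinogram, then apply filtered
    back-projection.  Assumes the fan-beam data have been rebinned to parallel rays;
    sinograms are arranged as (detector_bins, n_angles) as iradon expects."""
    delta_od = -np.log10(np.asarray(sino_post, float) / np.asarray(sino_pre, float))
    return iradon(delta_od, theta=angles_deg, filter_name="ramp", circle=True)
```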

  4. Recent development of 3D imaging laser sensor in Mitsubishi Electric Corporation

    Imaki, M.; Kotake, N.; Tsuji, H.; Hirai, A.; Kameyama, S.

    2013-09-01

    We have been developing 3-D imaging laser sensors for several years, because they can acquire additional information about the scene, i.e. the range data. This enhances the potential to detect unwanted people and objects, so the sensors can be utilized for applications such as safety control and security surveillance. In this paper, we focus on two types of our sensors: a high-frame-rate type and a compact type. To realize the high-frame-rate system, we have developed two key devices: a linear array receiver which has 256 single InAlAs-APD detectors and a read-out IC (ROIC) array fabricated in a SiGe-BiCMOS process, which are connected electrically to each other. Each ROIC measures not only the intensity but also the distance to the scene by high-speed analog signal processing. In addition, by scanning the mirror mechanically in a direction perpendicular to the linear image receiver, we have realized high-speed operation, in which the frame rate is over 30 Hz and the number of pixels is 256 x 256. In the compact-type 3-D imaging laser sensor development, we have succeeded in downsizing the transmitter by scanning only the laser beam with a two-dimensional MEMS scanner. To obtain a wide field-of-view image, as well as the angle of the MEMS scanner, the receiving optical system and a large-area receiver are needed. We have developed a large-detecting-area receiver that consists of 32 rectangular detectors, where the output signals of each detector are summed up. In this phase, our original circuit evaluates each signal level, removes the low-level signals, and sums them, in order to improve the signal-to-noise ratio. In the following paper, we describe the system configurations and recent experimental results of the two types of our 3-D imaging laser sensors.

  5. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    This paper presents an automatic extraction system (called TOPS-3D: Top Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images, using a model-driven analysis algorithm. Following the construction of the TOPS system we developed previously, two concepts were considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at a higher level, and the other is a parallel image-processing structure used to extract plural candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels. The first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data through the use of model information, and that the position and shape of soft tissues are extracted in correspondence with the anatomical structure. (author)

  6. Surface roughness characterization of cast components using 3D optical methods

    Nwaogu, Ugochukwu Chibuzoh; Tiedje, Niels Skat; Hansen, Hans Nørgaard

    A novel method that applies a non-contact technique using a 3D optical system to measure the roughness of selected standard surface roughness comparators used in the foundry industry is presented. This method is described in detail in the paper. Profile and area analyses were performed using scanning probe image processor (SPIP) software and the results of the surface roughness parameters obtained were subjected to statistical analyses. The bearing area ratio was introduced and applied to the surface roughness analysis. From the results, the surface quality of the standard comparators is... made in green sand moulds and the surface roughness parameter (Sa) values were compared with those of the standards. The Sa parameter suffices for the evaluation of casting surface texture. The S series comparators showed a better description of the surface of castings after shot blasting than the A series...

  7. A Pipeline for 3D Multimodality Image Integration and Computer-assisted Planning in Epilepsy Surgery

    Nowell, Mark; Rodionov, Roman; Zombori, Gergely; Sparks, Rachel; Rizzi, Michele; Ourselin, Sebastien; Miserocchi, Anna; McEvoy, Andrew; Duncan, John

    2016-01-01

    Epilepsy surgery is challenging and the use of 3D multimodality image integration (3DMMI) to aid presurgical planning is well-established. Multimodality image integration can be technically demanding, and is underutilised in clinical practice. We have developed a single software platform for image integration, 3D visualization and surgical planning. Here, our pipeline is described in step-by-step fashion, starting with image acquisition, proceeding through image co-registration, manual segmen...

  8. D3D augmented reality imaging system: proof of concept in mammography

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  9. Fast fully 3-D image reconstruction in PET using planograms.

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
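
    The central-section idea that makes this replacement possible is easy to demonstrate in a 2D analogue: integrating an image along one axis gives the same information as the zero-frequency line of its 2D Fourier transform. The snippet below shows only this 2D analogue, not the 4D planogram case.

```python
import numpy as np

# 2D analogue of the central-section theorem used above: integrating an image along one
# axis (a "backprojection-like" plane integral) matches a section through the origin of
# its 2D Fourier transform, so the integration can be replaced by a Fourier-domain slice.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

projection = img.sum(axis=0)          # integrate along columns
F = np.fft.fft2(img)
central_line = F[0, :]                # section through the origin of the 2D FFT

# The 1D FFT of the projection equals that central section (projection-slice theorem).
print(np.allclose(np.fft.fft(projection), central_line))   # True
```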

  10. Weighted 3D GS Algorithm for Image-Quality Improvement of Multi-Plane Holographic Display

    李芳; 毕勇; 王皓; 孙敏远; 孔新新

    2012-01-01

    Theoretically, the three-dimensional (3D) GS algorithm can realize 3D displays; however, the correlation of the output image is restricted because of the interaction among multiple planes, thus failing to meet the image-quality requirements of practical applications. We introduce weight factors and propose the weighted 3D GS algorithm, which can realize selective control of the correlation of a multi-plane display based on the traditional 3D GS algorithm. Improvement in image quality is accomplished by the selection of appropriate weight factors.
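
    A minimal sketch of a weighted multi-plane Gerchberg-Saxton iteration in the spirit described above is given below; the angular-spectrum propagator, the way the weights blend the amplitude constraint and all numerical parameters are assumptions for illustration rather than the authors' algorithm.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a complex field over distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))        # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def weighted_multiplane_gs(targets, distances, weights, wavelength, dx, iters=30):
    """Weighted multi-plane Gerchberg-Saxton sketch: the amplitude constraint at each
    plane is blended with weight w_k, so planes with larger weights are reproduced more
    faithfully at the cost of the others."""
    n = targets[0].shape[0]
    holo = np.exp(1j * 2 * np.pi * np.random.rand(n, n))  # random initial phase
    for _ in range(iters):
        back = np.zeros((n, n), dtype=complex)
        for target, z, w in zip(targets, distances, weights):
            plane = angular_spectrum(holo, wavelength, dx, z)
            # weighted amplitude replacement: keep the phase, pull the amplitude toward the target
            amp = (1 - w) * np.abs(plane) + w * target
            constrained = amp * np.exp(1j * np.angle(plane))
            back += angular_spectrum(constrained, wavelength, dx, -z)
        holo = np.exp(1j * np.angle(back))                 # phase-only hologram constraint
    return holo
```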

  11. Flash trajectory imaging of target 3D motion

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection which can directly extract targets from complex background and decrease the complexity of moving target image processing. Time delay integration increases information of one single frame of image so that one can directly gain the moving trajectory. In this paper, we have studied the algorithm about flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectory and can give motion parameters of moving targets.

  12. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images which have sub-pixel spatial displacements relative to each other, and generated the reference image. Then, we mapped the low-resolution images into the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different-resolution images simultaneously. We evaluated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation reconstruction methods. The experiments showed that the performance of the 3D POCS algorithm was better than that of the 3 interpolation reconstruction methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449
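
    A small 2D sketch of the POCS idea (projecting the high-resolution estimate onto the consistency set of each low-resolution observation) follows; the integer sub-pixel shifts and box-average downsampling model are simplifying assumptions, not the registration and observation model of the paper.

```python
import numpy as np

def pocs_superres(lowres_frames, shifts, scale, iters=20):
    """POCS super-resolution sketch in 2D: start from a replicated reference frame and
    repeatedly project the high-resolution estimate onto the consistency set of each
    observed low-resolution frame (known integer HR-pixel shifts and box-average
    downsampling are assumed for brevity)."""
    h, w = lowres_frames[0].shape
    hr = np.kron(lowres_frames[0], np.ones((scale, scale)))      # initial guess by replication
    for _ in range(iters):
        for frame, (dy, dx) in zip(lowres_frames, shifts):
            shifted = np.roll(np.roll(hr, -dy, axis=0), -dx, axis=1)
            # simulate the observation: box-average downsampling of the shifted HR image
            sim = shifted.reshape(h, scale, w, scale).mean(axis=(1, 3))
            err = frame - sim
            # back-distribute the residual uniformly over each HR block (projection step)
            corr = np.kron(err, np.ones((scale, scale)))
            hr = np.roll(np.roll(shifted + corr, dy, axis=0), dx, axis=1)
    return hr
```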

  13. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three dimensional (3D) cell cultures represents a big step for the better understanding of cell behavior and disease in a more natural like environment, providing not only single but multiple cell type interactions in a complex 3D matrix, highly resembling physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human...

  14. Fully automatic and robust 3D registration of serial-section microscopic images

    Ching-Wei Wang; Eric Budiman Gosno; Yen-Sheng Li

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations on data appearance. This study presents a fully automatic and robu...

  15. Measurement of Capillary Length from 3D Confocal Images Using Image Analysis and Stereology

    Janáček, Jiří; Saxl, Ivan; Mao, X. W.; Kubínová, Lucie

    Valencia: University of Valencia, 2007. p. 71. [Focus on Microscopy FOM 2007, 10.04.2007-13.04.2007, Valencia] Institutional research plan: CEZ:AV0Z50110509; CEZ:AV0Z10190503. Keywords: spo2 * 3D image analysis * capillaries * confocal microscopy. Subject RIV: EA - Cell Biology

  16. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or in ground-based facilities such as drop towers, provides a unique setting for the study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fail to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video, laser Schlieren imaging, and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have, up until now, also required excessively bulky, power-hungry equipment. However, with the advent of near-IR diode

  17. Infrared imaging of the polymer 3D-printing process

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  18. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  19. Multi-layer 3D imaging using a few viewpoint images and depth map

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced into gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images were enough to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple viewpoint images using the depth map, so that we can generate motion parallax at the same time.

  20. Realization of real-time interactive 3D image holographic display [Invited].

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed. PMID:26835944

  1. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-01-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications. PMID:27435424

  2. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information is utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of a 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computation cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.
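
    A minimal sketch of mutual information computed from a joint histogram, and of a 'combined' variant that measures the optical image against the two LiDAR channels jointly through a 3D histogram, is given below; the bin counts and the exact form of the combination are assumptions for illustration, not the estimator used in the paper.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two images from their joint histogram (the basic
    similarity measure behind MI-based registration)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def combined_mutual_information(optical, lidar_dsm, lidar_intensity, bins=32):
    """'Combined MI' sketch: similarity of the optical image against the DSM and the
    LiDAR intensity jointly, via a 3D joint histogram (3D pdf estimate)."""
    sample = np.stack([optical.ravel(), lidar_dsm.ravel(), lidar_intensity.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=bins)
    pxyz = hist / hist.sum()
    px = pxyz.sum(axis=(1, 2))            # marginal of the optical channel
    pyz = pxyz.sum(axis=0)                # joint marginal of the two LiDAR channels
    nz = pxyz > 0
    denom = px[:, None, None] * pyz[None, :, :]
    return float(np.sum(pxyz[nz] * np.log(pxyz[nz] / denom[nz])))
```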

  3. MR imaging in epilepsy with use of 3D MP-RAGE

    Tanaka, Akio; Ohno, Sigeru; Sei, Tetsuro; Kanazawa, Susumu; Yasui, Koutaro; Kuroda, Masahiro; Hiraki, Yoshio; Oka, Eiji [Okayama Univ. (Japan). School of Medicine

    1996-06-01

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T{sub 1}-weighted, turbo spin-echo proton density/T{sub 2}-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and better contrast of the 3D MP-RAGE images compared with the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change in the hippocampus was higher on T{sub 2}-weighted images. In addition, demonstration of the cortical tubers of tuberous sclerosis in neurocutaneous syndrome was superior on T{sub 2}-weighted images compared with 3D MP-RAGE images. (K.H.)

  4. MR imaging in epilepsy with use of 3D MP-RAGE

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T1-weighted, turbo spin-echo proton density/T2-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and better contrast of the 3D MP-RAGE images compared with the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change in the hippocampus was higher on T2-weighted images. In addition, demonstration of the cortical tubers of tuberous sclerosis in neurocutaneous syndrome was superior on T2-weighted images compared with 3D MP-RAGE images. (K.H.)

  5. Automatic extraction of abnormal signals from diffusion-weighted images using 3D-ACTIT

    Recent developments in medical imaging equipment have made it possible to acquire large amounts of image data and to perform detailed diagnosis. However, it is difficult for physicians to evaluate all of the image data obtained. To address this problem, computer-aided detection (CAD) and expert systems have been investigated. In these investigations, as the types of images used for diagnosis have expanded, the requirements for image processing have become more complex. We therefore propose a new method, which we call Automatic Construction of Tree-structural Image Transformation (3D-ACTIT), to perform various 3D image processing procedures automatically using instance-based learning. We have conducted research on diffusion-weighted image (DWI) data and its processing. In this report, we describe how 3D-ACTIT performs processing to extract only abnormal signal regions from 3D-DWI data. (author)

  6. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view, our workflow was: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the theoretical 3D model and the 3D model based on image matching.

  7. 3D Imaging of individual particles : a review

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging, and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear disti...

  8. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-01-01

    In recent years, impressive progress has been made in digital imaging, and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  9. 3D imaging of individual particles: a review:

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging, and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  10. 3D Imaging in Heavy-Ion Reactions

    Brown, David A.; Danielewicz, Pawel; Heffner, Mike; Soltz, Ron

    2004-01-01

    We report an extension of the source imaging method for imaging full three-dimensional sources from three-dimensional like-pair correlations. Our technique consists of expanding the correlation data and the underlying source function in spherical harmonics and inverting the resulting system of one-dimensional integral equations. With this method of attack, we can image the source function quickly, even with the extremely large data sets common in three-dimensional analyses. We apply our metho...

  11. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  12. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR.

    Yun, G S; Lee, W; Choi, M J; Lee, J; Kim, M; Leem, J; Nam, Y; Choe, G H; Park, H K; Park, H; Woo, D S; Kim, K W; Domier, C W; Luhmann, N C; Ito, N; Mase, A; Lee, S G

    2014-11-01

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B0 = 1.7∼3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE. PMID:25430233

  13. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    Yun, G. S., E-mail: gunsu@postech.ac.kr; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H. [Department of Physics, Pohang University of Science and Technology, Pohang 790-784 (Korea, Republic of); Lee, W.; Park, H. K. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Park, H.; Woo, D. S.; Kim, K. W. [School of Electrical Engineering, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California, Davis, California 95616 (United States); Ito, N. [KASTEC, Kyushu University, Kasuga-shi, Fukuoka 812-8581 (Japan); Mase, A. [Ube National College of Technology, Ube-shi, Yamaguchi 755-8555 (Japan); Lee, S. G. [National Fusion Research Institute, Daejeon 305-333 (Korea, Republic of)

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B{sub 0} = 1.7∼3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE.

  14. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated
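
    The ordered-subsets EM step that follows the FORE rebinning can be illustrated with a toy example. The sketch below is a generic OSEM update for Poisson data with a dense random system matrix; it does not reproduce the ASPIRE library, the factorized detector-blur model, or the MiCES geometry, and all names and sizes are illustrative.

```python
import numpy as np

def osem(y, A, n_subsets=4, n_iter=5):
    """Toy ordered-subsets EM for y ~ Poisson(A @ x).

    y : measured counts, shape (n_meas,)
    A : nonnegative system matrix, shape (n_meas, n_pix)
    Projections are split into interleaved subsets; each sub-iteration
    applies the multiplicative EM update using only that subset's rows.
    """
    n_meas, n_pix = A.shape
    x = np.ones(n_pix)
    subsets = [np.arange(s, n_meas, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            sens = As.sum(axis=0)                   # subset sensitivity image
            ratio = ys / np.maximum(As @ x, 1e-12)  # measured / estimated projections
            x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy usage: recover a random activity vector from Poisson projections
rng = np.random.default_rng(1)
A = rng.random((200, 64))
x_true = rng.random(64)
scale = 50.0
y = rng.poisson(scale * (A @ x_true)).astype(float)
x_hat = osem(y, scale * A)
print(np.round(x_hat[:8], 2), np.round(x_true[:8], 2))
```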

  15. Efficient RPG detection in noisy 3D image data

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of Ambush weapons such as rocket propelled grenades (RPGs) from range data which might be derived from multiple camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform real time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  16. 3D Wavelet-based Fusion Techniques for Biomedical Imaging

    Rubio Guivernau, José Luis

    2012-01-01

    Nowadays, three-dimensional image acquisition techniques are common in many areas, but their relevance is particularly notable in the field of biomedical imaging, which encompasses a wide range of techniques such as confocal microscopy, two-photon microscopy, light-sheet fluorescence microscopy, nuclear magnetic resonance, positron emission tomography, optical coherence tomography, 3D ultrasound and many more. A common denom...

  17. Improvement of integral 3D image quality by compensating for lens position errors

    Okui, Makoto; Arai, Jun; Kobayashi, Masaki; Okano, Fumio

    2004-05-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.

  18. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of multi-spectral image and two bands of hyperspectral image to produce fused image with the same spatial resolution as source multi-spectral image and the same spectral resolution as source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volume, the new fusion approach using 3-D wavelet based method is proposed. This approach is composed of four major procedures: Spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration and 3-D inverse wavelet transform. Especially, a novel method, Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in spectral domain by utilizing the property of ratio image. And a new fusion rule, Average and Substitution (A&S) rule, is employed as the fusion rule to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using 3-D wavelet transform can utilize both spatial and spectral characteristics of source images more adequately and produce fused image with higher quality and fewer artifacts than fusion approach using 2-D wavelet transform. It is also revealed that RIBSR method is capable of interpolating the missing data more effectively and correctly, and A&S rule can integrate coefficients of source images in 3-D wavelet domain to preserve both spatial and spectral features of source images more properly.
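
    A generic wavelet-domain fusion of two co-registered 3D volumes can be sketched with PyWavelets as below. Averaging the approximation band loosely mirrors the "Average" part of the A&S rule, while the detail-selection rule used here (keep the larger-magnitude coefficient) is a common generic choice rather than the paper's substitution step; the RIBSR spectral resampling is not reproduced, and all names are illustrative.

```python
import numpy as np
import pywt

def fuse_3d_wavelet(vol_a, vol_b, wavelet="haar", level=1):
    """Fuse two co-registered 3D volumes in the 3D wavelet domain.

    Approximation coefficients are averaged; detail coefficients are taken
    from whichever source has the larger magnitude (a generic fusion rule).
    """
    ca = pywt.wavedecn(vol_a, wavelet, level=level)
    cb = pywt.wavedecn(vol_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # approximation band
    for da, db in zip(ca[1:], cb[1:]):                   # per-level detail dicts
        fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                      for k in da})
    return pywt.waverecn(fused, wavelet)

# toy usage on two random volumes of matching shape
rng = np.random.default_rng(2)
a, b = rng.random((16, 16, 16)), rng.random((16, 16, 16))
print(fuse_3d_wavelet(a, b).shape)
```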

  19. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two- dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so as to...... the same received set of two dimensional echoes form part of the imaging system...

  20. 3D image reconstruction of fiber systems using electron tomography

    Over the past several years, electron microscopists and materials researchers have shown increased interest in electron tomography (reconstruction of three-dimensional information from a tilt series of bright-field images obtained in a transmission electron microscope (TEM)). In this research, electron tomography has been used to reconstruct a three-dimensional image of fiber structures from secondary electron images in a scanning electron microscope (SEM). The technique is used to examine the structure of a fiber system before and after deformation. A test sample of steel wool was tilted around a single axis from −10° to 60° in one-degree steps, with images taken at every degree; three-dimensional images were reconstructed for the specimen of fine steel fibers. This method is capable of reconstructing the three-dimensional morphology of this type of lineal structure and of obtaining features such as tortuosity, contact points, and linear density that are of importance in defining the mechanical properties of these materials. - Highlights: • The electron tomography technique has been adapted to the SEM for analysis of linear structures. • Images are obtained by secondary electron imaging through a given depth of field, making them analogous to projected images. • Quantitative descriptions of the microstructure can be obtained, including tortuosity and contact points per volume

  1. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    T. T. Truong

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform has been introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.

  2. Analytic 3D image reconstruction using all detected events

    We present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint allows the incorporation of all detected events, regardless of orientation, which improves the statistical quality of the final reconstructed image

  3. Towards the 3D-Imaging of Sources

    Danielewicz, P; Heffner, M; Pratt, S; Soltz, R A

    2004-01-01

    Geometric details of a nuclear reaction zone, at the time of particle emission, can be restored from low relative-velocity particle-correlations, following imaging. Some of the source details get erased and are a potential cause of problems in the imaging, in the form of instabilities. These can be coped with by following the method of discretized optimization for the restored sources. So far it has been possible to produce 1-dimensional emission source images, corresponding to the reactions averaged over all possible spatial directions. Currently, efforts are in progress to restore angular details.

  4. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  5. Real-time auto-stereoscopic visualization of 3D medical images

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been realized as well.

  6. 2D and 3D micro-XRF based on polycapillary optics at XLab Frascati

    Polese, C.; Cappuccio, G.; Dabagov, S. B.; Hampai, D.; Liedl, A.; Pace, E.

    2015-08-01

    XRF imaging spectrometry is a powerful tool for materials characterization. A high spatial resolution is often required in order to appreciate very tiny details of the studied object. With respect to simple pinholes, polycapillary optics allows much more intense fluxes to be achieved. This is fundamental for detecting trace elements and for strongly reducing the global acquisition time, which is currently, in addition to radioprotection issues, among the main factors affecting the competitiveness of XRF imaging with respect to other faster imaging techniques such as multispectral imaging. Unlike other well-known X-ray optics, principally employed with highly brilliant radiation sources such as synchrotron facilities, polycapillary optics (polyCO) can also be coupled efficiently with conventional X-ray tubes. All these aspects make them the most suitable choice for realizing portable, safe and high-performing μXRF spectrometers. In this work, preliminary results achieved with a novel 2D and 3D XRF facility, called Rainbow X-Ray (RXR), are reported, with particular attention to the spatial resolution achieved. RXR is based on the confocal arrangement of three polycapillary lenses, one focusing the primary beam and the other two capturing the fluorescence signal. The detection system is split into two lens-detector pairs in order to cover a wider energy range. The entire device is a user-friendly laboratory facility and, though it allows measurements on medium-size objects, its dimensions do not preclude it from being transported for in situ analysis on request, thanks also to a properly shielded cabinet.

  7. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Armando Viviano Razionale

    2013-02-01

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured-light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  8. Building Extraction from DSM Acquired by Airborne 3D Image

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regulation are studied in depth in this paper to extract buildings from DSM data. Building segmentation is the first step in extracting buildings, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract the contour of buildings effectively. A sub-image (such as 50 × 50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, then the threshold of the sub-image is selected using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level and the ratio mean square of the sub-image. The edge points of the building are grouped according to the azimuth of neighboring points, and then the optimal azimuth of the points that belong to the same group can be calculated using line interpolation.
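
    The iterative threshold selection mentioned for each sub-image can be sketched with a standard isodata-style loop, as below. The paper's adaptive scheme additionally conditions the per-pixel decision on the sub-image's average gray level and ratio mean square, which this minimal sketch does not reproduce; the function name and the toy data are illustrative.

```python
import numpy as np

def iterative_threshold(sub_image, tol=0.5, max_iter=100):
    """Isodata-style threshold: start at the mean, then repeatedly set the
    threshold to the midpoint of the means of the two classes it induces."""
    t = float(sub_image.mean())
    for _ in range(max_iter):
        low, high = sub_image[sub_image <= t], sub_image[sub_image > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# toy usage: bimodal 50x50 sub-image (background ~50, building roof ~180)
rng = np.random.default_rng(6)
sub = np.where(rng.random((50, 50)) < 0.7,
               rng.normal(50, 10, (50, 50)), rng.normal(180, 10, (50, 50)))
print(round(iterative_threshold(sub), 1))   # lands between the two modes
```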

  9. Online reconstruction of 3D magnetic particle imaging data.

    Knopp, T; Hofmann, M

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
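
    The block-averaging idea can be sketched in a few lines: average the raw-data frames within a block, then run a regularized reconstruction on the averaged measurement. The system matrix and the Tikhonov-regularized solver below are placeholders for illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_block(frames, S, lam=1e-2):
    """Average a block of raw-data frames, then solve a Tikhonov-regularized
    least-squares problem  min_c ||S c - u||^2 + lam ||c||^2  for the
    particle concentration c. S stands in for a measured system matrix.
    """
    u = np.mean(frames, axis=0)                       # block-averaged measurement
    lhs = S.conj().T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(lhs, S.conj().T @ u)

# toy usage: more frames per block -> better SNR at lower temporal resolution
rng = np.random.default_rng(3)
S = rng.standard_normal((300, 40))
c_true = rng.random(40)
frames = [S @ c_true + 0.5 * rng.standard_normal(300) for _ in range(8)]
print(np.linalg.norm(reconstruct_block(frames[:2], S) - c_true),
      np.linalg.norm(reconstruct_block(frames, S) - c_true))
```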

  10. Online reconstruction of 3D magnetic particle imaging data

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  11. Computational 3D and reflectivity imaging with high photon efficiency

    Shin, Dongeek; Kirmani, Ahmed; Shapiro, Jeffrey H.; Goyal, Vivek K

    2014-01-01

    Capturing depth and reflectivity images at low light levels from active illumination of a scene has wide-ranging applications. Conventionally, even with single-photon detectors, hundreds of photon detections are needed at each pixel to mitigate Poisson noise. We introduce a robust method for estimating depth and reflectivity using on the order of 1 detected photon per pixel averaged over the scene. Our computational imager combines physically accurate single-photon counting statistics with ex...

  12. Critical Comparison of 3-d Imaging Approaches for NGST

    Bennett, Charles L.

    1999-01-01

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; b...

  13. Improved 3D cellular imaging by multispectral focus assessment

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, there is only one plane in the field of view that is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well-established that the microscope's point spread function (PSF) can be used for blur quantitation, for the restoration of real images. However, this is an ill-posed problem, with no unique solution and with high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the sharpness degree of each pixel, in order to identify blurred areas not to be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
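
    A per-pixel gradient map used as a focus measure can be sketched as below: compute the gradient magnitude in each plane of a focal stack and keep, for every pixel, the plane where the local gradient is strongest. This mirrors the idea of a sharpness map but is not the authors' implementation; the function names and the winner-take-all compositing rule are illustrative.

```python
import numpy as np

def sharpness_map(img):
    """Per-pixel gradient magnitude as a simple focus measure."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def best_focus_composite(stack):
    """For a focal stack (n_planes, H, W), keep each pixel from the plane
    where its local gradient is strongest (a crude all-in-focus composite)."""
    sharp = np.stack([sharpness_map(p) for p in stack])   # (n_planes, H, W)
    best = np.argmax(sharp, axis=0)                        # winning plane per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0], best

# toy usage
rng = np.random.default_rng(4)
stack = rng.random((5, 64, 64))
composite, index_map = best_focus_composite(stack)
print(composite.shape, index_map.min(), index_map.max())
```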

  14. 3D Surface Imaging of the Human Female Torso in Upright to Supine Positions

    Reece, Gregory P.; Merchant, Fatima; Andon, Johnny; Khatam, Hamed; Ravi-Chandar, K.; Weston, June; Fingeret, Michelle C.; Lane, Chris; Duncan, Kelly; Markey, Mia K.

    2015-01-01

    Three-dimensional (3D) surface imaging of breasts is usually done with the patient in an upright position, which does not permit comparison of changes in breast morphology with changes in position of the torso. In theory, these limitations may be eliminated if the 3D camera system could remain fixed relative to the woman’s torso as she is tilted from 0 to 90 degrees. We mounted a 3dMDtorso imaging system onto a bariatric tilt table to image breasts at different tilt angles. The images were va...

  15. First images and orientation of internal waves from a 3-D seismic oceanography data set

    T. M. Blacic

    2009-10-01

    We present 3-D images of ocean finestructure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflectors throughout the upper ~800 m as well as a few weaker but still distinct reflectors as deep as ~1100 m. Two bright reflections are traced across the 225-m-wide swath to produce reflector surface images that show the 3-D structure of internal waves. We show that the orientation of internal wave crests can be obtained by calculating the orientations of contours of reflector relief. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic finestructure in 3-D and shows that, beyond simply providing a way to see what oceanic finestructure looks like, quantitative information such as the spatial orientation of features like internal waves and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal-to-noise ratio and spatial resolution of our images, resulting in increased options for analysis and interpretation.

  16. First images and orientation of fine structure from a 3-D seismic oceanography data set

    T. M. Blacic

    2010-04-01

    We present 3-D images of ocean fine structure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflections throughout the upper ~800 m as well as a few weaker but still distinct reflections as deep as ~1100 m. We interpret the reflections to be caused by reversible fine structure from internal wave strains. Two bright reflections are traced across the 225-m-wide swath to produce reflection surface images that illustrate the 3-D nature of ocean fine structure. We show that the orientation of linear features in a reflection can be obtained by calculating the orientations of contours of reflection relief, or more robustly, by fitting a sinusoidal surface to the reflection. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic fine structure in 3-D and shows that, beyond simply providing a way to visualize oceanic fine structure, quantitative information such as the spatial orientation of features like fronts and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal-to-noise ratio and spatial resolution of our images, resulting in increased options for analysis and interpretation.

  17. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-01-01

    A bone tissue phantom prototype that allows testing of optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed using the 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. The absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters r...

  18. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    Calafiore, Giuseppe; Koshelev, Alexander; Allen, Frances I.; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-09-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three-dimensional structure achieved by direct nanoimprint lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable a high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted and optically characterized. Scanning electron microscopy and optical measurements confirmed the good lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enable advancements in areas such as integrated optics and sensing, achieving enhanced portability and versatility of fiber optic components.

  19. Radar Imaging of Spheres in 3D using MUSIC

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
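
    A minimal narrowband MUSIC imaging functional can be sketched as below: take the SVD of the multistatic response matrix, keep the trailing singular vectors as a noise subspace, and evaluate the inverse projection of a free-space steering vector over a grid of candidate points. Ground reflections, the beam-pattern normalization and the 1% noise threshold discussed in the report are not included; the geometry, wavelength and function names are illustrative assumptions.

```python
import numpy as np

def music_image(K, elements, grid, k, n_signal):
    """MUSIC pseudo-spectrum over candidate scatterer positions.

    K        : multistatic response matrix, shape (N, N)
    elements : array element positions, shape (N, 3)
    grid     : candidate positions, shape (M, 3)
    k        : wavenumber 2*pi/wavelength
    n_signal : number of singular vectors kept as the signal subspace
    """
    u, _, _ = np.linalg.svd(K)
    noise = u[:, n_signal:]                        # noise-subspace basis
    image = np.empty(len(grid))
    for i, r in enumerate(grid):
        d = np.linalg.norm(elements - r, axis=1)
        g = np.exp(1j * k * d) / d                 # free-space steering vector
        g = g / np.linalg.norm(g)
        image[i] = 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-15)
    return image

# toy usage: a single point scatterer seen by a 16-element linear array
k = 2 * np.pi / 0.1                                # 10 cm wavelength
elements = np.stack([np.linspace(-1, 1, 16), np.zeros(16), np.zeros(16)], axis=1)
target = np.array([0.3, 0.0, 2.0])
d_t = np.linalg.norm(elements - target, axis=1)
g_t = np.exp(1j * k * d_t) / d_t
K = np.outer(g_t, g_t)                             # rank-1 Born response of one scatterer
grid = np.stack([np.linspace(-1, 1, 41), np.zeros(41), np.full(41, 2.0)], axis=1)
image = music_image(K, elements, grid, k, n_signal=1)
print(grid[np.argmax(image), 0])                   # peaks near x = 0.3
```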

  20. Multithreaded real-time 3D image processing software architecture and implementation

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence-point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
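
    The keypoint block matching and histogram-based disparity range can be sketched on the CPU as below (the player itself runs this on the GPU with CUDA). The SAD block matcher, the percentile-based range estimate and the midpoint convergence choice are simplifications for illustration, not the production pipeline; all names are ours.

```python
import numpy as np

def match_disparity(left, right, y, x, block=7, max_disp=32):
    """Return the horizontal disparity that best matches a block around
    (y, x) in the left image against the right image (SAD block matching)."""
    h = block // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - h - d < 0:
            break
        cand = right[y - h:y + h + 1, x - h - d:x + h + 1 - d]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def convergence_shift(left, right, keypoints, **kw):
    """Shift (in pixels) that places the middle of the disparity range at
    zero disparity -- a simple stand-in for the convergence-point choice."""
    disps = np.array([match_disparity(left, right, y, x, **kw) for y, x in keypoints])
    lo, hi = np.percentile(disps, [5, 95])     # robust disparity range
    return int(round((lo + hi) / 2))

# toy usage: right image is the left image shifted by 6 pixels
rng = np.random.default_rng(5)
left = rng.random((120, 160))
right = np.roll(left, -6, axis=1)
keypoints = [(y, x) for y in range(20, 100, 20) for x in range(60, 140, 20)]
print(convergence_shift(left, right, keypoints))   # ~6
```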

  1. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  2. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is accessible to public users and convenient for reaching narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, which is a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models will be represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To compute efficient 3D models, post-processing techniques are required for the final results, e.g. noise reduction, surface simplification and reconstruction. The reconstructed 3D models can be provided for public access via websites, DVDs or printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over their lifetime, natural disasters, etc.

  3. 3-D capacitance density imaging of fluidized bed

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  4. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    F. Alidoost

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work are: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides enough detail of the building, based on visual assessment.
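
    The "Delaunay 2.5D triangulation" step, i.e. triangulating the planimetric (XY) coordinates of the dense point cloud and lifting the triangles with their Z values, can be sketched with SciPy as below. The SGM matching, mesh refinement and texturing stages are not shown; the synthetic facade-like cloud and function name are purely illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_25d(points_xyz):
    """Triangulate a (N, 3) point cloud in the XY plane (2.5D Delaunay).

    Returns the vertex array and triangle index array; each triangle keeps
    the original Z of its vertices, which is reasonable for roughly
    single-valued surfaces such as a facade seen from one side or terrain
    seen from above.
    """
    tri = Delaunay(points_xyz[:, :2])      # Delaunay on planimetric coords only
    return points_xyz, tri.simplices       # (N, 3) vertices, (M, 3) faces

# toy usage on a small synthetic facade-like grid with noise
rng = np.random.default_rng(8)
xx, yy = np.meshgrid(np.linspace(0, 4, 20), np.linspace(0, 3, 15))
zz = 0.1 * np.sin(3 * xx) + 0.02 * rng.standard_normal(xx.shape)
cloud = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])
vertices, faces = mesh_25d(cloud)
print(vertices.shape, faces.shape)
```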

  5. Detection of tibial condylar fractures using 3D imaging with a mobile image amplifier (Siemens ISO-C-3D): Comparison with plain films and spiral CT

    Purpose: To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Method: Human knee specimens (n=22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190° orbital rotation, computing a 119 mm data cube. From these 3D data sets, MPR images were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Mueller AO classification. To confirm the results, the specimens were finally surgically dissected. Results: 97% of the tibial condylar fractures were easily seen and correctly classified according to the Mueller AO classification on the MPR images from the ISO-C-3D. There is no significant difference between the ISO-C-3D and CT in the detection and correct classification of fractures, but the ISO-C-3D is significantly better than CR. (orig.)

  6. Space Radar Image of Kilauea, Hawaii in 3-D

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  7. 3D microscopic imaging and evaluation of tubular tissue architecture

    Janáček, Jiří; Čapek, Martin; Michálek, Jan; Karen, Petr; Kubínová, Lucie

    2014-01-01

    Vol. 63, Suppl. 1 (2014), S49-S55. ISSN 0862-8408. R&D Projects: GA MŠk(CZ) LH13028; GA ČR(CZ) GA13-12412S. Institutional support: RVO:67985823. Keywords: confocal microscopy; capillaries; brain; skeletal muscle; image analysis. Subject RIV: EA - Cell Biology. Impact factor: 1.293, year: 2014

  8. Task-specific evaluation of 3D image interpolation techniques

    Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

    1998-06-01

    Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature; however, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion-detection experiments, derived from ten patient studies, two subsampling techniques plus the original data, and three interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of three interpolation methods on MS lesion quantification is compared.
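
    The kind of task-specific evaluation described here can be illustrated with a toy experiment: subsample the slices of a volume, interpolate them back, and compare a thresholded "lesion" volume before and after. Simple linear grey-level interpolation stands in for the three methods compared in the paper, and the threshold-based volume measure below is only a stand-in for the MS lesion quantification task; all names and sizes are illustrative.

```python
import numpy as np

def linear_slice_interpolation(volume):
    """Double the slice count by inserting the average of adjacent slices
    (simple grey-level linear interpolation along the slice axis)."""
    out = []
    for a, b in zip(volume[:-1], volume[1:]):
        out.extend([a, 0.5 * (a + b)])
    out.append(volume[-1])
    return np.stack(out)

def lesion_volume(volume, threshold):
    """Voxel count above an intensity threshold, as a crude lesion volume."""
    return int((volume > threshold).sum())

# toy usage: subsample every other slice, interpolate back, compare volumes
rng = np.random.default_rng(7)
full = rng.random((9, 32, 32))
interp = linear_slice_interpolation(full[::2])
print(lesion_volume(full, 0.8), lesion_volume(interp, 0.8))
```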

  9. Segmentation of Carotid Arteries from 3D and 4D Ultrasound Images

    Mattsson, Per; Eriksson, Andreas

    2002-01-01

    This thesis presents a 3D semi-automatic segmentation technique for extracting the lumen surface of the Carotid arteries including the bifurcation from 3D and 4D ultrasound examinations. Ultrasound images are inherently noisy. Therefore, to aid the inspection of the acquired data an adaptive edge preserving filtering technique is used to reduce the general high noise level. The segmentation process starts with edge detection with a recursive and separable 3D Monga-Deriche-Canny operator. To r...

  10. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentatio...

  11. Understanding immersivity: Image generation and transformation processes in 3D immersive environments

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard & Metzler (1971) mental rotation task across the following three types of visual presentation enviro...

  12. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    S. P. Singh; K. Jain; V. R. Mandla

    2014-01-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to urban areas. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers used sketch-based modeling, the second method is procedural grammar-based m...

  13. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    Lee, Sangyun; Kim, Kyoohyun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents the morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified inclu...

  14. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to urban areas. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers used sketch-based modeling, the second method is procedural grammar-based modeling and the third approach is close-range photogrammetry-based modeling. The literature study shows that, to date, there is no complete solution available to create a complete 3D city model using images; these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close-range photogrammetric principles and computer vision techniques, a 3D model of the area is created. In the third section, this 3D model is exported for adding to and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious. The accuracy of this model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  15. 3D reconstruction and characterization of laser induced craters by in situ optical microscopy

    Casal, A.; Cerrato, R.; Mateo, M. P.; Nicolas, G.

    2016-06-01

    A low-cost optical microscope was developed and coupled to an irradiation system in order to study the effects induced on a material during a multipulse regime by in situ visual inspection of the surface, in particular of the spot generated after different numbers of pulses. In the case of laser ablation, a 3D reconstruction of the crater was made from the images of the sample surface taken during the irradiation process, and the corresponding profiles of ablated material were extracted. The implementation of this homemade optical device gives an added value to the irradiation system, providing information about the morphology evolution of the irradiated area when successive pulses are applied. In particular, the determination of ablation rates in real time can be especially useful for better understanding and control of the ablation process in applications where removal of material is involved, such as laser cleaning and the in-depth characterization of multilayered samples and diffusion processes. The developed microscope was validated by comparison with a commercial confocal microscope configured for the characterization of materials; similar results for crater depth and diameter were obtained with both systems.

  16. Space Radar Image of Long Valley, California - 3D view

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and, which then, are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory

  17. Space Radar Image of Long Valley, California in 3-D

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13,1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  18. Space Radar Image of Missoula, Montana in 3-D

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA

  19. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  20. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images taken by the same camera while the object is repeatedly rotated by a small angle. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene-marked cells. In order to visualize the structure of the tumor in 3D, we also co-register the BLI-reconstructed crude structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  1. 3D printing of tissue-simulating phantoms as a traceable standard for biomedical optical measurement

    Dong, Erbao; Wang, Minjie; Shen, Shuwei; Han, Yilin; Wu, Qiang; Xu, Ronald

    2016-01-01

    Optical phantoms are commonly used to validate and calibrate biomedical optical devices in order to ensure accurate measurement of optical properties in biological tissue. However, commonly used optical phantoms are based on homogenous materials that reflect neither optical properties nor multi-layer heterogeneities of biological tissue. Using these phantoms for optical calibration may result in significant bias in biological measurement. We propose to characterize and fabricate tissue simulating phantoms that simulate not only the multi-layer heterogeneities but also optical properties of biological tissue. The tissue characterization module detects tissue structural and functional properties in vivo. The phantom printing module generates 3D tissue structures at different scales by layer-by-layer deposition of phantom materials with different optical properties. The ultimate goal is to fabricate multi-layer tissue simulating phantoms as a traceable standard for optimal calibration of biomedical optical spectral devices.

  2. Analysis of 3D confocal images of capillaries

    Janáček, Jiří; Saxl, Ivan; Mao, X. W.; Eržen, I.; Kubínová, Lucie

    Saint-Etienne : International society for stereology, 2007, s. 12-15. [International congress for stereology /12./. Saint-Etienne (FR), 03.09.2007-07.09.2007] R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509; CEZ:AV0Z10190503 Keywords : capillaries * confocal microscopy * image analysis Subject RIV: EA - Cell Biology

  3. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    Grzegorczyk, Tomasz M.; Meaney, Paul M.; Kaufman, Peter A.; DiFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2012-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to ...

  4. Study of bone implants based on 3D images

    Grau, S; Ayala Vallespí, M. Dolors; Tost Pardell, Daniela; Miño, N.; Muñoz, F.; González, A

    2005-01-01

    New medical input technologies together with computer graphics modelling and visualization software have opened a new track for the biomedical sciences: the so-called in-silico experimentation, in which analysis and measurements are done on computer graphics models constructed on the basis of medical images, complementing the traditional in-vivo and in-vitro experimental methods. In this paper, we describe an in-silico experiment to evaluate bio-implants f...

  5. Evaluation of a new method for stenosis quantification from 3D x-ray angiography images

    Betting, Fabienne; Moris, Gilles; Knoplioch, Jerome; Trousset, Yves L.; Sureda, Francisco; Launay, Laurent

    2001-05-01

    A new method for stenosis quantification from 3D X-ray angiography images has been evaluated on both phantom and clinical data. On phantoms, for the parts larger than or equal to 3 mm, the standard deviation of the measurement error was always found to be less than or equal to 0.4 mm, and the maximum measurement error less than 0.17 mm. No clear relationship has been observed between the performance of the quantification method and the acquisition FoV. On clinical data, the 3D quantification method proved to be more robust to vessel bifurcations than its 2D equivalent. On a total of 15 clinical cases, the differences between 2D and 3D quantification were always less than 0.7 mm. The conclusion is that stenosis quantification from 3D X-ray angiography images is an attractive alternative to quantification from 2D X-ray images.

  6. Optical Measurement of Micromechanics and Structure in a 3D Fibrin Extracellular Matrix

    Kotlarchyk, Maxwell Aaron

    2011-07-01

    In recent years, a significant number of studies have focused on linking substrate mechanics to cell function using standard methodologies to characterize the bulk properties of the hydrogel substrates. However, current understanding of the correlations between the microstructural mechanical properties of hydrogels and cell function in 3D is poor, in part because of a lack of appropriate techniques. Methods for tuning extracellular matrix (ECM) mechanics in 3D cell culture that rely on increasing the concentration of either protein or cross-linking molecules fail to control important parameters such as pore size, ligand density, and molecular diffusivity. Alternatively, ECM stiffness can be modulated independently from protein concentration by mechanically loading the ECM. We have developed an optical tweezers-based microrheology system to investigate the fundamental role of ECM mechanical properties in determining cellular behavior. Further, this thesis outlines the development of a novel device for generating stiffness gradients in naturally derived ECMs, where stiffness is tuned by inducing strain, while local structure and mechanical properties are directly determined by laser tweezers-based passive and active microrheology respectively. Hydrogel substrates polymerized within 35 mm diameter Petri dishes are strained non-uniformly by the precise rotation of an embedded cylindrical post, and exhibit a position-dependent stiffness with little to no modulation of local mesh geometry. Here we present microrheological studies in the context of fibrin hydrogels. Microrheology and confocal imaging were used to directly measure local changes in micromechanics and structure respectively in unstrained hydrogels of increasing fibrinogen concentration, as well as in our strain gradient device, in which the concentration of fibrinogen is held constant. Orbital particle tracking, and raster image correlation analysis are used to quantify changes in fibrin mechanics on the

  7. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
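
    The low attenuation area (LAA) extraction described above is, at its core, a thresholding of lung voxels in Hounsfield units. A minimal sketch of that step is given below; the -950 HU cut-off, the function name and the assumption that a lung mask is already available are illustrative choices, not values taken from the paper.

    ```python
    import numpy as np

    def laa_percentage(ct_volume_hu, lung_mask, threshold_hu=-950):
        """Estimate the low attenuation area percentage (LAA%) of a lung CT volume.

        ct_volume_hu : 3D array of CT values in Hounsfield units.
        lung_mask    : boolean 3D array marking lung voxels (from a prior segmentation).
        threshold_hu : voxels below this value count as emphysematous candidates
                       (-950 HU is a commonly used cut-off, not a value from the paper).
        """
        lung_voxels = ct_volume_hu[lung_mask]
        laa_voxels = np.count_nonzero(lung_voxels < threshold_hu)
        return 100.0 * laa_voxels / lung_voxels.size
    ```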

  8. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  9. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  10. A framework for human spine imaging using a freehand 3D ultrasound system

    Purnama, Ketut E.; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; Ooijen, van Peter M.A.; Lubbers, Jaap; Burgerhof, Johannes G.M.; Sardjono, Tri A.; Verkerke, Gijsbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  11. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye; Wenping Yu

    2012-01-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is embedded in a host image. The encrypted secret image and the host image are transformed by the wavelet transform and then are merged in the frequency d...
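
    The permutation stage described above can be illustrated with a short sketch. The paper's 3D sawtooth map is not reproduced here; a logistic map is used as a stand-in chaotic generator, and the function names and key value are purely illustrative.

    ```python
    import numpy as np

    def logistic_sequence(n, x0=0.3779, r=3.99):
        """Chaotic sequence from the logistic map, used here as a stand-in for the
        paper's 3D sawtooth map (whose exact form is not reproduced)."""
        values = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            values[i] = x
        return values

    def permute_pixels(image, key=0.3779):
        """Scramble pixel positions using an ordering derived from the chaotic orbit."""
        flat = image.ravel()
        order = np.argsort(logistic_sequence(flat.size, x0=key))
        return flat[order].reshape(image.shape), order

    def unpermute_pixels(scrambled, order):
        """Invert the scrambling given the same permutation order."""
        flat = np.empty(scrambled.size, dtype=scrambled.dtype)
        flat[order] = scrambled.ravel()
        return flat.reshape(scrambled.shape)
    ```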

  12. A New Approach for 3D Range Image Segmentation using Gradient Method

    Dina A. Hafiz

    2011-01-01

    Problem statement: Segmentation of 3D range images is widely used in computer vision as an essential pre-processing step before methods of high-level vision can be applied. Segmentation aims to study and recognize features of range images such as 3D edges, connected surfaces and smooth regions. Approach: This study presents new improvements in the segmentation of terrestrial 3D range images based on an edge detection technique. The main idea is to apply a gradient edge detector in three different directions of the 3D range image. This 3D gradient detector is a generalization of the classical Sobel operator used with 2D images, and is based on the differences of normal vectors or geometric locations in the coordinate directions. The proposed algorithm uses a 3D-grid structure method to handle large amounts of unordered point sets and determine neighborhood points. It segments the 3D range images directly using gradient edge detectors, without any further computations such as mesh generation. Our algorithm focuses on extracting important linear structures such as doors, stairs and windows from terrestrial 3D range images; these structures are common indoors and outdoors in many environments. Results: Experimental results showed that the proposed algorithm provides a new approach to 3D range image segmentation with the characteristics of low computational complexity and low sensitivity to noise. The algorithm is validated using seven artificially generated datasets and two real world datasets. Conclusion/Recommendations: Experimental results showed that different segmentation accuracies are achieved by using a higher grid resolution and an adaptive threshold.
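
    The gradient-based edge detection at the heart of this approach can be sketched as follows. This minimal example applies the classic Sobel operator to a gridded range image and thresholds the gradient magnitude; it does not reproduce the paper's 3D-grid neighbourhood handling or adaptive threshold, and the function name and parameters are illustrative.

    ```python
    import numpy as np
    from scipy import ndimage

    def range_edge_map(range_grid, threshold):
        """Edge map of a gridded range image via a Sobel-style gradient.

        Applies the classic 2D Sobel operator to the range values and thresholds
        the gradient magnitude (a simplification of the paper's 3D generalisation).
        """
        gx = ndimage.sobel(range_grid, axis=0, mode='nearest')
        gy = ndimage.sobel(range_grid, axis=1, mode='nearest')
        magnitude = np.hypot(gx, gy)
        return magnitude > threshold
    ```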

  13. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. (paper)
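
    The mutual information score used to compare EIT reconstructions with the lung-segmented MR image can be computed from a joint histogram of the two co-registered images. A minimal sketch is given below; the bin count and function name are illustrative, and the images are assumed to be already co-registered and resampled onto the same grid.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information of two co-registered images from their joint histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
        nonzero = pxy > 0
        return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
    ```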

  14. HERMES Results on the 3D Imaging of the Nucleon

    Pappalardo, L. L.

    2016-07-01

    In the last decades, a formalism of transverse momentum dependent parton distribution functions (TMDs) and of generalised parton distributions (GPDs) has been developed in the context of non-perturbative QCD, opening the way for a tomographic imaging of the nucleon structure. TMDs and GPDs provide complementary three-dimensional descriptions of the nucleon structure in terms of parton densities. They thus contribute, with different approaches, to the understanding of the full phase-space distribution of partons. A selection of HERMES results sensitive to TMDs is presented.

  15. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of directionally solidified ternary eutectics is visualized in three dimensions, using a high-resolution X-ray tomography technique at the ESRF. The microstructure characterization is conducted at a photon energy that allows the three phases Ag2Al, Al2Cu, and the α-aluminum solid solution to be clearly discriminated. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  16. MR Imaging of the Internal Auditory Canal and Inner Ear at 3T: Comparison between 3D Driven Equilibrium and 3D Balanced Fast Field Echo Sequences

    Byun, Jun Soo; Kim, Hyung Jin; Yim, Yoo Jeong; Kim, Sung Tae; Jeon, Pyoung; Kim, Keon Ha [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kim, Sam Soo; Jeon, Yong Hwan; Lee, Ji Won [Kangwon National University College of Medicine, Chuncheon (Korea, Republic of)

    2008-06-15

    To compare the use of 3D driven equilibrium (DRIVE) imaging with 3D balanced fast field echo (bFFE) imaging in the assessment of the anatomic structures of the internal auditory canal (IAC) and inner ear at 3 Tesla (T). Thirty ears of 15 subjects (7 men and 8 women; age range, 22-71 years; average age, 50 years) without evidence of ear problems were examined on a whole-body 3T MR scanner with both 3D DRIVE and 3D bFFE sequences by using an 8-channel sensitivity encoding (SENSE) head coil. Two neuroradiologists reviewed both MR images with particular attention to the visibility of the anatomic structures, including the four branches of the cranial nerves within the IAC and the anatomic structures of the cochlea, vestibule, and three semicircular canals. Although both techniques provided images of relatively good quality, the 3D DRIVE sequence was somewhat superior to the 3D bFFE sequence. The discrepancies were more prominent for the basal turn of the cochlea, the vestibule, and all semicircular canals, and were attributed to the greater magnetic susceptibility artifacts inherent to gradient-echo techniques such as bFFE. Because of the higher image quality and fewer susceptibility artifacts, we highly recommend 3D DRIVE imaging as the MR imaging method of choice for the IAC and inner ear.

  17. 3D dosimetry for complex stereotactic radiosurgery using a tomographic optical density scanner and BANG polymer gels

    Purpose: Radiation sensitive tissue equivalent BANG polymer gels (MGS Research, Inc., Guilford, CT) have been developed for three dimensional verification of complex radiotherapy treatment plans. This study evaluated the performance of a prototype optical density scanner in verification of a complex radiosurgery treatment plan using linear accelerator based radiosurgery and BANG polymer gel dosimeters. Materials and Methods: BANG polymer gel dosimeters were treated with stereotactic radiosurgery using 6MV photons to single isocenters and to a 3 isocenter radiosurgery plan. Appropriate controls for evaluating the linearity of dose response were irradiated using a water bath and 6MV photons. Two separate methods for imaging the radiation-induced polymerization in the gel were used. The first method, MRI imaging, used the spatial distribution of the NMR transverse relaxation rates (R2) of the water protons in the gel to create a 3D dose map. In the second, a prototype optical density scanner was used to reconstruct a 3D dose distribution from multiple planar images of the gel which were generated using a filtered back-projection algorithm and measurements of optical transmission. Results: Data obtained from MRI imaging and the images generated by the optical scanner were compared with the plan with excellent results. Very close agreement between all three data sets was demonstrated. The BANG polymer gels demonstrated an excellent linearity of response and a very large (∼20 Gray) dynamic range. Conclusion: The ability to permanently record (and interrogate at a later time) integrated 3D dose distributions will be valuable in assessing complex external beam treatment plans such as radiosurgical treatment plans as well as in commissioning and periodic checking of dynamic wedges, multileaf collimators, etc. used for conventionally fractionated conformal radiotherapy. The linearity of response and wide dynamic range are important in the evaluation of radiosurgical
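
    The optical-CT step described above reconstructs optical-density slices from transmission projections by filtered back-projection. A minimal sketch of that idea, assuming a simple parallel-beam geometry and ignoring the prototype scanner's calibration details, is shown below using scikit-image's inverse Radon transform; the function name and arguments are illustrative.

    ```python
    import numpy as np
    from skimage.transform import iradon

    def reconstruct_density_slice(transmission, i0, angles_deg):
        """Reconstruct one optical-density slice of a gel dosimeter by filtered
        back-projection (parallel-beam assumption; scanner calibration ignored).

        transmission : 2D array with one detector row per projection angle.
        i0           : reference (unattenuated) intensity.
        angles_deg   : projection angles in degrees.
        """
        sinogram = -np.log(transmission / i0)      # optical-density projections
        # iradon expects one column per projection angle
        return iradon(sinogram.T, theta=angles_deg)
    ```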

  18. Two-Photon Absorbing Molecules as Potential Materials for 3D Optical Memory

    Kazuya Ogawa

    2014-01-01

    In this review, recent advances in two-photon absorbing photochromic molecules, as potential materials for 3D optical memory, are presented. The investigations introduced in this review indicate that 3D data storage processing at the molecular level is possible. As 3D memory using two-photon absorption offers advantages over existing systems, the use of two-photon absorbing photochromic molecules is preferable. Although there are some photochromic molecules with good properties for memory, in most cases the two-photon absorption efficiency is not high, so photochromic molecules with high two-photon absorption efficiency are desired. Recently, molecules having much larger two-photon absorption cross sections, over 10,000 GM (1 GM = 10⁻⁵⁰ cm⁴ s molecule⁻¹ photon⁻¹), have been discovered and are expected to open the way to realizing two-photon absorption 3D data storage.

  19. Parallel 3-D image processing for nuclear emulsion

    The history of the nuclear plate is explained. The first nuclear plates, named pellicles, were covered with 600 μm of emulsion in Europe. In Japan, an Emulsion Cloud Chamber (ECC) using thin-emulsion (50 μm) nuclear plates was developed in 1960. Then the semi-automatic analyzer (1971) and the automatic analyzer (1980), the Track Selector (TS), which stored 16 layer images of 512 x 512 x 16 pixels in memory, were developed. Moreover, the NTS (New Track Selector), a faster analyzer, was produced in 1996 for the analysis of the results of the CHORUS experiment. Simultaneous readout of 16 layer images had already been carried out, but the UTS (Ultra Track Selector) made possible the progressive treatment of 16 layers of data and the determination of tracks at all angles. Direct detection of the tau neutrino (ντ) was studied by DONUT (FNAL E872) using the UTS and nuclear plates. The neutrino beam was produced by an 800 GeV proton beam hitting a fixed target. About 1100 neutrino reaction events were observed during six months of irradiation, and 203 events were detected; four examples are shown in this paper. The OPERA experiment by SK is explained. (S.Y.)

  20. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While traditionally ASL employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full brain perfusion images were acquired at a 3×3×5mm3 nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects was acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement in CBF quantification with 3D GRASE, 3DGP demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  1. Contributions in compression of 3D medical images and 2D images

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has so far sided with lossless compression, most applications suffer from the compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on a 3D (three-dimensional) wavelet transform and 3D dead zone lattice vector quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (computer-aided decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
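
    The coding chain described above, a 3D wavelet transform followed by dead-zone quantization of the coefficients, can be illustrated with a short sketch. The example below uses a scalar dead-zone quantizer rather than the paper's lattice vector quantizer, and the wavelet, decomposition level, step size and dead-zone width are illustrative choices.

    ```python
    import numpy as np
    import pywt

    def lossy_reconstruct(volume, step=8.0, dead_zone=1.5, wavelet='bior4.4', level=3):
        """Sketch of lossy coding of a 3D volume: 3-D wavelet transform, scalar
        dead-zone quantization of the coefficients, and reconstruction. The paper
        uses dead-zone *lattice vector* quantization (DZLVQ), not reproduced here."""
        coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # dead-zone scalar quantizer: coefficients inside the dead zone become zero
        quantized = np.where(np.abs(arr) < dead_zone * step, 0.0, np.round(arr / step))
        dequantized = quantized * step
        rec_coeffs = pywt.array_to_coeffs(dequantized, slices, output_format='wavedecn')
        return pywt.waverecn(rec_coeffs, wavelet=wavelet)
    ```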

  2. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    Calafiore, Giuseppe; Allen, Frances I; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-01-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three dimensional structure achieved by direct Nanoimprint Lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable a high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted and optically characterized. Scanning electron microscopy and optical measurements confirmed the excellent lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enabl...

  3. D3D augmented reality imaging system: proof of concept in mammography

    Douglas DB

    2016-08-01

    David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3. 1Department of Radiology, Stanford University, Palo Alto, CA; 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA; 3Department of Radiology, Fort Benning, Columbus, GA, USA. Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  4. 3-D Structure of Sunspots Using Imaging Spectroscopy

    Balasubramaniam, K. S.; Gary, G. A.; Reardon, K.

    2006-12-01

    We use the Interferometric BIdimensional Spectrometer (IBIS) of the INAF/Arcetri Astrophysical Observatory, installed at the National Solar Observatory (NSO) Dunn Solar Telescope, to understand the structure of sunspots. Using the spectral lines Fe I 6301.5 Å, Fe II 7224.4 Å, and Ca II 8542.6 Å, we examine the spectroscopic variation of sunspot penumbral and umbral structures at the heights of formation of these lines. These high resolution observations were acquired on 2004 July 30 -- 31, of active region NOAA 10654, using the high order NSO adaptive optics system. We map the spatio-temporal variation of Doppler signatures in these spectral lines, from the photosphere to the chromosphere. From a 70-minute temporal average of individual 32-second cadence Doppler observations we find that the averaged velocities decrease with height. They are about 3.5 times larger in the deeper photosphere (Fe II 7224.4 Å; height-of-formation ≈ 50 km) than in the upper photosphere (Fe I 6301.5 Å; height-of-formation ≈ 350 km). There is a remarkable coherence of Doppler signals over the height difference of 300 km. From a high-speed animation of the Doppler sequence we find evidence for what appears to be ejection of high speed gas concentrations from the edges of penumbral filaments into the surrounding granular photosphere. The Evershed flow persists a few arcseconds beyond the traditionally demarcated penumbra-granulation boundary. We present these and other results and discuss the implications of these measurements for sunspot models.

  5. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). (paper)

  6. A high-level 3D visualization API for Java and ImageJ

    Longair Mark

    2010-05-01

    Background: Current imaging methods such as magnetic resonance imaging (MRI), confocal microscopy, electron microscopy (EM) or selective plane illumination microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results: Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions: Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  7. Hands-on guide for 3D image creation for geological purposes

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
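
    The red-cyan anaglyph construction described above amounts to recombining colour channels from the two views of a stereo pair. A minimal sketch is given below; the function name is illustrative and the two views are assumed to be equally sized RGB arrays.

    ```python
    import numpy as np

    def make_anaglyph(left_rgb, right_rgb):
        """Red-cyan anaglyph from a stereo pair of equally sized RGB arrays:
        the red channel comes from the left view, green and blue from the right,
        so red-cyan glasses deliver one image to each eye."""
        anaglyph = np.empty_like(left_rgb)
        anaglyph[..., 0] = left_rgb[..., 0]       # red <- left image
        anaglyph[..., 1:] = right_rgb[..., 1:]    # green, blue <- right image
        return anaglyph
    ```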

  8. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  9. High resolution 3D dosimetry for microbeam radiation therapy using optical CT

    Optical Computed Tomography (CT) is a promising technique for dosimetry of Microbeam Radiation Therapy (MRT), providing high resolution 3D dose maps. Here different MRT irradiation geometries are visualised showing the potential of Optical CT as a tool for future MRT trials. The Peak-to-Valley dose ratio (PVDR) is calculated to be 7 at a depth of 3mm in the radiochromic dosimeter PRESAGE®. This is significantly lower than predicted values and possible reasons for this are discussed
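
    The Peak-to-Valley dose ratio (PVDR) quoted above is simply the ratio of the dose in the microbeam peaks to the dose in the valleys between them. A naive sketch of its computation from a measured dose profile is shown below; picking peak and valley samples from percentiles is an illustrative simplification, since in practice the known microbeam positions are used.

    ```python
    import numpy as np

    def peak_to_valley_ratio(dose_profile, pct=5):
        """Rough PVDR estimate from a 1D dose profile across the microbeams:
        mean of the top pct% of samples divided by the mean of the bottom pct%.
        Real analyses use the known peak and valley positions instead."""
        profile = np.asarray(dose_profile, dtype=float)
        peak = profile[profile >= np.percentile(profile, 100 - pct)].mean()
        valley = profile[profile <= np.percentile(profile, pct)].mean()
        return peak / valley
    ```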

  10. A Closed Form Solution to Segment 3D Motion Using Straight-line Optical Flow

    ZHANG Jing; SHI Fan-huai; MA Wen-juan; LIU Yun-cai

    2008-01-01

    A closed form solution to the problem of segmenting multiple 3D motion models was proposed from straight-line optical flow. It introduced the multibody line optical flow constraint (MLOFC), a polynomial equation relating motion models and line parameters. The motion models can be obtained analytically as the derivative of the MLOFC at the corresponding line measurement, without knowing the motion model associated with that line. Experiments on real and synthetic sequences are also presented.

  11. 3D micro-optical elements for generation of tightly focused vortex beams

    Balčytis Armandas; Hakobyan Davit; Gabalis Martynas; Žukauskas Albertas; Urbonas Darius; Malinauskas Mangirdas; Petruškevičius Raimondas; Brasselet Etienne; Juodkazis Saulius

    2015-01-01

    Orbital angular momentum carrying light beams are used for optical trapping and manipulation. This emerging trend provides new challenges involving device miniaturization for improved performance and enhanced functionality at the microscale. Here we discuss a new fabrication method based on combining the additive 3D structuring capability of laser photopolymerization with the subtractive sub-wavelength-resolution patterning of focused ion beam lithography to produce micro-optical elements capable ...

  12. Construction modification of data-projector for optical 3D measurement

    Pochmon, Michal; Pravdová, L.; Rössler, T.

    Ostrava: VŠB - TU, 2008 - (Fuxa, J.; Macura, P.; Halama, R.), s. 199-202 ISBN 978-80-248-1774-3. [Experimental Stress Analysis (EAN) 2008. International scientific conference /46./. Horní Bečva (CZ), 02.06.2008-05.06.2008] R&D Projects: GA MŠk(CZ) 1M06002 Institutional research plan: CEZ:AV0Z10100522 Keywords: data-projector * optical 3D measurement Subject RIV: BH - Optics, Masers, Lasers

  13. Fiber Optic 3-D Space Piezoelectric Accelerometer and its Antinoise Technology

    2001-01-01

    The mechanical structure of piezoelectric accelerometer is designed, and the operation equations on X-, Y-, and Z-axes are deduced. The test results of 3-D frequency response are given. Noise disturbances are effectively eliminated by using fiber optic transmission and synchronous detection.

  14. Automatic extraction of soft tissues from 3D MRI images of the head

    This paper presents an automatic extraction method for soft tissues from 3D MRI images of the head. A 3D region growing algorithm is used to extract soft tissues such as the cerebrum, cerebellum and brain stem. Four information sources are used to control the 3D region growing. A model of each soft tissue has been constructed in advance and provides a 3D region growing space. The head skin area, which is automatically extracted from the input image, provides an unsearchable area. Zero-crossing points are detected by using the Laplacian operator and by examining sign changes between neighborhoods; they are used as a control condition in the 3D region growing process. Gray levels of voxels are also directly used as a control condition to extract each tissue region. Experimental results on 19 samples show that the method is successful. (author)
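
    The 3D region growing controlled by gray levels and a forbidden (skin) area, as described above, can be sketched as a simple 6-connected flood fill. The example below omits the model-based search space and the zero-crossing control condition, and the function name and parameters are illustrative.

    ```python
    from collections import deque
    import numpy as np

    def region_grow_3d(volume, seed, low, high, forbidden=None):
        """6-connected 3D region growing from a seed voxel, accepting neighbours
        whose gray level lies in [low, high] and skipping an optional forbidden
        mask (e.g. an extracted skin area)."""
        grown = np.zeros(volume.shape, dtype=bool)
        grown[seed] = True
        queue = deque([seed])
        neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbours:
                nz, ny, nx = z + dz, y + dy, x + dx
                if not (0 <= nz < volume.shape[0]
                        and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2]):
                    continue
                if grown[nz, ny, nx]:
                    continue
                if forbidden is not None and forbidden[nz, ny, nx]:
                    continue
                if low <= volume[nz, ny, nx] <= high:
                    grown[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        return grown
    ```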

  15. 3D Particle image velocimetry test of inner flow in a double blade pump impeller

    Liu, Houlin; Wang, Kai; Yuan, Shouqi; Tan, Minggao; Wang, Yong; Ru, Weimin

    2012-05-01

    The double blade pump is widely used in the sewage treatment industry; however, research on the internal flow characteristics of the double blade pump with particle image velocimetry (PIV) technology is very limited at present. To reveal the inner flow characteristics in a double blade pump impeller under design and off-design conditions, the inner flows in a double blade pump impeller, whose specific speed is 111, are measured under five off-design conditions and the design condition by using 3D PIV test technology. In order to ensure the accuracy of the 3D PIV test, an external trigger synchronization system based on fiber optics and an equivalent calibration method are applied. The 3D PIV relative velocity synthesis procedure is implemented in Visual C++ 2005. The absolute velocity distribution and relative velocity distribution in the double blade pump impeller are then obtained. Test results show that a vortex exists under each condition, but the location, size and velocity of the vortex core are different. The average absolute velocity value at the impeller outlet increases at first, then decreases, and then increases again with increasing flow rate. Average relative velocity values under the 0.4, 0.8, and 1.2 design conditions are higher than under the 1.0 design condition, while under the 0.6 and 1.4 design conditions they are lower. Under low flow rate conditions, the radial components of the absolute velocities at the impeller outlet and blade inlet near the pump shaft decrease with increasing flow rate, while those of the relative velocities at the suction side near the pump shaft decrease as well. Radial components of absolute and relative velocities change only slightly under the two large flow rate conditions. The research results can be applied to guide the hydraulic optimization design of double blade pumps.

  16. Robust Adaptive Segmentation of 3D Medical Images with Level Sets

    Baillard, Caroline; Barillot, Christian; Bouthemy, Patrick

    2000-01-01

    This paper is concerned with the use of the Level Set formalism to segment anatomical structures in 3D medical images (ultrasound or magnetic resonance images). A closed 3D surface propagates towards the desired boundaries through the iterative evolution of a 4D implicit function. The major contribution of this work is the design of a robust evolution model based on adaptive parameters depending on the data. First the step size and the external propagation force factor, both usually predeterm...

  17. Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Guo, Jianya; Mei, Xi; Tang, Kun

    2012-01-01

    Background: Traditional anthropometric studies of the human face rely on manual measurements of simple features, which are labor intensive and lack full comprehensive inference. Dense surface registration of three-dimensional (3D) human facial images holds great potential for high throughput quantitative analyses of complex facial traits. However, there is a lack of automatic high-density registration methods for 3D facial images. Furthermore, current approaches to landmark recognition require fu...

  18. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao; Jing-jing Hu; Peng Zhang

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed...

  19. Dosimetric verification of complex radiotherapy with a 3D optically based dosimetry system - Dose painting and target tracking

    Skyt, Peter S. [Dept. of Medical Physics, Aarhus Univ./Aarhus Univ. Hospital, Aarhus (Denmark); Dept. of Physics and Astronomy, Aarhus Univ., Aarhus (Denmark)], e-mail: skyt@phys.au.dk; Petersen, Joergen B. B.; Yates, Esben S.; Muren, Ludvig P. [Dept. of Medical Physics, Aarhus Univ./Aarhus Univ. Hospital, Aarhus (Denmark); Poulsen, Per R.; Ravkilde, Thomas L. [Dept. of Oncology, Aarhus Univ. Hospital, Aarhus (Denmark); Balling, Peter [Dept. of Physics and Astronomy, Aarhus Univ., Aarhus (Denmark)

    2013-10-15

    Background: The increasing complexity of radiotherapy (RT) has motivated research into three-dimensional (3D) dosimetry. In this study we investigate the use of 3D dosimetry with polymerizing gels and optical computed tomography (optical CT) as a verification tool for complex RT: dose painting and target tracking. Materials and Methods: For the dose painting studies, two dosimeters were irradiated with a seven-field intensity modulated radiotherapy (IMRT) plan with and without dose prescription based on a hypoxia image dataset of a head and neck patient. In the tracking experiments, two dosimeters were irradiated with a volumetric modulated arc therapy (VMAT) plan with and without clinically measured prostate motion and a third with both motion and target tracking. To assess the performance, 3D gamma analyses were performed between measured and calculated stationary dose distributions. Results: Gamma pass-rates of 95.3 % and 97.3 % were achieved for the standard and dose-painted IMRT plans. Gamma pass-rates of 91.4 % and 54.4 % were obtained for the stationary and moving dosimeter, respectively, while tracking increased the pass-rate for the moving dosimeter to 90.4 %. Conclusions: This study has shown that the 3D dosimetry system can reproduce and thus verify complex dose distributions, also when influenced by motion.

  20. Dosimetric verification of complex radiotherapy with a 3D optically based dosimetry system - Dose painting and target tracking

    Background: The increasing complexity of radiotherapy (RT) has motivated research into three-dimensional (3D) dosimetry. In this study we investigate the use of 3D dosimetry with polymerizing gels and optical computed tomography (optical CT) as a verification tool for complex RT: dose painting and target tracking. Materials and Methods: For the dose painting studies, two dosimeters were irradiated with a seven-field intensity modulated radiotherapy (IMRT) plan with and without dose prescription based on a hypoxia image dataset of a head and neck patient. In the tracking experiments, two dosimeters were irradiated with a volumetric modulated arc therapy (VMAT) plan with and without clinically measured prostate motion and a third with both motion and target tracking. To assess the performance, 3D gamma analyses were performed between measured and calculated stationary dose distributions. Results: Gamma pass-rates of 95.3 % and 97.3 % were achieved for the standard and dose-painted IMRT plans. Gamma pass-rates of 91.4 % and 54.4 % were obtained for the stationary and moving dosimeter, respectively, while tracking increased the pass-rate for the moving dosimeter to 90.4 %. Conclusions: This study has shown that the 3D dosimetry system can reproduce and thus verify complex dose distributions, also when influenced by motion

  1. Fast Susceptibility-Weighted Imaging (SWI) with 3D Short-Axis Propeller (SAP)-EPI

    Holdsworth, Samantha J.; Yeom, Kristen W.; Moseley, Michael E.; Skare, S.

    2014-01-01

    Purpose: Susceptibility-Weighted Imaging (SWI) in neuroimaging can be challenging due to the long scan times of 3D Gradient Recalled Echo (GRE), while faster techniques such as 3D interleaved EPI (iEPI) are prone to motion artifacts. Here we outline and implement a 3D Short-Axis Propeller Echo-Planar Imaging (SAP-EPI) trajectory as a faster, motion-correctable approach for SWI. Methods: Experiments were conducted on a 3T MRI system. 3D SAP-EPI, 3D iEPI, and 3D GRE SWI scans were acquired on two volunteers. Controlled motion experiments were conducted to test the motion-correction capability of 3D SAP-EPI. 3D SAP-EPI SWI data were acquired on two pediatric patients as a potential alternative to the 2D GRE used clinically. Results: While 3D GRE images had a better target resolution (0.47 × 0.94 × 2 mm, scan time = 5 min), iEPI and SAP-EPI images (resolution = 0.94 × 0.94 × 2 mm) were acquired in a faster scan time (1:52 min) with twice the brain coverage. SAP-EPI showed motion-correction capability and some immunity to undersampling from rejected data. Conclusion: While 3D SAP-EPI suffers from some geometric distortion, its short scan time and motion-correction capability suggest that SAP-EPI may be a useful alternative to GRE and iEPI for use in SWI, particularly in uncooperative patients. PMID:24956237

  2. Wide area 2D/3D imaging development, analysis and applications

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  3. The application of camera calibration in range-gated 3D imaging technology

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the gating strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, the technique has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible-surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can perform gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial position of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including estimation of the camera's internal (intrinsic) parameters and the external (extrinsic) parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper intends to study the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
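
    To make the inversion step concrete: once the focal length is known from calibration, each gated slice assigns a depth to its pixels, and a pinhole model back-projects those pixels to 3D. The sketch below illustrates this with hypothetical gate-delay and intrinsic-parameter values; it is a generic pinhole back-projection, not the calibration method studied in the paper.

```python
# Illustrative sketch (not from the paper): converting a gate delay to slice range
# and back-projecting pixels of that slice to 3D with calibrated pinhole intrinsics.
import numpy as np

C = 2.998e8  # speed of light, m/s

def slice_range(delay_s):
    """Round-trip gate delay -> one-way range of the gated slice (metres)."""
    return C * delay_s / 2.0

def pixels_to_3d(depth_map, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (metres) using pinhole intrinsics."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth_map / fx
    Y = (v - cy) * depth_map / fy
    return np.dstack([X, Y, depth_map])   # (h, w, 3) points in camera coordinates

# Hypothetical numbers: a 200 ns delay places the slice at ~30 m.
depth = np.full((480, 640), slice_range(200e-9))
points = pixels_to_3d(depth, fx=1200.0, fy=1200.0, cx=320.0, cy=240.0)
```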

  4. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo

    Brian Long; Lu Li; Ulf Knoblich; Hongkui Zeng; Hanchuan Peng

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments.

  5. MRI Sequence Images Compression Method Based on Improved 3D SPIHT

    蒋行国; 李丹; 陈真诚

    2013-01-01

    Objective: To propose an effective MRI sequence image compression method for solving the storage and transmission problem of large amounts of MRI sequence images. Methods: To reduce the computational complexity of the 3D Set Partitioning in Hierarchical Trees (SPIHT) algorithm and to address its deficiency that D-type and L-type list entries are judged repeatedly, an improved 3D SPIHT method was presented, and two groups of MRI sequence images with different slice counts and slice thicknesses were taken as examples. At the same time, according to the correlation characteristics of MRI sequence images, a method in which the images are divided into groups and then coded/decoded was put forward. Combined with the 3D wavelet transform, the improved 3D SPIHT method was used to achieve MRI sequence image compression. Results: Compared with the 2D SPIHT and 3D SPIHT methods, the grouping method combined with the improved 3D SPIHT could obtain better reconstructed images, and the peak signal-to-noise ratio (PSNR) was improved by about 1-8 dB. Conclusion: At the same bit rate, the grouping method combined with the improved 3D SPIHT improves PSNR and the quality of the recovered images, and can better solve the storage and transmission problem of large amounts of MRI sequence images.
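
    A minimal sketch of the grouping and 3D wavelet-decomposition step described above is given below (using the PyWavelets package); the improved 3D SPIHT entropy coder itself is not reproduced, and the group size, wavelet and decomposition level are arbitrary choices for illustration.

```python
# Minimal sketch of the grouping + 3D wavelet step; the improved 3D SPIHT
# zerotree coder is not reproduced here. Requires numpy and PyWavelets.
import numpy as np
import pywt

def group_slices(volume, group_size=16):
    """Split an MRI slice stack (nz, ny, nx) into groups along the slice axis."""
    nz = volume.shape[0]
    return [volume[i:i + group_size] for i in range(0, nz, group_size)]

def decompose_group(group, wavelet="db2", level=2):
    """3D wavelet decomposition of one slice group; the coefficients would then
    be fed to a SPIHT-style coder."""
    return pywt.wavedecn(group.astype(np.float32), wavelet, level=level)

volume = np.random.rand(32, 128, 128).astype(np.float32)   # stand-in MRI stack
coefficient_sets = [decompose_group(g) for g in group_slices(volume)]
```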

  6. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  7. Performance of an improved first generation optical CT scanner for 3D dosimetry

    Performance analysis of a modified 3D dosimetry optical scanner based on the first-generation optical CT scanner OCTOPUS is presented. The system consists of PRESAGE™ dosimeters, the modified 3D scanner, and a newly developed in-house user control panel written in LabVIEW, which provides more flexibility to optimize the mechanical control and data acquisition technique. The total scanning time has been significantly reduced from the initial 8 h to ∼2 h by using the modified scanner. The functional performance of the modified scanner has been evaluated in terms of the mechanical integrity and the uncertainty of the data acquisition process. A comparison of optical density distributions between the modified scanner, OCTOPUS and the treatment planning system has been studied. It has been demonstrated that the agreement between the modified scanner and treatment plans is comparable with that between OCTOPUS and treatment plans. (note)

  8. Fast iterative image reconstruction methods for fully 3D multispectral bioluminescence tomography

    We investigate fast iterative image reconstruction methods for fully 3D multispectral bioluminescence tomography for applications in small animal imaging. Our forward model uses a diffusion approximation for optically inhomogeneous tissue, which we solve using a finite element method (FEM). We examine two approaches to incorporating the forward model into the solution of the inverse problem. In a conventional direct calculation approach one computes the full forward model by repeated solution of the FEM problem, once for each potential source location. We describe an alternative on-the-fly approach where one does not explicitly solve for the full forward model. Instead, the solution to the forward problem is included implicitly in the formulation of the inverse problem, and the FEM problem is solved at each iteration for the current image estimate. We evaluate the convergence speeds of several representative iterative algorithms. We compare the computation cost of those two approaches, concluding that the on-the-fly approach can lead to substantial reductions in total cost when combined with a rapidly converging iterative algorithm
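
    The contrast between the two approaches can be illustrated with a matrix-free iteration in which the forward model is a callable evaluated at every step, as in the on-the-fly formulation. The sketch below uses a plain Landweber (gradient) iteration and a toy matrix standing in for the FEM-based forward model; it is not the authors' solver.

```python
# Generic sketch of the "on-the-fly" idea: the forward model is a callable that
# is evaluated at every iteration instead of being precomputed as a full matrix.
# Plain Landweber (gradient) iteration; not the paper's FEM-based solver.
import numpy as np

def landweber(forward, adjoint, data, x0, step, n_iter=50, nonneg=True):
    x = x0.copy()
    for _ in range(n_iter):
        residual = forward(x) - data      # forward problem solved on the fly
        x -= step * adjoint(residual)     # gradient step
        if nonneg:
            np.maximum(x, 0.0, out=x)     # source intensities are non-negative
    return x

# Toy example: an explicit matrix stands in for the FEM-based forward model.
rng = np.random.default_rng(0)
A = rng.random((200, 100))
x_true = np.abs(rng.random(100))
y = A @ x_true
x_rec = landweber(lambda x: A @ x, lambda r: A.T @ r, y,
                  np.zeros(100), step=1.0 / np.linalg.norm(A, 2) ** 2)
```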

  9. 3D optical vortices generated by micro-optical elements and its novel applications

    BU J.; LIN J.; K. J. Moh; B. P. S. Ahluwalia; CHEN H. L.; PENG X.; NIU H. B.; YUAN X.C.

    2007-01-01

    In this paper we report on recent developments in the area of optical vortices generated by micro-optical elements and on applications of optical vortices, including optical manipulation, radial polarization and secure free-space optical communication.

  10. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

    3D functional imaging of neuronal activity in entire organisms at the single-cell level and at physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes and the attainable spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single-neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.
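
    For readers unfamiliar with the 3D deconvolution step, the sketch below shows the classic Richardson-Lucy update, a common choice for deconvolving fluorescence volumes; the light-field-specific forward model used in the paper is not reproduced here.

```python
# Small sketch of Richardson-Lucy deconvolution for a 3D volume; this is a
# generic illustration, not the paper's light-field deconvolution.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(image, psf, n_iter=25, eps=1e-12):
    """image: observed 3D volume; psf: 3D point spread function (sum ~ 1)."""
    image = image.astype(float)
    estimate = np.full_like(image, image.mean())
    psf_flipped = psf[::-1, ::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")  # multiplicative update
    return estimate
```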

  11. SEGMENTATION OF UAV-BASED IMAGES INCORPORATING 3D POINT CLOUD INFORMATION

    A. Vetrivel

    2015-03-01

    Numerous applications related to urban scene analysis demand automatic recognition of buildings and their distinct sub-elements. For example, if LiDAR data are available, 3D information alone could be leveraged for the segmentation. However, this poses several risks; for instance, in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image-based segmentation is performed, the geometric features (e.g., normal orientation, planarity) are not readily available. This renders the task of detecting the distinct sub-elements of a building with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used for inferring the geometric characteristics of the roofs and facades of the building. However, the image-based 3D points are noisy, error-prone and often contain gaps, so segmentation in 3D space is not appropriate. Therefore, we propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using a region-growing approach that utilizes various radiometric and 3D point cloud features. The developed method was tested using two data sets obtained from UAV images with a ground resolution of around 1-2 cm. The developed method accurately segmented most of the building elements when compared to plane-based segmentation using the 3D point cloud alone.
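
    A much simplified sketch of region growing in image space that combines a radiometric channel with per-pixel normals derived from the point cloud is shown below; the seed selection, features and thresholds are illustrative assumptions, not the values used in the paper.

```python
# Simplified region growing that jointly uses a radiometric channel and a
# per-pixel normal map. Thresholds and features are hypothetical.
import numpy as np
from collections import deque

def grow_region(intensity, normals, seed, rad_tol=10.0, ang_tol_deg=15.0):
    """intensity: (h, w); normals: (h, w, 3) unit normals; seed: (row, col)."""
    h, w = intensity.shape
    cos_tol = np.cos(np.radians(ang_tol_deg))
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                similar_radiometry = abs(intensity[ny, nx] - intensity[seed]) < rad_tol
                similar_geometry = np.dot(normals[ny, nx], normals[seed]) > cos_tol
                if similar_radiometry and similar_geometry:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```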

  12. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
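
    The projection-masking idea can be illustrated with a toy weighted similarity metric in which pixels with low mask weight contribute less to the score. The sketch below uses a weighted normalized cross-correlation for simplicity; the study's actual similarity metric and masking pipeline are not reproduced.

```python
# Toy sketch of a masked 2D similarity metric: low-weight pixels contribute less,
# mimicking the "projection masking" idea. Not the study's actual metric.
import numpy as np

def masked_ncc(fixed, moving, weights, eps=1e-12):
    """Weighted normalized cross-correlation between two 2D images."""
    w = weights / weights.sum()
    mean_f = (w * fixed).sum()
    mean_m = (w * moving).sum()
    cov = (w * (fixed - mean_f) * (moving - mean_m)).sum()
    var_f = (w * (fixed - mean_f) ** 2).sum()
    var_m = (w * (moving - mean_m) ** 2).sum()
    return cov / np.sqrt(var_f * var_m + eps)

# Example: zero weight over a region occluded by a (hypothetical) surgical tool.
radiograph = np.random.rand(256, 256)
drr = radiograph + 0.05 * np.random.randn(256, 256)   # simulated projection
mask = np.ones_like(radiograph)
mask[100:150, 100:200] = 0.0
print(masked_ncc(radiograph, drr, mask))
```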

  13. 3D multiple-point statistics simulation using 2D training images

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from two-dimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the Impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the s2Dcd method is two to four orders of magnitude smaller than that required by an MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitate the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problem frameworks.

  14. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  15. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research

  16. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a need to develop systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet-based virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model captured by a 3D digitizer are also presented.

  17. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye

    2012-07-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray-value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is embedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image-processing attacks, which demonstrates strong robustness against a variety of attacks.

  18. Comparison of 3D Synthetic Aperture Imaging and Explososcan using Phantom Measurements

    Rasmussen, Morten Fischer; Férin, Guillaume; Dufait, Rémi;

    2012-01-01

    In this paper, initial 3D ultrasound measurements from a 1024-channel system are presented. Measurements of 3D synthetic aperture imaging (SAI) and Explososcan are presented and compared. Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. SAI is compared to Explososcan by using tissue and wire phantom measurements. The measurements are carried out using a 1024-element 2D transducer and the 1024-channel experimental ultrasound scanner SARUS. To make a fair comparison, the two imaging techniques use the same number of active channels, the same number of emissions per frame, and they emit the same amount of energy per frame. The measurements were performed with parameters similar to standard cardiac imaging, with 256 emissions to image a volume spanning 90×90 and 150 mm in depth. This results in a frame rate of 20 Hz. The number of active channels is set to 316 from...

  19. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use the luminance-like information produced by a time-of-flight camera along with the depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked by noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state of the art in the field. PMID:21164605
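
    As a rough illustration of the first step, the sketch below combines the standard robust wavelet noise estimate (median absolute deviation of the finest diagonal subband) with a hypothetical luminance-guided per-pixel scaling; it is far simpler than the paper's fuzzy, non-stationary formulation.

```python
# Sketch of a robust wavelet noise estimate for a depth image, followed by a
# hypothetical luminance-guided per-pixel scaling. Much simplified compared with
# the paper's fuzzy, non-stationary formulation. Requires numpy and PyWavelets.
import numpy as np
import pywt

def global_noise_sigma(depth):
    """MAD-based noise estimate from the finest diagonal wavelet subband."""
    _, (_, _, hh) = pywt.dwt2(depth.astype(float), "db2")
    return np.median(np.abs(hh)) / 0.6745

def per_pixel_sigma(depth, luminance, eps=1e-6):
    # Heuristic: depth noise grows as the reflected optical power (luminance) drops.
    sigma0 = global_noise_sigma(depth)
    lum = luminance.astype(float)
    return sigma0 * np.sqrt(lum.mean() / (lum + eps))
```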

  20. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  1. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  2. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing initial geometry and/or image error, improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration
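
    For orientation, the sketch below shows textbook linear (DLT) triangulation of a single 3D point from two projection matrices, a minimal stand-in for biplane 3D reconstruction; it is not the enhanced Metz-Fencil technique evaluated in the study.

```python
# Textbook linear (DLT) triangulation of one 3D point from two projections,
# a minimal stand-in for biplane reconstruction; not the enhanced Metz-Fencil
# technique evaluated in the study.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: image points (u, v)."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize
```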

  3. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  4. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats.

    Tang, Jianbo; Coleman, Jason E; Dai, Xianjin; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolutions were ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping the cerebral oxygen saturation via multi-wavelength irradiation in behaving hyperoxic rats. In addition, we demonstrated that 3D-wPAT could be used for monitoring sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain study in behaving rodents. PMID:27146026

  5. Flatbed-type 3D display systems using integral imaging method

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes owing to the adoption of a mosaic pixel arrangement on the display panel. It allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  6. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because it can be deployed to different platforms. The clipped contour data and UAV image data were processed and exported into a web format using Unity 3D. The process continued by publishing to a web server in order to compare the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This helps decision makers and planners in this field decide which contour interval is applicable for their task.

  7. Development of 2D, pseudo 3D and 3D x-ray imaging for early diagnosis of breast cancer and rheumatoid arthritis

    By using plane-wave x-rays from synchrotron radiation, refraction-based x-ray medical imaging can be used to visualize soft tissue, as reported in this paper. This method comprises two-dimensional (2D) x-ray dark-field imaging (XDFI), tomosynthesis for pseudo-3D (sliced) x-ray imaging by the adoption of XDFI, and 3D x-ray imaging by utilizing a newly devised algorithm. We aim to contribute to the early diagnosis of breast cancer, which is a major cancer among women, and of rheumatoid arthritis, which cannot be detected in its early stages. (author)

  8. The diagnostic value of 3D spiral CT imaging of cholangiopancreatic ducts on obstructive jaundice

    Linquan Wu; Xiangbao Yin; Qingshan Wang; Bohua Wu; Xiao Li; Huaqun Fu

    2011-01-01

    Objective: Computerized tomography (CT) plays an important role in the diagnosis of diseases of the biliary tract. Recently, three-dimensional (3D) spiral CT imaging has gradually come into use for surgical diseases. This study was designed to evaluate the diagnostic value of 3D spiral CT imaging of the cholangiopancreatic ducts in obstructive jaundice. Methods: Thirty patients with obstructive jaundice received B-mode ultrasonography, CT, percutaneous transhepatic cholangiography (PTC) or endoscopic retrograde cholangiopancreatography (ERCP), and 3D spiral CT imaging of the cholangiopancreatic ducts preoperatively. The diagnostic accordance rates of these examination methods were then compared after surgery. Results: The diagnostic accordance rate of 3D spiral CT imaging of the cholangiopancreatic ducts was higher than those of B-mode ultrasonography, CT, or PTC or ERCP alone, and it showed clear images of the bile duct tree and pathological changes. For malignant obstructive jaundice, this technique could clearly display the adjacent relationship between the tumor and the liver tissue, biliary ducts, blood vessels, and intrahepatic metastases. Conclusion: 3D spiral CT imaging of the cholangiopancreatic ducts has significant value for obstructive diseases of the biliary ducts and provides effective evidence for the feasibility of tumor resection and surgical options.

  9. Midsagittal plane extraction from brain images based on 3D SIFT

    Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on the fly. The proposed method is evaluated on synthetic and in vivo datasets, of normal and pathological cases, and validated by comparison with state-of-the-art methods. Experimental results demonstrate that our method achieves real-time performance with better accuracy, yielding an average yaw-angle error below 0.91° and an average roll-angle error of no more than 0.89°. (paper)
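
    The plane-regression step can be illustrated with a small least-median-of-squares style fit over candidate 3D points (for example, mid-points of matched symmetric feature pairs, which is an assumption made here); the full 3D SIFT matching pipeline is not reproduced.

```python
# Least-median-of-squares style robust plane fit over candidate 3D points.
# Illustrative only; the paper's 3D SIFT matching pipeline is not reproduced.
import numpy as np

def lmeds_plane(points, n_trials=500, rng=None):
    """points: (N, 3) array; returns (normal, d) with normal . x + d = 0."""
    rng = np.random.default_rng(rng)
    best_plane, best_med = None, np.inf
    for _ in range(n_trials):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        med = np.median((points @ normal + d) ** 2)   # median squared residual
        if med < best_med:
            best_med, best_plane = med, (normal, d)
    return best_plane
```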

  10. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai;

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken. ICA is an effective tool for connective tissue disease detection in the presence of sparse data using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification

  11. BER Analysis Using Beat Probability Method of 3D Optical CDMA Networks with Double Balanced Detection

    Chih-Ta Yen

    2015-01-01

    This study proposes novel three-dimensional (3D) matrices of wavelength/time/spatial codes for optical code-division multiple-access (OCDMA) networks with a double balanced detection mechanism. We construct 3D carrier-hopping prime/modified prime (CHP/MP) codes by extending a two-dimensional (2D) CHP code integrated with a one-dimensional (1D) MP code. The corresponding coder/decoder pairs were based on fiber Bragg gratings (FBGs) and tunable optical delay lines integrated with splitters/combiners. System performance was enhanced by the low cross-correlation properties of the 3D code, which was designed to avoid the beat noise phenomenon. The CHP/MP code cardinality increased significantly compared to the CHP code under the same bit error rate (BER). The results indicate that the 3D code method can enhance system performance because both the beating terms and multiple-access interference (MAI) were reduced by the double balanced detection mechanism. Additionally, the optical component requirements can also be relaxed in high-transmission scenarios.

  12. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu, Jinhui; Nazareth, Daryl P.

    2005-01-01

    Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the “cloud” of points) on reconstruction errors for one of these techniques developed in our laboratory. Five type...

  13. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Khalid M. Hosny; Hafez, Mohamed A.

    2012-01-01

    An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to add m...

  14. Automated Algorithm for Carotid Lumen Segmentation and 3D Reconstruction in B-mode images

    Jorge M. S. Pereira; João Manuel R. S. Tavares

    2011-01-01

    The B-mode imaging system is one of the most popular systems used in the medical area; however, it imposes several difficulties on the image segmentation process due to low contrast and noise. Despite these difficulties, this imaging mode is often used in the study and diagnosis of carotid artery diseases. In this paper, a novel automated algorithm for carotid lumen segmentation and 3-D reconstruction in B-mode images is described.

  15. Learning Methods for Recovering 3D Human Pose from Monocular Images

    Agarwal, Ankur; Triggs, Bill

    2004-01-01

    We describe a learning based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We ev...

  16. Digital Image Analysis of Cells : Applications in 2D, 3D and Time

    Pinidiyaarachchi, Amalka

    2009-01-01

    Light microscopes are essential research tools in biology and medicine. Cell and tissue staining methods have improved immensely over the years and microscopes are now equipped with digital image acquisition capabilities. The image data produced require development of specialized analysis methods. This thesis presents digital image analysis methods for cell image data in 2D, 3D and time sequences. Stem cells have the capability to differentiate into specific cell types. The mechanism behind di...

  17. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  18. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128×128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
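
    The sampling-basis idea can be illustrated with a noiseless single-pixel simulation: the photodiode records one inner product per projected Hadamard pattern, and the image is recovered by the inverse transform. The sketch below omits the pulsed, time-resolved (depth) part of the system described in the paper.

```python
# Minimal noiseless single-pixel simulation with a complete Hadamard basis.
# The pulsed, time-resolved (depth) measurement is omitted.
import numpy as np
from scipy.linalg import hadamard

n = 32                                   # image is n x n, with n*n a power of two
H = hadamard(n * n)                      # each row is one +/-1 projection pattern
scene = np.zeros((n, n))
scene[8:20, 10:24] = 1.0                 # simple test object

measurements = H @ scene.ravel()         # one photodiode reading per pattern
reconstruction = (H.T @ measurements) / (n * n)   # inverse Hadamard transform
reconstruction = reconstruction.reshape(n, n)
assert np.allclose(reconstruction, scene)
```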

  19. 3D X-ray microscopy: image formation, tomography and instrumentation

    Selin, Mårten

    2016-01-01

    Tomography in soft X-ray microscopy is an emerging technique for obtaining quantitative 3D structural information about cells. One of its strengths, compared with other techniques, is that it can image intact cells in their near-native state at a resolution of a few tens of nanometres, without staining. However, the methods for reconstructing 3D data rely on algorithms that assume projection data, which the images generally are not, owing to the imaging system's limited depth of focus. To bring out the full pot...

  20. Fully 3D PET image reconstruction with a 4D sinogram blurring kernel

    Tohme, Michel S.; Qi, Jinyi [California Univ., Davis, CA (United States). Dept. of Biomedical Engineering; Zhou, Jian

    2011-07-01

    Accurately modeling the PET system response is essential for high-resolution image reconstruction. Traditionally, sinogram blurring effects are modeled as a 2D blur in each sinogram plane. Such a 2D blurring kernel is insufficient for fully 3D PET data, which have four dimensions. In this paper, we implement fully 3D PET image reconstruction using a 4D sinogram blurring kernel estimated from point source scans and perform phantom experiments to evaluate the improvements in image quality over methods with existing 2D blurring kernels. The results show that the proposed reconstruction method can achieve better spatial resolution and contrast recovery than existing methods. (orig.)

  1. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents morphological and biochemical findings on human downy arm hairs obtained using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified, including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  2. High-throughput analysis of horse sperms' 3D swimming patterns using computational on-chip imaging.

    Su, Ting-Wei; Choi, Inkyum; Feng, Jiawen; Huang, Kalvin; Ozcan, Aydogan

    2016-06-01

    Using a high-throughput optical tracking technique that is based on partially-coherent digital in-line holography, here we report a detailed analysis of the statistical behavior of horse sperms' three-dimensional (3D) swimming dynamics. This dual-color and dual-angle lensfree imaging platform enables us to track individual 3D trajectories of ∼1000 horse sperms at sub-micron level within a sample volume of ∼9μL at a frame rate of 143 frames per second (FPS) and collect thousands of sperm trajectories within a few hours for statistical analysis of their 3D dynamics. Using this high-throughput imaging platform, we recorded >17,000 horse sperm trajectories that can be grouped into six major categories: irregular, linear, planar, helical, ribbon, and hyperactivated, where the hyperactivated swimming patterns can be further divided into four sub-categories, namely hyper-progressive, hyper-planar, hyper-ribbon, and star-spin. The large spatio-temporal statistics that we collected with this 3D tracking platform revealed that irregular, planar, and ribbon trajectories are the dominant 3D swimming patterns observed in horse sperms, which altogether account for >97% of the trajectories that we imaged in plasma-free semen extender medium. Through our experiments we also found out that horse seminal plasma in general increases sperms' straightness in their 3D trajectories, enhancing the relative percentage of linear swimming patterns and suppressing planar swimming patterns, while barely affecting the overall percentage of ribbon patterns. PMID:26826909

  3. Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images

    Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

    2013-04-01

    The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses with usually one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk: the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high, much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.

  4. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes have the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  5. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
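
    As a point of reference for the iterative methods mentioned above, the sketch below shows a compact ART (Kaczmarz) update for a generic linear system; the simulator's C++ implementation and its TV-regularized variant are not reproduced, and for ART+TV a TV denoising step would be interleaved between sweeps.

```python
# Compact ART (Kaczmarz) sketch for a generic linear system A x = b; this is a
# generic illustration, not the simulator's C++ implementation or its TV variant.
import numpy as np

def art(A, b, n_sweeps=10, relax=0.5, nonneg=True):
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]   # project onto row i
            if nonneg:
                np.maximum(x, 0.0, out=x)
        # For ART+TV, a total-variation denoising step on the current image
        # estimate would be inserted here.
    return x
```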

  6. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the differing resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

  7. Using the 3D-SMS for finding starting configurations in imaging systems with freeform surfaces

    Satzer, Britta; Richter, Undine; Lippmann, Uwe; Metzner, Gerburg S.; Notni, Gunther; Gross, Herbert

    2015-09-01

    As the field of freeform optics is still young, only a small number of established starting systems are available for imaging lens design. We investigate the possibility of generating starting configurations of freeform lenses with the Simultaneous Multiple Surface (SMS) method. Surface fitting and the transfer to the ray tracing program are discussed in detail. Based on specific examples without rotational symmetry, we analyze the potential of such starting systems. The tested systems evolve from Scheimpflug configurations or have arbitrarily tilted image planes. The optimization behavior of the starting systems retrieved from the 3D-SMS is compared to classical starting configurations, such as an aspheric lens. To this end we evaluate the root mean square (RMS) spot radius before and after the optimization as well as the speed of convergence. As a result, the performance of the 3D-SMS starting configurations is superior. The mean RMS spot diameter is reduced by up to 17.6% in comparison to an aspheric starting configuration and by up to 28% in comparison to a simple plane plate.

  8. Automatic Texture Reconstruction of 3D City Model from Oblique Images

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and use memory inefficiently. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce the geometric distortion introduced when mapping 2D texture onto the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with known exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at the boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation and demonstrate that the proposed framework is effective and useful for the automatic texture reconstruction of 3D city models.

  9. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited by microscopy techniques that provide only qualitative insight into sample depth or that require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods, we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were used to quantify nodule thickness over time under normal growth and in cultures subject to chemotherapy treatment. In this manner, total nodule volumes are rapidly estimated and are shown here to exhibit contrasting time-dependent changes during growth and in response to treatment. This work demonstrates the utility of DHM for quantifying changes in 3D structure over time and motivates the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
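
    A minimal sketch of how an unwrapped phase map can be converted to nodule thickness and volume, assuming a uniform refractive-index contrast; the wavelength, index difference, pixel size and phase data below are illustrative placeholders, not values from the study:

```python
import numpy as np

# Convert an unwrapped DHM phase map (radians) into physical thickness and volume.
# All values below are illustrative assumptions, not parameters of the cited work.
wavelength_um = 0.633            # laser wavelength (µm)
delta_n = 0.04                   # assumed refractive-index difference, nodule vs. medium
pixel_area_um2 = 0.5 * 0.5       # area of one pixel in the sample plane (µm²)

phase = np.random.rand(64, 64) * 6.0   # placeholder for a real unwrapped phase map

thickness_um = wavelength_um * phase / (2.0 * np.pi * delta_n)  # phase -> thickness
volume_um3 = thickness_um.sum() * pixel_area_um2                # integrate over area
print(f"estimated nodule volume: {volume_um3:.1f} um^3")
```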

  10. 3-D MRI/CT fusion imaging of the lumbar spine

    The objective was to demonstrate the feasibility of MRI/CT fusion in depicting lumbar nerve root compromise. We combined 3-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of the nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging for the lumbar vertebral region successfully revealed the relationship between bone construction (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis (DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  11. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature combinations are evaluated, the Mean and Maximum curvature pair and the Gaussian and Mean curvature pair, and their results are compared to find the better recognition rate. The automated 3D face recognition system is evaluated in different scenarios: frontal pose with expression and illumination variation, frontal faces together with registered faces, registered faces only, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varied 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
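
    A minimal sketch of the general idea of computing Gaussian and mean curvature maps from a range image and using the leading singular values as a compact feature vector; the finite-difference formulation and the synthetic range image are illustrative and do not reproduce the paper's exact pipeline:

```python
import numpy as np

# Gaussian/mean curvature maps of a range image z(row, col) via finite differences,
# with leading singular values of each map taken as a compact feature vector.
# The range image below is synthetic, not an FRAV3D face.
def curvature_maps(z):
    zy, zx = np.gradient(z)            # derivatives along rows (y) and columns (x)
    zxy, zxx = np.gradient(zx)
    zyy, zyx = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy * zyx) / denom**2                      # Gaussian curvature
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * denom**1.5)                # mean curvature
    return K, H

def svd_features(curv_map, n=20):
    """Leading singular values of a curvature map, used as a compact feature vector."""
    s = np.linalg.svd(curv_map, compute_uv=False)
    return s[:n]

range_image = np.random.rand(100, 100)          # placeholder for a real face range image
K, H = curvature_maps(range_image)
features = np.concatenate([svd_features(K), svd_features(H)])
print(features.shape)                            # (40,) feature vector for one face
```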

  12. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    Objective: To improve the accuracy of epilepsy diagnoses by integrating MR images from PACS with data from neurological electrophysiology. The integration is also very important for transmitting diagnostic information to the 3D TPS of radiotherapy. Methods: The electroencephalogram was redisplayed by the EEG workstation, while the MR images were reconstructed with Brainvoyager software. A 3D model of the patient's brain was built up by combining the reconstructed images with the electroencephalogram data in Base 2000. Thirty epileptic patients (18 males and 12 females), aged 12 to 54 years, were evaluated using the integrated MR images, the neurological electrophysiology data and their 3D stereolocating. Results: The corresponding data in the 3D model could show the real state of the patient's brain and visually locate the precise position of the focus. The success rate of 3D-guided operations was greatly improved, and the number of epileptic onsets was markedly decreased. Seizures ceased for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of interventions and the analysis of epilepsy. (authors)

  13. 3D printed sensing patches with embedded polymer optical fibre Bragg gratings

    Zubel, Michal G.; Sugden, Kate; Saez-Rodriguez, D.; Nielsen, K.; Bang, O.

    2016-05-01

    The first demonstration of a polymer optical fibre Bragg grating (POFBG) embedded in a 3-D printed structure is reported. Its cyclic strain performance and temperature characteristics are examined and discussed. The sensing patch has a repeatable strain sensitivity of 0.38 pm/με. Its temperature behaviour is unstable, with temperature sensitivity values varying between 30 and 40 pm/°C.

  14. Measurement of abrasion of artificial cotyles using 3D optical scanning topography

    Mandát, Dušan; Nožka, Libor; Hrabovský, Miroslav; Bartoněk, L.

    Zagreb: Croatian Society of Mechanics, 2004 - (Jecic, S.; Semenski, D.), s. 92-93 ISBN 953-96243-6-3. [DANUBIA-ADRIA Symposium on Experimental Methods in Solid Mechanics /21./. Brijuni - Pula (HR), 29.09.2004-02.10.2004] R&D Projects: GA MŠk LN00A015 Other grants: GA-(CZ) FRVŠ48/2004 Keywords: profilometry * 3D topography * cotyle * VRML language Subject RIV: BH - Optics, Masers, Lasers

  15. Comparison of 3-D Synthetic Aperture Phased-Array Ultrasound Imaging and Parallel Beamforming

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    This paper demonstrates that synthetic aperture imaging (SAI) can be used to achieve real-time 3-D ultrasound phased-array imaging. It investigates whether SAI increases the image quality compared with the parallel beamforming (PB) technique for real-time 3-D imaging. Data are obtained using both simulations and measurements with an ultrasound research scanner and a commercially available 3.5-MHz 1024-element 2-D transducer array. To limit the probe cable thickness, 256 active elements are used in transmit and receive for both techniques. The two imaging techniques were designed for cardiac imaging, which requires sequences designed for imaging down to 15 cm of depth and a frame rate of at least 20 Hz. The imaging quality of the two techniques is investigated through simulations as a function of depth and angle. SAI improved the full-width at half-maximum (FWHM) at low steering angles by 35%, and the 20-d...

  16. Comparison of S3D Display Technology on Image Quality and Viewing Experiences: Active-Shutter 3D TV vs. Passive-Polarized 3DTV

    Yu-Chi Tai, PhD

    2014-05-01

    Background: Stereoscopic 3D TV systems convey depth perception to the viewer by delivering to each eye separately filtered images that represent two slightly different perspectives. Currently two primary technologies are used in S3D televisions: Active shutter systems, which use alternate frame sequencing to deliver a full-frame image to one eye at a time at a fast refresh rate, and Passive polarized systems, which superimpose the two half-frame left-eye and right-eye images at the same time through different polarizing filters. Methods: We compare visual performance in discerning details and perceiving depth, as well as the comfort and perceived display quality in viewing an S3D movie. Results: Our results show that, in presenting details of small targets and in showing low-contrast stimuli, the Active system was significantly better than the Passive in 2D mode, but there was no significant difference between them in 3D mode. Subjects performed better on Passive than Active in 3D mode on a task requiring small vergence changes and quick re-acquisition of stereopsis – a skill related to vergence efficiency while viewing S3D displays. When viewing movies in 3D mode, there was no difference in symptoms of discomfort between Active and Passive systems. When the two systems were put side by side with selected 3D-movie scenes, all of the subjective measures of perceived display quality in 3D mode favored the Passive system, and 10 of 14 comparisons were statistically significant. The Passive system was rated significantly better for sense of immersion, motion smoothness, clarity, color, and 5 categories related to the glasses. Conclusion: Overall, participants felt that it was easier to look at the Passive system for a longer period than the Active system, and the Passive display was selected as the preferred display by 75% (p = 0.0000211) of the subjects.

  17. Land surface temperature from INSAT-3D imager data: Retrieval and assimilation in NWP model

    Singh, Randhir; Singh, Charu; Ojha, Satya P.; Kumar, A. Senthil; Kishtawal, C. M.; Kumar, A. S. Kiran

    2016-06-01

    A new algorithm is developed for retrieving the land surface temperature (LST) from the imager radiance observations on board the geostationary operational Indian National Satellite (INSAT-3D). The algorithm is developed using the two thermal infrared channels (TIR1, 10.3-11.3 µm, and TIR2, 11.5-12.5 µm) via a genetic algorithm (GA). The transfer function that relates LST to the thermal radiances is developed using a radiative transfer model simulated database. The developed algorithm has been applied to the INSAT-3D observed radiances, and the retrieved LST has been validated against the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product. The developed algorithm demonstrates good accuracy, with no significant bias and with standard deviations of 1.78 K and 1.41 K during daytime and nighttime, respectively. The newly proposed algorithm performs better than the operational algorithm used for LST retrieval from the INSAT-3D satellite. Further, a set of data assimilation experiments is conducted with the Weather Research and Forecasting (WRF) model to assess the impact of INSAT-3D LST on model forecast skill over the Indian region. The assimilation experiments demonstrated a positive impact of the assimilated INSAT-3D LST, particularly on the lower tropospheric temperature and moisture forecasts. The temperature and moisture forecast errors are reduced (by as much as 8-10%) with the assimilation of INSAT-3D LST, compared to forecasts obtained without it. Additional experiments comparing the performance of the two LST products, retrieved with the operational and the newly proposed algorithms, indicate that the impact of the INSAT-3D LST retrieved using the newly proposed algorithm is significantly larger than the impact of the INSAT-3D LST retrieved using the operational algorithm.
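
    To illustrate the transfer-function idea in its simplest form, the sketch below fits a classical linear split-window relation, LST ≈ a0 + a1·T11 + a2·(T11 - T12), to a simulated database by least squares; the paper itself derives its transfer function with a genetic algorithm, and all numbers here are synthetic:

```python
import numpy as np

# Fit split-window coefficients to a simulated (T11, T12, LST) database.
# Purely illustrative: the relation, noise levels and data are assumptions,
# not the GA-derived transfer function of the paper.
rng = np.random.default_rng(1)
lst_true = 280.0 + 40.0 * rng.random(500)            # simulated LST values (K)
t11 = lst_true - 2.0 - 1.5 * rng.random(500)         # simulated 10.3-11.3 µm brightness temp
t12 = t11 - 1.0 - 1.0 * rng.random(500)              # simulated 11.5-12.5 µm brightness temp

X = np.column_stack([np.ones_like(t11), t11, t11 - t12])
coeffs, *_ = np.linalg.lstsq(X, lst_true, rcond=None)
lst_est = X @ coeffs
print("coefficients:", coeffs)
print("fit RMSE (K):", np.sqrt(np.mean((lst_est - lst_true) ** 2)))
```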

  18. 3D laser inspection of fuel assembly grid spacers for nuclear reactors based on diffractive optical elements

    Finogenov, L. V.; Lemeshko, Yu A.; Zav'yalov, P. S.; Chugui, Yu V.

    2007-06-01

    Ensuring the safety and high operational reliability of nuclear reactors requires 100% inspection of the geometrical parameters of fuel assemblies, which include grid spacers formed as a cellular structure holding the fuel elements. The correct grid spacer geometry in the transverse and longitudinal cross sections is extremely important for maintaining the necessary heat regime. A universal method for 3D grid spacer inspection is investigated that uses a diffractive optical element (DOE) to generate, as structured illumination, a multiple-ring pattern on the inner surface of a grid spacer cell. With a small set of DOEs, the whole range of produced grids can be inspected. A special objective lens has been developed for imaging the inner cell surface. The problems of diffractive element synthesis, projection optics design and adjustment methods, as well as the calibration of the experimental measuring system, are considered. The image processing algorithms for the different constructive elements of the grids (cell, channel hole, outer grid spacer rim) and the experimental results are presented.

  19. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
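
    For reference, a minimal sketch of the superquadric inside-outside function on which such a model can be based (an elliptical cylinder is approached as eps1 tends to 0 with eps2 = 1); the sizes and exponents below are illustrative, and the paper's 25 additional deformation parameters are not reproduced:

```python
import numpy as np

# Superquadric inside-outside function: F < 1 inside, F = 1 on the surface, F > 1 outside.
# Semi-axes (mm) and shape exponents are illustrative assumptions only.
def superquadric_F(x, y, z, a=(20.0, 15.0, 12.0), eps1=0.3, eps2=1.0):
    a1, a2, a3 = a
    xy = (np.abs(x / a1) ** (2.0 / eps2) + np.abs(y / a2) ** (2.0 / eps2)) ** (eps2 / eps1)
    return xy + np.abs(z / a3) ** (2.0 / eps1)

# Example: test whether a voxel centre at (5, 5, 3) mm lies inside the model surface.
print(superquadric_F(5.0, 5.0, 3.0) < 1.0)
```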

  20. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  1. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3D Structure Lines

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has been undergoing a revolution in the last few years. For such applications, a large number of images, acquired with high-resolution industrial cameras close to the bottom surfaces on some mobile platform, must be stitched into a wide-view single composite image. The conventional approach of stitching a panorama with an affine or homographic model suffers from serious problems due to poor texture and the out-of-focus blurring introduced by the limited depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by the 3D structure lines of the bridge bottom surfaces, which are extracted from 3D camera data. First, each image is initially aligned geometrically based on its rough position and orientation acquired with a laser range finder (LRF) and a high-precision incremental encoder, and the images are divided into several groups using these rough position and orientation data. Secondly, the 3D structure lines of the bridge bottom surfaces are extracted from the 3D point clouds acquired with 3D cameras; these impose strong additional constraints on the geometrical alignment of structure lines in adjacent images, enabling a position and orientation optimization within each group that increases local consistency. Thirdly, a homographic refinement between groups is applied to increase global consistency. Finally, we apply a multi-band blending algorithm to generate a wide-view single composite image as seamlessly as possible, which largely removes the luminance differences and color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of the proposed approach.

  2. Azimuth–opening angle domain imaging in 3D Gaussian beam depth migration

    Common-image gathers indexed by opening angle and azimuth at imaging points in 3D are the key inputs for amplitude-variation-with-angle and velocity analysis by tomography. Gaussian beam depth migration, which propagates each ray as a Gaussian beam and sums the contributions of all the individual beams to produce the wavefield, can overcome the multipath problem, image steep reflectors and, even more importantly, provide a convenient and efficient strategy for extracting azimuth–opening angle domain common-image gathers (ADCIGs) in 3D seismic imaging. We present a method for computing the azimuth and opening angle at imaging points, and thus outputting 3D ADCIGs, by computing the source and receiver wavefield direction vectors restricted to the effective region of the corresponding Gaussian beams. In this paper, the basic principle of Gaussian beam migration (GBM) is briefly introduced, and the technology and strategy used to produce ADCIGs with GBM are analyzed. Numerical tests and a field data application demonstrate that the azimuth–opening angle domain imaging method in 3D Gaussian beam depth migration is effective. (paper)

  3. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance during MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of the external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV (p-value of displacement ...). In conclusion, AV biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.

  4. Comparison of S3D Display Technology on Image Quality and Viewing Experiences: Active-Shutter 3D TV vs. Passive-Polarized 3DTV

    Yu-Chi Tai, PhD; Leigh Gongaware, BS; Andrew Reder, BS; John Hayes, PhD; James Sheedy, OD, PhD

    2014-01-01

    Background: Stereoscopic 3D TV systems convey depth perception to the viewer by delivering to each eye separately filtered images that represent two slightly different perspectives. Currently two primary technologies are used in S3D televisions: Active shutter systems, which use alternate frame sequencing to deliver a full-frame image to one eye at a time at a fast refresh rate, and Passive polarized systems, which superimpose the two half-frame left-eye and right-eye images at th...

  5. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Stankiewicz Agnieszka

    2016-06-01

    This paper presents signal processing aspects of the automatic segmentation of the retinal layers of the human eye. The paper draws attention to the problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality. Some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using the so-called B-scans obtained with the OCT Copernicus HR device.
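
    A minimal dynamic-programming sketch of the boundary-tracking idea behind graph-based layer segmentation: find a minimum-cost path, column by column, through a cost image derived from the vertical intensity gradient. This is a simplification of the cited graph-theory approach, and the B-scan below is synthetic:

```python
import numpy as np

# Trace one retinal-layer boundary through a B-scan by dynamic programming:
# strong vertical gradients receive low cost, and the path may move at most
# max_jump rows between neighbouring columns (A-scans).
def trace_boundary(bscan, max_jump=2):
    cost = -np.abs(np.gradient(bscan.astype(float), axis=0))   # strong edges -> low cost
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    acc[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # backtrack from the cheapest node in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return np.array(path[::-1])

bscan = np.random.rand(120, 200)        # placeholder for a real SD OCT B-scan
boundary_rows = trace_boundary(bscan)   # one boundary row index per A-scan
print(boundary_rows.shape)
```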

  6. A 3-D tomographic trajectory retrieval for the air-borne limb-imager GLORIA

    J. Ungermann

    2011-06-01

    Infrared limb sounding from aircraft can provide 2-D curtains of multiple trace gas species. However, conventional limb sounders view perpendicular to the aircraft axis and are unable to resolve the observed airmass along their line-of-sight. GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument able to adjust its horizontal view angle with respect to the aircraft flight direction from 45° to 135°. This will allow for tomographic measurements of mesoscale structures for a wide variety of atmospheric constituents.

    Many flights of the GLORIA instrument will not follow closed curves that allow measuring an airmass from all directions. Consequently, it is examined by means of simulations what results can be expected from the tomographic evaluation of measurements made during a straight flight. It is demonstrated that the achievable resolution and stability are enhanced compared to conventional retrievals. In a second step, it is shown that the incorporation of channels exhibiting different optical depths can greatly enhance the 3-D retrieval quality, enabling the exploitation of previously unused spectral samples.

    A second problem for tomographic retrievals is that advection, which can be neglected for conventional retrievals, plays an important role for the time-scales involved in a tomographic measurement flight. This paper presents a method to diagnose the effect of a time-varying atmosphere on a 3-D retrieval and demonstrates an effective way to compensate for effects of advection by incorporating wind-fields from meteorological datasets as a priori information.

  7. Construction of 3D Arrays of Cylindrically Hierarchical Structures with ZnO Nanorods Hydrothermally Synthesized on Optical Fiber Cores

    Weixuan Jing

    2014-01-01

    With ZnO nanorods hydrothermally synthesized on manually assembled arrays of optical fiber cores, 3D arrays of ZnO nanorod-based cylindrically hierarchical structures with a nominal pitch of 250 μm or 375 μm were constructed. Based on scanning electron microscopy micrographs and image processing operators of the MATLAB software, the 3D arrays of cylindrically hierarchical structures were quantitatively characterized. The values of the actual diameters, the actual pitches, and the parallelism errors suggest that the process capability of the manual assembly is sufficient and the quality of the 3D arrays of cylindrically hierarchical structures is acceptable. The values of characteristic parameters such as roughness, skewness, kurtosis, correlation length, and power spectral density show that the surface morphologies of the cylindrically hierarchical structures were not only affected significantly by the Zn2+ concentration of the growth solution but also anisotropic, owing to the different curvature radii of the optical fiber core in the side and front views.
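
    A minimal sketch of how the quoted surface statistics (roughness, skewness, kurtosis, correlation length, power spectral density) can be computed for a single height profile; the profile below is synthetic, not data from the nanorod surfaces:

```python
import numpy as np

# Surface-statistics sketch for one height profile h (mean-centred).
profile = np.cumsum(np.random.default_rng(2).normal(size=1024))  # synthetic rough profile
h = profile - profile.mean()

rq = np.sqrt(np.mean(h**2))                      # RMS roughness (Rq)
skewness = np.mean(h**3) / rq**3                 # Rsk
kurtosis = np.mean(h**4) / rq**4                 # Rku
acf = np.correlate(h, h, mode="full")[h.size - 1:]
acf /= acf[0]                                    # normalized autocorrelation
corr_len = int(np.argmax(acf < 1.0 / np.e))      # lag where ACF first drops below 1/e
psd = np.abs(np.fft.rfft(h))**2 / h.size         # (unscaled) one-sided power spectral density

print(f"Rq={rq:.2f}, Rsk={skewness:.2f}, Rku={kurtosis:.2f}, corr. length={corr_len} samples")
```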

  8. Informatics in radiology: Intuitive user interface for 3D image manipulation using augmented reality and a smartphone as a remote control.

    Nakata, Norio; Suzuki, Naoki; Hattori, Asaki; Hirai, Naoya; Miyamoto, Yukio; Fukuda, Kunihiko

    2012-01-01

    Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1. PMID:22556316

  9. CMOS array of photodiodes with electronic processing for 3D optical reconstruction

    Hornero, Gemma; Montane, Enric; Chapinal, Genis; Moreno, Mauricio; Herms, Atila

    2001-04-01

    It is well known that laser time-of-flight (TOF) and optical triangulation are the most useful optical techniques for distance measurement. The first is more suitable for large distances, since for short distance ranges high modulation frequencies of the laser diode (~200-500 MHz) would be needed. For these ranges, optical triangulation is simpler, as it is only necessary to read the projection of the laser point on a linear optical sensor, without any laser modulation. Laser triangulation is based on the rotation of the object. This motion shifts the projected point along the linear sensor, yielding 3D information through a full readout of the linear sensor at each angular position. Alternatively, a hybrid method of triangulation and TOF can be implemented. In this case, synchronized scanning of a laser beam over the object results in different arrival times of the light at each pixel. The 3D information is carried by these delays, and only a single readout of the linear sensor is needed. In this work we present the design of two different linear arrays of photodiodes in CMOS technology, the first based on optical triangulation and the second based on this hybrid method (TFO). In contrast to PSDs (Position Sensitive Devices) and CCDs, CMOS technology can include, on the same chip, the photodiodes and the control and processing electronics that in the other cases would have to be implemented with external microcontrollers.
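
    A minimal sketch of the basic triangulation relation underlying the first sensor design, for a laser beam parallel to the optical axis at baseline b: a spot imaged at position u on the linear array corresponds to a distance z = f·b/u. The focal length, baseline and spot positions below are illustrative, not parameters of the described chip:

```python
# Optical triangulation with a laser beam parallel to the optical axis at baseline b:
# the spot at distance z images at u = f*b/z on the linear sensor, so z = f*b/u.
# All values are illustrative assumptions.
def triangulation_distance(u_mm, f_mm=16.0, baseline_mm=50.0):
    """Distance to the object for a laser spot imaged at u_mm from the optical axis."""
    return f_mm * baseline_mm / u_mm

for u in (0.5, 1.0, 2.0):          # example spot positions on the sensor (mm)
    print(f"u = {u} mm  ->  z = {triangulation_distance(u):.0f} mm")
```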

  10. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they describe only 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering; we focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  11. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, where the 3D geometry is complex. Contemporary geo-databases include 3D street-level objects, which demand frequent updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs often faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level that combines MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured with an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  12. Full data utilization in PVI [positron volume imaging] using the 3D radon transform

    An algorithm is described for three-dimensional (3D) image reconstruction in positron volume imaging (PVI) using the inversion of the 3D Radon transform (RT) for a truncated cylindrical detector geometry. This single-pass reconstructed image has better statistical noise properties than images formed by RT inversion from complete XT projections, but only for some detector geometries is it significantly better. Monte Carlo simulations were used to study the statistical noise in images reconstructed using the new algorithm. The inherent difference between the axial and the transaxial statistical noise in images reconstructed from truncated detectors is noted and is found to increase when oblique events are included with this new algorithm. (author)

  13. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    2001-01-01

    A mutual-information-based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR abdominal images. The Parzen Windows Density Estimation (PWDE) method is adopted to calculate the mutual information between the CT and MR abdominal images. By maximizing the MI between the CT and MR volume images, their overlap is maximized, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdominal CT and MRI data are used as the experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically with this approach, without any prior processing such as segmentation or feature extraction, but with the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
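
    A minimal sketch of mutual information computed from a joint histogram of two volumes; this is a histogram-based simplification of the Parzen-window estimate used in the paper, and the volumes below are synthetic:

```python
import numpy as np

# Mutual information between two images/volumes from their joint intensity histogram.
# A histogram-based simplification; the cited method uses Parzen window estimation.
def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

a = np.random.rand(64, 64, 64)                      # placeholder "CT" volume
b = 0.7 * a + 0.3 * np.random.rand(64, 64, 64)      # placeholder, partly dependent "MR" volume
print("MI:", mutual_information(a, b))
```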

  14. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and its associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rocks and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even between closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets), but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between the alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of the alpha particles, but this could be time consuming. Here we describe a new method for rapid high-resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  15. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    Murienne, Barbara J.; Nguyen, Thao D.

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification (and therefore small depth of field) or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single-camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and the meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than for 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the error in horizontal displacement was larger for 2D-DIC than for 3D-DIC. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were within the range of the 3D-DIC variability. These findings suggest that 2D-DIC may be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.
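
    As a minimal illustration of what 2D-DIC computes, the sketch below tracks a single subset between a reference and a deformed image by maximizing zero-normalized cross-correlation over an integer-pixel search window; real DIC adds sub-pixel interpolation and subset shape functions, and the images below are synthetic:

```python
import numpy as np

# Integer-pixel 2D-DIC sketch: locate a reference subset in the deformed image
# by maximizing zero-normalized cross-correlation (ZNCC) over a search window.
def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a**2) * np.sum(b**2)) + 1e-12))

def track_subset(ref, cur, center, half=10, search=5):
    r0, c0 = center
    subset = ref[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    best, best_dr, best_dc = -2.0, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = cur[r0 + dr - half:r0 + dr + half + 1,
                       c0 + dc - half:c0 + dc + half + 1]
            score = zncc(subset, cand)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc          # subset displacement in pixels

rng = np.random.default_rng(3)
ref = rng.random((200, 200))                        # synthetic speckle reference image
cur = np.roll(ref, (2, 3), axis=(0, 1))             # "deformed" image: 2-, 3-pixel shift
print(track_subset(ref, cur, center=(100, 100)))    # expected (2, 3)
```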

  16. Measurement of facial soft tissues thickness using 3D computed tomographic images

    To evaluate the accuracy and reliability of a program for measuring facial soft tissue thickness on 3D computed tomographic images, by comparison with direct measurement. One cadaver was scanned with a helical CT scanner with 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed with a 1.5 mm reconstruction interval and the images were transferred to a personal computer. Facial soft tissue thicknesses were measured in the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed above the cut surface. Pictures of the facial soft tissues were then taken with a high-resolution digital camera, and the measurements were made in the photographic images and repeated ten times. A repeated-measures analysis of variance was used to compare and analyze the measurements from the two methods. Comparisons by area were analyzed with the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those using the 3D images (p > 0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The developed software program for measuring facial soft tissue thickness from 3D images was accurate enough to allow facial soft tissue thickness to be measured more easily in forensic science and anthropology

  17. 3D nonrigid medical image registration using a new information theoretic measure

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; it is minimized using the limited-memory BFGS optimization method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed within the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance of the method, with ten 3D CT images for each 4D CT data set covering an entire respiration cycle. The results were compared with the normalized cross correlation and the mutual information methods and show a slight but real improvement in registration accuracy.

  18. 3D nonrigid medical image registration using a new information theoretic measure

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; it is minimized using the limited-memory BFGS optimization method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed within the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance of the method, with ten 3D CT images for each 4D CT data set covering an entire respiration cycle. The results were compared with the normalized cross correlation and the mutual information methods and show a slight but real improvement in registration accuracy. (paper)

  19. 2D and 3D visualization methods of endoscopic panoramic bladder images

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation and surgical planning.
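
    A minimal sketch of the forward Hammer-Aitoff equal-area projection used for the texture mapping step, with longitude and latitude in radians (the grid values below are illustrative):

```python
import numpy as np

# Forward Hammer-Aitoff equal-area projection: sphere coordinates (lon, lat) in
# radians -> planar texture coordinates (x, y). Grid below is an example only.
def hammer_aitoff(lon, lat):
    denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
    y = np.sqrt(2.0) * np.sin(lat) / denom
    return x, y

lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, 9),
                       np.linspace(-np.pi / 2, np.pi / 2, 5))
u, v = hammer_aitoff(lon, lat)
print(u.shape, v.shape)
```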

  20. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    The accurate 3D documentation of architecture and heritage is becoming very common and is required in many application contexts. The potential of the image-based approach is nowadays well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, that could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experience, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and meeting several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).