WorldWideScience

Sample records for 3d optical imaging

  1. 3D integral imaging with optical processing

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  2. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
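    At second order, the SOFI computation described above reduces, per pixel, to the temporal variance (the second cumulant) of the frame stack; a minimal sketch under that simplification (array shapes and the simulated blinking emitter are illustrative, not the paper's data):

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: per-pixel temporal variance
    (the 2nd cumulant) of a frame stack with shape (T, H, W)."""
    mean = stack.mean(axis=0)
    return ((stack - mean) ** 2).mean(axis=0)

# A blinking emitter stands out even when its mean brightness equals
# a steady background, because only fluctuations contribute.
rng = np.random.default_rng(0)
stack = np.ones((1000, 8, 8))                    # steady background
stack[:, 4, 4] = rng.integers(0, 2, 1000) * 2.0  # blinking pixel, mean ~1
img = sofi2(stack)
print(img[4, 4] > img[0, 0])  # blinking pixel has the higher cumulant
```

    Higher SOFI orders replace the variance with higher-order cumulants (and, as in the multiplane scheme above, cross-cumulants between planes), which is what sharpens resolution beyond the diffraction limit.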

  3. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  4. Progresses in 3D integral imaging with optical processing

    Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics. University of Valencia. Calle Doctor Moliner 50, E46 100, Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray optics point of view, of the optical effects and interaction with the observer of integral imaging systems.

  5. Optical-CT imaging of complex 3D dose distributions

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with potential to address this imbalance by providing high-resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a first-generation optical-CT scanner capable of high-resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions, including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable-attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high-resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy 3% at high dose levels), indicating further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.

  6. Confocal Image 3D Surface Measurement with Optical Fiber Plate

    WANG Zhao; ZHU Sheng-cheng; LI Bing; TAN Yu-shan

    2004-01-01

    A whole-field 3D surface measurement system for semiconductor wafer inspection is described. The system consists of an optical fiber plate, which can split the light beam into N² sub-beams to realize whole-field inspection. A special prism is used to separate the illumination light and signal light. This setup is characterized by high precision, high speed and simple structure.

  7. Joint Applied Optics and Chinese Optics Letters Feature Introduction: Digital Holography and 3D Imaging

    Ting-Chung Poon; Changhe Zhou; Toyohiko Yatagai; Byoungho Lee; Hongchen Zhai

    2011-01-01

    This feature issue is the fifth installment on digital holography since its inception four years ago. The last four issues have been published after the conclusion of each Topical Meeting "Digital Holography and 3D Imaging (DH)." However, this feature issue includes a new key feature: a Joint Applied Optics and Chinese Optics Letters Feature Issue. The DH Topical Meeting is the world's premier forum for disseminating the science and technology geared towards digital holography and 3D information processing. Since the meeting's inception in 2007, it has steadily and healthily grown to 130 presentations this year, held in Tokyo, Japan, in May 2011.

  8. 3D photoacoustic imaging

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
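    The singular-value analysis described above can be illustrated with a toy imaging operator; the 15 × 64 size echoes the 15-element detection scheme, but the matrix itself and the noise-floor threshold are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy imaging operator H: 15 detector measurements of a 64-voxel
# object, standing in for the sparse parallel detection scheme.
H = rng.standard_normal((15, 64))

# Singular values of the imaging operator (no need for U, V here).
s = np.linalg.svd(H, compute_uv=False)

# Count "measurable" singular vectors: those above an assumed
# noise-floor threshold relative to the largest singular value.
threshold = 0.01 * s[0]
n_measurable = int((s > threshold).sum())
print(n_measurable)  # bounded by the 15 measurements, however many voxels
```

    The number of measurements caps the number of usable singular vectors, which is why only sparse scenes (a few points or lines) can be faithfully reconstructed, and why an l1-norm solver that prefers sparse solutions outperforms plain algebraic reconstruction here.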

  9. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry. In optical close-range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  10. Building 3D aerial image in photoresist with reconstructed mask image acquired with optical microscope

    Chou, C. S.; Tang, Y. P.; Chu, F. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

    Calibration of mask images on wafer becomes more important as features shrink. Two major types of metrology have been commonly adopted. One is to measure the mask image with a scanning electron microscope (SEM) to obtain the contours on the mask and then simulate the wafer image with an optical simulator. The other is to use an optical imaging tool, the Aerial Image Measurement System (AIMS™), to emulate the image on wafer. However, the SEM method is indirect. It just gathers planar contours on a mask with no consideration of optical characteristics such as 3D topography structures. Hence, the image on wafer is not predicted precisely. Though the AIMS™ method can directly measure the intensity at the near field of a mask, the image measured this way is not quite the same as that on the wafer due to reflections and refractions in the films on the wafer. Here, a new approach is proposed to emulate the image on wafer more precisely. The behavior of plane waves with different oblique angles is well known inside and between planar film stacks. In an optical microscope imaging system, plane waves can be extracted from the pupil plane with a coherent point source of illumination. Once plane waves with a specific coherent illumination are analyzed, the partially coherent component of waves can be reconstructed with a proper transfer function, which includes lens aberration, polarization, and reflection and refraction in films. With this new method, the near light field of a mask can be transferred into an image on wafer without the disadvantages of indirect SEM measurement, such as neglecting the effects of mask topography and the reflections and refractions in the wafer film stacks. Furthermore, with this precise latent image, a separate resist model also becomes more achievable.

  11. Deformation analysis of 3D tagged cardiac images using an optical flow method

    Gorman Robert C

    2010-03-01

    Full Text Available Abstract Background This study proposes and validates a method of measuring 3D strain in myocardium using a 3D Cardiovascular Magnetic Resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods Initially, a 3D tag MR sequence was developed and the parameters of the sequence and 3D OFM were optimized using phantom images with simulated deformation. This method was then validated in vivo and utilized to quantify normal sheep left ventricular function. Results Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square error (RMS) between the estimated and known displacements in the x (RMSx = 0.62 pixels (0.43 mm)), y (RMSy = 0.64 pixels (0.45 mm)) and z (RMSz = 0.68 pixels (1 mm)) directions, respectively. In-vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motions resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion A novel method was developed to measure 3D LV wall deformation rapidly with high in-plane and through-plane resolution from one 3D cine acquisition.
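    The sub-pixel RMS figures quoted above come from comparing estimated and known phantom displacement fields component by component; a sketch of that metric (field size and noise level below are illustrative, not the study's):

```python
import numpy as np

def rms_error(estimated, known):
    """Per-component root-mean-square error between an estimated and
    a known displacement field, both of shape (..., 3) holding the
    x, y, z displacements in pixels."""
    diff = estimated - known
    return np.sqrt((diff ** 2).mean(axis=tuple(range(diff.ndim - 1))))

# Known phantom displacement field plus small estimation noise.
rng = np.random.default_rng(2)
known = rng.uniform(-2, 2, size=(32, 32, 32, 3))
estimated = known + rng.normal(0.0, 0.6, size=known.shape)
rms_x, rms_y, rms_z = rms_error(estimated, known)
print(rms_x < 1.0, rms_y < 1.0, rms_z < 1.0)  # sub-pixel, as in the phantom study
```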

  12. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    2001-01-01

    A new vision coordinate measuring system, a single-camera 3-D coordinate measuring system based on optical probe imaging, is presented, and a new idea in vision coordinate measurement is proposed. A linear model is deduced which can distinguish the six degrees of freedom of the optical probe to realize coordinate measurement of the object surface. The effects of some factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is valid.

  13. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.

  14. Development of scanning laser sensor for underwater 3D imaging with the coaxial optics

    Ochimizu, Hideaki; Imaki, Masaharu; Kameyama, Shumpei; Saito, Takashi; Ishibashi, Shoujirou; Yoshida, Hiroshi

    2014-06-01

    We have developed a scanning laser sensor for underwater 3-D imaging that has a wide scanning angle of 120° (horizontal) × 30° (vertical) in a compact size of 25 cm diameter and 60 cm length. Our system has a dome lens and coaxial optics to realize both the wide scanning angle and the compactness. The system also features a sensitivity time control (STC) circuit, in which the receiving gain is increased according to the time of flight. The STC circuit helps detect small signals by suppressing the unwanted signals backscattered by marine snow. We demonstrated the system performance in a pool and confirmed 3-D imaging at a distance of 20 m. Furthermore, the system was mounted on an autonomous underwater vehicle (AUV) and used to demonstrate seafloor mapping at a depth of 100 m in the ocean.
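    The STC behaviour described, receiver gain growing with time of flight, can be sketched as a time-gain curve. The sensor's actual gain law is not given in the abstract, so the spherical-spreading-plus-absorption form below is an assumption, with illustrative parameter values:

```python
def stc_gain(t, c=1500.0, alpha_db_per_m=0.1):
    """Illustrative sensitivity time control (STC) gain for a
    round-trip time of flight t (seconds): compensates spherical
    spreading (amplitude ~ 1/r) plus two-way absorption of
    alpha dB/m in water. c is the sound speed in m/s."""
    r = 0.5 * c * t                             # one-way range in metres
    return r * 10 ** (alpha_db_per_m * 2 * r / 20.0)

# Late (far) echoes are boosted relative to early returns, which
# suppresses strong backscatter from nearby marine snow.
g_near = stc_gain(2 * 5 / 1500.0)               # echo from 5 m
g_far = stc_gain(2 * 20 / 1500.0)               # echo from 20 m
print(g_far > g_near)                           # gain grows with range
```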

  15. Pico-projector-based optical sectioning microscopy for 3D chlorophyll fluorescence imaging of mesophyll cells

    Chen, Szu-Yu; Hsu, Yu John; Yeh, Chia-Hua; Chen, S.-Wei; Chung, Chien-Han

    2015-03-01

    A pico-projector-based optical sectioning microscope (POSM) was constructed using a pico-projector to generate structured illumination patterns. A net rate of 5.8 × 10⁶ pixels/s and sub-micron spatial resolution in three dimensions (3D) were achieved. Based on the pico-projector's flexibility in pattern generation, the characteristics of POSM with different modulation periods and at different imaging depths were measured and discussed. With the application of different modulation periods, 3D chlorophyll fluorescence imaging of mesophyll cells was carried out in freshly plucked leaves of four species without sectioning or staining. For each leaf, an average penetration depth of 120 μm was achieved. By increasing the modulation period with imaging depth, optical sectioning images can be obtained with a compromise between axial resolution and signal-to-noise ratio. After ∼30 min of imaging the same area, photodamage was hardly observed. Taking advantage of the high speed and low photodamage of POSM, the dynamic fluorescence responses to temperature changes were investigated under three different treatment temperatures. The three embedded blue, green and red light-emitting diode light sources were used to observe the responses of the leaves under different excitation wavelengths.
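    Structured-illumination optical sectioning of this kind is commonly computed from three grid images shifted by one third of a period (the square-law demodulation of Neil et al.); whether POSM uses exactly this three-phase scheme is an assumption here, and the scalar demo values are made up:

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Optically sectioned image from three structured-illumination
    acquisitions with grid phases 0, 2*pi/3, 4*pi/3. Out-of-focus
    (unmodulated) light cancels in the pairwise differences."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

phi = 0.7                       # arbitrary local grid phase
B, S = 5.0, 2.0                 # out-of-focus background, in-focus signal
imgs = [B + S * (1 + np.cos(phi + 2 * np.pi * k / 3)) for k in range(3)]
val = optical_section(*imgs)
print(round(float(val), 3))     # ≈ 2.121 * S, independent of B and phi
```

    The recovered value scales with the in-focus signal only, which is the sectioning effect; the depth-dependent choice of modulation period discussed above trades the strength of this demodulation against noise.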

  16. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm, with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
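    The full-width at tenth maximum (FWTM) used above to characterize the displacement histograms can be estimated directly from a histogram; a sketch (bin count and the Gaussian test distribution are illustrative, not the study's data):

```python
import numpy as np

def fwtm(values, bins=200):
    """Full-width at tenth maximum of the distribution of `values`,
    estimated from a histogram: the span of bin centers whose counts
    reach at least one tenth of the peak count."""
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[counts >= counts.max() / 10.0]
    return above.max() - above.min()

# For a Gaussian, FWTM = 2*sqrt(2*ln(10))*sigma, about 4.29*sigma.
rng = np.random.default_rng(3)
sigma = 0.1
samples = rng.normal(1.2, sigma, 100_000)   # displacements around 1.2 cm
w = fwtm(samples)
print(w)                                    # close to 4.29 * sigma
```

    A narrow FWTM, as in the phantom results above, indicates that nearly all voxel displacement estimates agree; a wide or multi-peaked histogram would flag nonuniform motion like that seen in the oesophageal cases.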

  17. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  18. A 3D approach to reconstruct continuous optical images using lidar and MODIS

    Huang, HuaGuo; Lian, Jun

    2015-01-01

    Background: Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs. Methods: To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations with the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, Da Xing'An Ling Mountain in Inner Mongolia, China. The canopy height model (CHM) from lidar data was used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter at breast height (DBH), lowest branch height and leaf area index (LAI). Series of Landsat images were used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated against available satellite images. Results: Evaluations of spatial texture, spectral values and directional reflectance showed comparable results. Conclusions: The study provides a proof-of-concept approach to linking lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolution image reconstruction in forest-dominated areas.

  19. A 3D approach to reconstruct continuous optical images using lidar and MODIS

    HuaGuo Huang

    2015-06-01

    Full Text Available Background Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs. Methods To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations with the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, DaXing'AnLing Mountain in Inner Mongolia, China. The canopy height model (CHM) from lidar data was used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter at breast height (DBH), lowest branch height and leaf area index (LAI). Series of Landsat images were used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated against available satellite images. Results Evaluations of spatial texture, spectral values and directional reflectance showed comparable results. Conclusions The study provides a proof-of-concept approach to linking lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolution image reconstruction in forest-dominated areas.

  20. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Paoli Alessandro

    2011-02-01

    Full Text Available Abstract Background A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the loss of accuracy in transferring CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information into the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results A clinical case involving a fully edentulous patient was used as a test case to assess the accuracy of the various steps concurring in manufacturing surgical guides. In particular, a surgical guide was designed to place implants in the bone structure of the patient. The analysis of the results allowed the clinician to monitor all the errors occurring at each step of manufacturing the physical templates. Conclusions The use of an optical scanner, which has higher resolution and accuracy than CT scanning, proved to be a valid support to control the precision of the various physical models adopted and to point out possible error sources. A case study regarding a fully edentulous patient confirmed the feasibility of the proposed methodology.

  1. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier optics viewing angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Display aspect simulation using the viewing angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
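    The crosstalk measured here is conventionally defined as the black-level-corrected leakage of the unintended view's luminance relative to the intended view; a minimal sketch of that ratio (the luminance values in the example are made up for illustration):

```python
def crosstalk_percent(l_bw, l_bb, l_ww):
    """Stereoscopic crosstalk for one eye, in percent.

    l_bw: luminance seen by this eye when it is shown black and the
          other eye is shown white (leakage measurement)
    l_bb: luminance when both eyes are shown black (black level)
    l_ww: luminance when both eyes are shown white (reference level)
    """
    return 100.0 * (l_bw - l_bb) / (l_ww - l_bb)

# Example: 6 cd/m2 of measured leakage on a display spanning
# 0.5 to 200 cd/m2 gives crosstalk of roughly 2.8%.
print(crosstalk_percent(l_bw=6.0, l_bb=0.5, l_ww=200.0))
```

    Mapping this ratio at every point of the display surface, as done with the imaging luminance meter above, is what reveals the crosstalk inhomogeneity and the effect of local scratches and marks.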

  2. Analytical models of icosahedral shells for 3D optical imaging of viruses

    Jafarpour, Aliakbar

    2014-01-01

    A modulated icosahedral shell with an inclusion is a concise description of many viruses, including recently discovered large double-stranded DNA viruses. Many X-ray scattering patterns of such viruses show major polygonal fringes, which can be reproduced in image reconstruction with a homogeneous icosahedral shell. A key question regarding a low-resolution reconstruction is how to introduce further changes to the 3D profile in an efficient way with only a few parameters. Here, we derive and compile different analytical models of such an object with consideration of practical optical setups and typical structures of such viruses. The benefits of such models include (1) inherent filtering and suppression of different numerical errors of a discrete grid, (2) providing a concise and meaningful set of descriptors for feature extraction in high-throughput classification/sorting and higher-resolution cumulative reconstructions, and (3) disentangling (physical) resolution from (numerical) discretization step and having a vector ...

  3. Editorial: 3DIM-DS 2015: Optical image processing in the context of 3D imaging, metrology, and data security

    Alfalou, Ayman

    2017-02-01

    Following the first International Symposium on 3D Imaging, Metrology, and Data Security (3DIM-DS), held in Shenzhen in September 2015, this special issue gathers a series of articles dealing with the main topics discussed during the symposium. These topics highlighted the importance of studying complex data treatment systems and intensive calculations designed for high-dimensional imaging and metrology, for which high image quality and high transmission speed become critical issues in a number of technological applications. A second purpose was to celebrate the International Year of Light by emphasizing the important role of optics in current information processing systems.

  4. 3D Curvelet-Based Segmentation and Quantification of Drusen in Optical Coherence Tomography Images

    M. Esmaeili

    2017-01-01

    Full Text Available Spectral-Domain Optical Coherence Tomography (SD-OCT) is a widely used interferometric diagnostic technique in ophthalmology that provides novel in vivo information on depth-resolved inner and outer retinal structures. This imaging modality can assist clinicians in monitoring the progression of Age-related Macular Degeneration (AMD) by providing high-resolution visualization of drusen. Quantitative tools for assessing drusen volume that are indicative of AMD progression may lead to appropriate metrics for selecting treatment protocols. To address this need, a fully automated algorithm was developed to segment drusen area and volume from SD-OCT images. The proposed algorithm consists of three parts: (1) preprocessing, which includes creating a binary mask and removing the possibly highly reflective posterior hyaloid, used in accurate detection of the inner segment/outer segment (IS/OS) junction layer and Bruch's membrane (BM) retinal layers; (2) coarse segmentation, in which the 3D curvelet transform and graph theory are employed to obtain the candidate drusenoid regions; (3) fine segmentation, in which morphological operators are used to remove falsely extracted elongated structures and obtain the refined segmentation results. The proposed method was evaluated on 20 publicly available volumetric scans acquired using a Bioptigen spectral-domain ophthalmic imaging system. The average true positive and false positive volume fractions (TPVF and FPVF) for the segmentation of drusenoid regions were found to be 89.15% ± 3.76% and 0.17% ± 0.18%, respectively.
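    The TPVF and FPVF figures reported above compare a binary segmentation against a reference volume; a sketch using the common definitions of those volume fractions (the toy arrays below are illustrative, not the study's data):

```python
import numpy as np

def tpvf_fpvf(seg, ref):
    """True/false positive volume fractions (percent) of a binary
    segmentation `seg` against a binary reference `ref`.

    TPVF: fraction of the reference volume correctly segmented.
    FPVF: falsely segmented volume as a fraction of the volume
          outside the reference."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tpvf = 100.0 * (seg & ref).sum() / ref.sum()
    fpvf = 100.0 * (seg & ~ref).sum() / (~ref).sum()
    return tpvf, fpvf

ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:6, 2:6, 2:6] = True            # 64-voxel reference drusen region
seg = np.zeros_like(ref)
seg[3:6, 2:6, 2:6] = True            # segmentation misses one slab
tpvf, fpvf = tpvf_fpvf(seg, ref)
print(tpvf, fpvf)                    # -> 75.0 0.0
```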

  5. Multimodal photoacoustic and optical coherence tomography scanner using an all optical detection scheme for 3D morphological skin imaging.

    Zhang, Edward Z; Povazay, Boris; Laufer, Jan; Alex, Aneesh; Hofer, Bernd; Pedley, Barbara; Glittenberg, Carl; Treeby, Bradley; Cox, Ben; Beard, Paul; Drexler, Wolfgang

    2011-08-01

    A noninvasive, multimodal photoacoustic and optical coherence tomography (PAT/OCT) scanner for three-dimensional (3D) in vivo skin imaging is described. The system employs an integrated, all-optical detection scheme for both modalities in backward mode, utilizing a shared 2D optical scanner with a field of view of ~13 × 13 mm². The photoacoustic waves were detected using a Fabry-Perot polymer film ultrasound sensor placed on the surface of the skin. The sensor is transparent in the spectral range 590-1200 nm. This permits the photoacoustic excitation beam (670-680 nm) and the OCT probe beam (1050 nm) to be transmitted through the sensor head and into the underlying tissue, thus providing a backward-mode imaging configuration. The respective OCT and PAT axial resolutions were 8 and 20 µm, and the lateral resolutions were 18 and 50-100 µm. The system provides greater penetration depth than previous combined PA/OCT devices due to the longer wavelength of the OCT beam (1050 nm rather than 829-870 nm) and by operating in the tomographic rather than the optical-resolution mode of photoacoustic imaging. Three-dimensional in vivo images of the vasculature and the surrounding tissue micro-morphology in murine and human skin were acquired. These studies demonstrated the complementary contrast and tissue information provided by each modality for high-resolution 3D imaging of vascular structures to depths of up to 5 mm. Potential applications include characterizing skin conditions such as tumors, vascular lesions, soft tissue damage such as burns and wounds, inflammatory conditions such as dermatitis, and other superficial tissue abnormalities.

  6. 3D automatic quantification applied to optically sectioned images to improve microscopy analysis

    JE Diaz-Zamboni

    2009-08-01

    Full Text Available New fluorescence microscopy techniques, such as confocal or digital deconvolution microscopy, make it easy to obtain three-dimensional (3D) information from specimens. However, there are few 3D quantification tools that allow extraction of information from these volumes, so the amount of information acquired by these techniques is difficult to manipulate and analyze manually. The present study describes a model-based method which, for the first time, shows 3D visualization and quantification of fluorescent apoptotic body signals from optical serial sections of porcine hepatocyte spheroids, correlating them to their morphological structures. The method consists of an algorithm that counts apoptotic bodies in a spheroid structure and extracts information from them, such as their centroids in Cartesian and radial coordinates relative to the spheroid centre, and their integrated intensity. 3D visualization of the extracted information allowed us to quantify the distribution of apoptotic bodies in three different zones of the spheroid.

  7. Combining 3D optical imaging and dual energy absorptiometry to measure three compositional components

    Malkov, Serghei; Shepherd, John

    2014-02-01

    We report on the design of a technique combining 3D optical imaging and dual-energy absorptiometry body scanning to estimate local body-area composition in three compartments. Dual-energy attenuation and body shape measures are used together to solve for the three compositional tissue thicknesses: water, lipid, and protein. We designed phantoms with tissue-like properties as our reference standards for calibration purposes. The calibration was created by fitting phantom values using non-linear regression of quadratic and truncated polynomials. Dual-energy measurements were performed on tissue-mimicking phantoms using a bone densitometer unit. The phantoms were made of materials shown to have x-ray attenuation properties similar to those of the biological compositional compartments. The components for the solid phantom were tested, and their high-energy/low-energy attenuation ratios correspond well to water, lipid, and protein in the densitometer x-ray region. The three-dimensional body shape was reconstructed from the depth maps generated by a Microsoft Kinect for Windows. We used the open-source Point Cloud Library and freeware software to produce dense point clouds. Accuracy and precision of compositional and thickness measures were calculated, and the error contributions of the two modalities were estimated. The preliminary phantom composition and shape measurements demonstrate the feasibility of the proposed method.
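
    As a toy illustration of the three-compartment idea — two attenuation measurements at different beam energies plus the total thickness from the 3D optical shape give three equations for the three unknown thicknesses — here is a linearized sketch. The paper itself uses a non-linear calibration, and the attenuation coefficients below are invented placeholders, not calibrated values:

```python
import numpy as np

# Per-centimetre linear attenuation coefficients at two beam energies
# (illustrative placeholders, not calibrated values):
#                   water  lipid  protein
mu_low  = np.array([0.35, 0.25, 0.45])
mu_high = np.array([0.20, 0.18, 0.30])

def solve_thicknesses(A_low, A_high, total):
    """Solve for (water, lipid, protein) thicknesses in cm from the two
    dual-energy attenuations and the total thickness obtained from the
    3D optical shape measurement."""
    M = np.vstack([mu_low, mu_high, np.ones(3)])
    return np.linalg.solve(M, np.array([A_low, A_high, total]))

# Round trip: 2 cm water, 1 cm lipid, 0.5 cm protein.
t = np.array([2.0, 1.0, 0.5])
t_hat = solve_thicknesses(mu_low @ t, mu_high @ t, t.sum())
# t_hat recovers [2.0, 1.0, 0.5]
```

    The key point is that the optical shape measurement supplies the third equation (the total-thickness row of ones) that dual-energy attenuation alone cannot provide.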

  8. Multicolor 3D super-resolution imaging by quantum dot stochastic optical reconstruction microscopy.

    Xu, Jianquan; Tehrani, Kayvan F; Kner, Peter

    2015-03-24

    We demonstrate multicolor three-dimensional super-resolution imaging with quantum dots (QSTORM). By combining quantum dot asynchronous spectral blueing with stochastic optical reconstruction microscopy and adaptive optics, we achieve three-dimensional imaging with 24 nm lateral and 37 nm axial resolution. By pairing two short-pass filters with two appropriate quantum dots, we are able to image single blueing quantum dots on two channels simultaneously, enabling multicolor imaging with high photon counts.

  9. High resolution 3D imaging of living cells with sub-optical wavelength phonons

    Pérez-Cota, Fernando; Smith, Richard J.; Moradi, Emilia; Marques, Leonel; Webb, Kevin F.; Clark, Matt

    2016-12-01

    Label-free imaging of living cells below the optical diffraction limit poses great challenges for optical microscopy. Biologically relevant structural information remains below the Rayleigh limit and beyond the reach of conventional microscopes. Super-resolution techniques are typically based on the non-linear and stochastic response of fluorescent labels, which can be toxic and interfere with cell function. In this paper we present, for the first time, imaging of live cells using sub-optical-wavelength phonons. The axial imaging resolution of our system is determined by the acoustic wavelength (λa = λprobe/2n) and not by the NA of the optics, allowing sub-optical-wavelength acoustic sectioning of samples using the time of flight. The transverse resolution is currently limited to the optical spot size. The contrast mechanism is largely determined by the mechanical properties of the cells and requires no additional contrast agent, stain or label to image the cell structure. The ability to breach the optical diffraction limit to image living cells acoustically promises to bring a new suite of imaging technologies to bear in answering exigent questions in cell biology and biomedicine.
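
    The quoted relation λa = λprobe/2n makes the axial resolution easy to estimate. A quick back-of-the-envelope calculation with assumed numbers (not the authors' actual parameters):

```python
# Axial resolution of phonon-based sectioning is set by the acoustic
# wavelength, lambda_a = lambda_probe / (2 * n), not by the optical NA.
lambda_probe = 780e-9   # probe laser wavelength in m (assumed)
n = 1.38                # refractive index of the cell (typical soft matter)
lambda_a = lambda_probe / (2 * n)
print(f"acoustic wavelength: {lambda_a * 1e9:.0f} nm")  # prints "acoustic wavelength: 283 nm"
```

    With these assumed values the acoustic sectioning scale is well below the probe's optical wavelength, which is the point of the technique.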

  10. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available to medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desired after the scan has...... been completed. This allows for precise measurements of organ dimensions and makes the scan more operator-independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... Field II simulations and measurements with the ultrasound research scanner SARUS and a 3.5 MHz 1024-element 2-D transducer array. In all investigations, 3-D synthetic aperture imaging achieved a smaller main-lobe, lower sidelobes, higher contrast, and better signal-to-noise ratio than parallel...

  11. Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI).

    Dertinger, T; Colyer, R; Iyer, G; Weiss, S; Enderlein, J

    2009-12-29

    Super-resolution optical microscopy is a rapidly evolving area of fluorescence microscopy with a tremendous potential for impacting many fields of science. Several super-resolution methods have been developed over the last decade, all capable of overcoming the fundamental diffraction limit of light. We present here an approach for obtaining subdiffraction limit optical resolution in all three dimensions. This method relies on higher-order statistical analysis of temporal fluctuations (caused by fluorescence blinking/intermittency) recorded in a sequence of images (movie). We demonstrate a 5-fold improvement in spatial resolution by using a conventional wide-field microscope. This resolution enhancement is achieved in iterative discrete steps, which in turn allows the evaluation of images at different resolution levels. Even at the lowest level of resolution enhancement, our method features significant background reduction and thus contrast enhancement and is demonstrated on quantum dot-labeled microtubules of fibroblast cells.
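
    The simplest instance of the cumulant idea is second order: the per-pixel variance of the temporal fluctuations, which vanishes for any non-blinking background. A minimal sketch on toy data (illustrative, not the authors' implementation):

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: per-pixel 2nd cumulant (variance) of the
    temporal fluorescence fluctuations in an image stack of shape (T, Y, X)."""
    delta = stack - stack.mean(axis=0)   # remove the temporal mean per pixel
    return (delta ** 2).mean(axis=0)     # 2nd-order cumulant = variance

# Toy demo: one blinking "emitter" over a constant background.
rng = np.random.default_rng(0)
stack = np.ones((500, 8, 8))                   # constant, non-blinking background
stack[:, 4, 4] += rng.integers(0, 2, 500)      # pixel (4, 4) blinks on/off
img = sofi2(stack)
# The blinking pixel dominates; the constant background cumulant is exactly 0.
```

    Higher-order SOFI replaces the variance with higher-order cumulants, which is what yields the iterative resolution-enhancement steps mentioned in the abstract.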

  12. Optically clearing tissue as an initial step for 3D imaging of core biopsies to diagnose pancreatic cancer

    Das, Ronnie; Agrawal, Aishwarya; Upton, Melissa P.; Seibel, Eric J.

    2014-02-01

    The pancreas is a deeply seated organ requiring endoscopically or radiologically guided biopsies for tissue diagnosis. Current approaches include either fine needle aspiration (FNA) biopsy for cytologic evaluation, or core needle biopsies (CBs), which comprise tissue cores (L = 1-2 cm, D = 0.4-2.0 mm) for examination by brightfield microscopy. Between procurement and visualization, biospecimens must be processed, sectioned, and mounted on glass slides for 2D visualization. Optical information about the native tissue state can be lost with each procedural step, and a pathologist cannot appreciate 3D organization from 2D observations of tissue sections 1-8 μm in thickness. Therefore, how might histological disease assessment improve if entire, intact CBs could be imaged in both brightfield and 3D? CBs are mechanically delicate; therefore, a simple device was made to cut intact, simulated CBs (L = 1-2 cm, D = 0.2-0.8 mm) from porcine pancreas. After CBs were laid flat in a chamber, z-stack images at 20x and 40x were acquired through the sample with and without the application of an optical clearing agent (FocusClear®). The intensity of transmitted light increased by 5-15x, and islet structures unique to the pancreas were clearly visualized 250-300 μm beneath the tissue surface. CBs were then placed in index-matching square capillary tubes filled with FocusClear®, a standard optical clearing agent. Brightfield z-stack images were then acquired to present 3D visualizations of the CB to the pathologist.

  13. 3-D Vector Flow Imaging

    Holbek, Simon

    studies and in vivo. Phantom measurements are compared with their corresponding reference values, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes, that a high precision......, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges......For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...

  14. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  15. Towards a Noninvasive Intracranial Tumor Irradiation Using 3D Optical Imaging and Multimodal Data Registration

    Posada, R.; Daul, Ch.; Wolf, D.; Aletti, P.

    2007-01-01

    Conformal radiotherapy (CRT) results in high-precision tumor volume irradiation. In fractionated radiotherapy (FRT), lesions are irradiated in several sessions so that healthy neighbouring tissues are better preserved than when treatment is carried out in one fraction. In the case of intracranial tumors, classical methods of patient positioning in the irradiation machine coordinate system are invasive and only allow for CRT in one irradiation session. This contribution presents a noninvasive positioning method representing a first step towards the combination of CRT and FRT. The 3D data used for the positioning are point clouds spread over the patient's head (CT data usually acquired during treatment) and points distributed over the patient's face, which are acquired with a structured-light sensor fixed in the therapy room. The geometrical transformation linking the coordinate systems of the diagnosis device (CT modality) and the 3D sensor of the therapy room (visible-light modality) is obtained by registering the surfaces represented by the two 3D point sets. The geometrical relationship between the coordinate systems of the 3D sensor and the irradiation machine is given by a calibration of the sensor position in the therapy room. The global transformation, computed from the two previous transformations, is sufficient to predict the tumor position in the irradiation machine coordinate system from only the corresponding position in the CT coordinate system. Results obtained for a phantom show that the mean positioning error of tumors at the treatment machine isocentre is 0.4 mm. Tests performed with human data proved that the registration algorithm is accurate (0.1 mm mean distance between homologous points) and robust, even under facial expression changes. PMID:18364992
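
    The composition of the two transformations described above (CT-to-sensor registration, then sensor-to-machine calibration) can be sketched with homogeneous matrices. The rotations and translations here are invented placeholders, not real calibration data:

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative placeholder transforms:
# CT coordinates -> 3D-sensor coordinates (from the surface registration)
T_sensor_ct = rigid(np.eye(3), [10.0, 0.0, 0.0])
# sensor coordinates -> irradiation-machine coordinates (from the calibration)
T_machine_sensor = rigid(np.eye(3), [0.0, -5.0, 2.0])

# Global transform: predicts the tumor position in machine coordinates
T_machine_ct = T_machine_sensor @ T_sensor_ct

tumor_ct = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous CT coordinates
tumor_machine = T_machine_ct @ tumor_ct
# -> [11., -3., 5., 1.]
```

    Only the CT-space tumor position is needed at treatment time; both transforms are estimated beforehand, which is what makes the positioning noninvasive.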

  16. Intraoperative handheld probe for 3D imaging of pediatric benign vocal fold lesions using optical coherence tomography (Conference Presentation)

    Benboujja, Fouzi; Garcia, Jordan; Beaudette, Kathy; Strupler, Mathias; Hartnick, Christopher J.; Boudoux, Caroline

    2016-02-01

    Excessive and repetitive force applied to vocal fold tissue can induce benign vocal fold lesions. Children affected suffer from chronic hoarseness. In this instance, the vibratory ability of the folds, a complex layered microanatomy, becomes impaired. Histological findings have shown that lesions produce a remodeling of sub-epithelial vocal fold layers. However, our understanding of lesion features and development is still limited. Indeed, conventional imaging techniques do not allow a non-invasive assessment of the sub-epithelial integrity of the vocal fold. Furthermore, it remains challenging to differentiate these sub-epithelial lesions (such as bilateral nodules, polyps and cysts) from a clinical perspective, as their outer surfaces are relatively similar. As the treatment strategy differs for each lesion type, it is critical to efficiently differentiate the sub-epithelial alterations involved in benign lesions. In this study, we developed an optical coherence tomography (OCT) based handheld probe suitable for pediatric laryngological imaging. The probe allows for rapid three-dimensional imaging of vocal fold lesions. The system is adapted to allow for high-resolution intra-operative imaging. We imaged 20 patients undergoing direct laryngoscopy, during which we looked at different benign pediatric pathologies such as bilateral nodules, cysts and laryngeal papillomatosis, and compared them to healthy tissue. We qualitatively and quantitatively characterized laryngeal pathologies and demonstrated the added advantage of using 3D OCT imaging for lesion discrimination and margin assessment. OCT evaluation of the integrity of the vocal cord could lead to better pediatric management of laryngeal diseases.

  17. High-resolution 3-D imaging of surface damage sites in fused silica with Optical Coherence Tomography

    Guss, G; Bass, I; Hackel, R; Mailhiot, C; Demos, S G

    2007-10-30

    In this work, we present the first successful demonstration of a non-contact technique to precisely measure the 3D spatial characteristics of laser-induced surface damage sites in fused silica for large-aperture laser systems by employing Optical Coherence Tomography (OCT). What makes OCT particularly interesting for the characterization of optical materials for large-aperture laser systems is that its axial resolution can be maintained at working distances greater than 5 cm, whether viewing through air or through the bulk of thick optics. Specifically, when mitigating surface damage sites against further growth by CO₂ laser evaporation of the damage, it is important to know the depth of subsurface cracks below the damage site. These cracks are typically obscured by the damage rubble when imaged from above the surface. The results to date clearly demonstrate that OCT is a unique and valuable tool for characterizing damage sites before and after the mitigation process. We also demonstrated its utility as an in-situ diagnostic to guide and optimize our process when mitigating surface damage sites on large, high-value optics.

  18. Comparison of 3D double inversion recovery and 2D STIR FLAIR MR sequences for the imaging of optic neuritis: pilot study

    Hodel, Jerome; Bocher, Anne-Laure; Pruvo, Jean-Pierre; Leclerc, Xavier [Hopital Roger Salengro, Department of Neuroradiology, Lille (France); Outteryck, Olivier; Zephir, Helene; Vermersch, Patrick [Hopital Roger Salengro, Department of Neurology, Lille (France); Lambert, Oriane [Fondation Ophtalmologique Rothschild, Department of Neuroradiology, Paris (France); Benadjaoud, Mohamed Amine [Radiation Epidemiology Team, Inserm, CESP Centre for Research in Epidemiology and Population Health, U1018, Villejuif (France); Chechin, David [Philips Medical Systems, Suresnes (France)

    2014-12-15

    We compared the three-dimensional (3D) double inversion recovery (DIR) magnetic resonance imaging (MRI) sequence with the coronal two-dimensional (2D) short tau inversion recovery (STIR) fluid-attenuated inversion recovery (FLAIR) sequence for the detection of optic nerve signal abnormality in patients with optic neuritis (ON). The study group consisted of 31 patients with ON (44 pathological nerves) confirmed by visual evoked potentials, used as the reference. MRI examinations included 2D coronal STIR FLAIR and 3D DIR with 3-mm coronal reformats to match the STIR FLAIR. Image artefacts were graded for each portion of the optic nerves. Each set of MR images (2D STIR FLAIR, DIR reformats and multiplanar 3D DIR) was examined independently and separately for the detection of signal abnormality. The cisternal portion of the optic nerves was better delineated with DIR (p < 0.001), while artefacts impaired analysis in four patients with STIR FLAIR. Inter-observer agreement was significantly improved (p < 0.001) with 3D DIR (κ = 0.96) compared with STIR FLAIR images (κ = 0.60). Multiplanar DIR images reached the best performance for the diagnosis of ON (95 % sensitivity and 94 % specificity). Our study showed a high sensitivity and specificity of 3D DIR compared with STIR FLAIR for the detection of ON. These findings suggest that the 3D DIR sequence may be more useful in patients suspected of ON. (orig.)

  19. Short term reproducibility of a high contrast 3-D isotropic optic nerve imaging sequence in healthy controls

    Harrigan, Robert L.; Smith, Alex K.; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2016-03-01

    The optic nerve (ON) plays a crucial role in human vision, transporting all visual information from the retina to the brain for higher-order processing. Many diseases affect the ON structure, such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher-level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) sheath surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high-contrast, high-resolution, 3-D acquired isotropic imaging sequence optimized for ON imaging. We acquired scan-rescan data using the optimized sequence and a current standard-of-care protocol for 10 subjects. We show that this sequence has a superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have lower short-term variability than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.

  20. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    Ján VACHÁLEK

    2015-11-01

    Full Text Available The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis was placed on the segmentation of the depth image and noise filtration. MS Kinect was used to evaluate the potential of object location taking advantage of the depth image. This tool, being an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object location in its field of vision. In our case, balls with fixed diameter were used as objects for 3D location.

  1. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    Ján VACHÁLEK; Marian GÉCI; Oliver ROVNÝ; Tomáš VOLENSKÝ

    2015-01-01

    The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis was placed on the segmentation of the depth image and noise filtration. MS Kinect was used to evaluate the potential of object location taking advantage of the depth image. This tool, being an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object location in its field of vision. In our ca...

  2. High resolution 3-D wavelength diversity imaging

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data is fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. Equipment configuration for the implementation of the AMTDR technique is also given together with a measurement configuration for the implementation of wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. Extension of the theory presented to 3-D tomographic imaging of passive noise emitting objects by spectrally selective far field cross-correlation measurements is also given. Finally several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvement in performance and retrieved image quality.

  3. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes to develop a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and...

  4. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and dramatically...

  5. Light field display and 3D image reconstruction

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become quite popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
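
    The refocusing operation described above is commonly implemented as "shift and add" over the sub-aperture views of the 4D light field data. The sketch below illustrates that idea in the real (spatial) domain with integer shifts; it is a generic textbook version, not the authors' actual algorithm:

```python
import numpy as np

def refocus(lf, shift):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]:
    each sub-aperture view is translated in proportion to its (u, v)
    position relative to the array centre, then all views are averaged.
    Varying `shift` moves the synthetic focal plane."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - U // 2)))
            dx = int(round(shift * (v - V // 2)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

    With `shift = 0` this reduces to a plain average of the views (focus at the reference plane); nonzero shifts section the encoded 3D data at other depths.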

  6. Multi-resolution optical 3D sensor

    Kühmstedt, Peter; Heinze, Matthias; Schmidt, Ingo; Breitbarth, Martin; Notni, Gunther

    2007-06-01

    A new multi-resolution, self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri FLEX multi", will be presented. It can be utilised to acquire the all-around shape of small to medium objects. The basic measurement principle is the phasogrammetric approach /1,2,3/ in combination with the method of virtual landmarks for the merging of the 3D single views. The system consists of a minimum of two fringe projection sensors. The sensors are mounted on a rotation stage, illuminating the object from different directions. The measurement fields of the sensors can be chosen differently; here, as an example, 40 mm and 180 mm in diameter. In the measurement, the object can be scanned at the same time at these two resolutions. Using the method of virtual landmarks, both point clouds are calculated within the same world coordinate system, resulting in a common 3D point cloud. The final point cloud includes the overview of the object with low point density (wide field) and a region with high point density (focussed view) at the same time. The advantage of the new method is the possibility of measuring at different resolutions on the same object region without any mechanical changes in the system or data post-processing. Typical parameters of the system are: a measurement time of 2 min for 12 images and a measurement accuracy of 3 μm to 10 μm. The flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  7. Research of 3D display using anamorphic optics

    Matsumoto, Kenji; Honda, Toshio

    1997-05-01

    This paper describes an auto-stereoscopic display which can reconstruct a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and giving motion parallax horizontally. It is difficult to increase the number of parallaxes to give motion parallax to the 3-D image without reducing the resolution, because the resolution of the display device is insufficient. The magnification and the image formation position can be selected independently in the horizontal and vertical directions by projecting between the display device and the 3-D image with anamorphic optics. Anamorphic optics is an optical system with different magnifications in the horizontal and vertical directions. It consists of a combination of cylindrical lenses with different focal lengths. By using these optics, even if we use a dynamic display such as a liquid crystal display (LCD), it is possible to display a realistic 3-D image having motion parallax. Motion parallax is obtained by making the width of a single parallax at the viewing position about the same size as the pupil diameter of the viewer. In addition, because the focal depth of the 3-D image is deep in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.

  8. SU-E-T-296: Dosimetric Analysis of Small Animal Image-Guided Irradiator Using High Resolution Optical CT Imaging of 3D Dosimeters

    Na, Y; Qian, X; Wuu, C [Columbia University, New York, NY (United States); Adamovics, J [John Adamovics, Skillman, NJ (United States)

    2015-06-15

    Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine the dosimetric characteristics of a small animal image-guided irradiator and compared with EBT2 films. Cylindrical PRESAGE dosimeters with 7 cm height and 6 cm diameter were placed along the central axis of the beam. The films were positioned between 6×6 cm² cubed plastic water phantoms perpendicular to the beam direction at multiple depths. PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220 kVp and 13 mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single-laser-beam optical CT scanner. The transverse images were reconstructed with a high-resolution 0.1 mm pixel. A commercial Epson Expression 10000XL flatbed scanner was used for readout of the irradiated EBT2 films at a 0.4 mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreements between the irradiated PRESAGE dosimeters PA1, PA2, PB1, PB2 and the EBT2 films were 1.7, 2.3, 1.9, and 1.9% at multiple depths of 1, 5, 10, 15, 20, 30, 40 and 50 mm, respectively. The FWHM measurements for each PRESAGE dosimeter and film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30 mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%–80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were found to be 0.97, 0.91, 0.79, 0.88, and 0.37 mm, respectively. Conclusion: The dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with measurements of PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting small animal irradiation can be

  9. FIT3D: Fitting optical spectra

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-09-01

    FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.

  10. 3D micro-particle image modeling and its application in measurement resolution investigation for visual sensing based axial localization in an optical microscope

    Wang, Yuliang; Li, Xiaolai; Bi, Shusheng; Zhu, Xiaofeng; Liu, Jinhua

    2017-01-01

    Visual sensing based three-dimensional (3D) particle localization in an optical microscope is important for both fundamental studies and practical applications. Compared with lateral (X and Y) localization, it is more challenging to achieve a high-resolution measurement of axial particle location. In this study, we investigate the effect of different factors on axial measurement resolution through an analytical approach. Analytical models were developed to simulate 3D particle imaging in an optical microscope. A radius vector projection method was applied to convert the simulated particle images into radius vectors. With the obtained radius vectors, an axial changing rate term was proposed to evaluate the measurement resolution of axial particle localization. Experiments were also conducted for comparison with the simulation results. Moreover, with the proposed method, the effects of particle size on measurement resolution were discussed. The results show that the method provides an efficient approach to investigating the resolution of axial particle localization.
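A radius-vector conversion of this kind can be sketched as an azimuthal average of particle image intensity over distance from the particle center; for a defocused particle, the ring radius then appears as the profile's peak. This is a simplified, hypothetical reading of the method, with synthetic data standing in for simulated particle images:

```python
import numpy as np

def radial_profile(image, center=None):
    """Collapse a 2D particle image into a 1D radius vector by
    azimuthally averaging intensity in integer radius bins."""
    h, w = image.shape
    if center is None:
        center = ((h - 1) / 2.0, (w - 1) / 2.0)
    y, x = np.indices(image.shape)
    r = np.sqrt((y - center[0]) ** 2 + (x - center[1]) ** 2)
    r_int = r.astype(int)
    # Sum intensities per radius bin, then normalize by bin population
    total = np.bincount(r_int.ravel(), weights=image.ravel())
    counts = np.bincount(r_int.ravel())
    return total / np.maximum(counts, 1)

# Synthetic defocused particle image: a Gaussian ring at radius 10
yy, xx = np.indices((65, 65))
rr = np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)
img = np.exp(-(((rr - 10) / 2.0) ** 2))
profile = radial_profile(img)
print(int(np.argmax(profile)))
```

The axial changing rate mentioned in the abstract would then be derived from how this profile shifts as the particle moves through focus.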

  11. 3D Backscatter Imaging System

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  12. 3D nanopillar optical antenna photodetectors.

    Senanayake, Pradeep; Hung, Chung-Hong; Shapiro, Joshua; Scofield, Adam; Lin, Andrew; Williams, Benjamin S; Huffaker, Diana L

    2012-11-05

    We demonstrate 3D surface plasmon photoresponse in nanopillar arrays, resulting in enhanced responsivity due to both Localized Surface Plasmon Resonances (LSPRs) and Surface Plasmon Polariton Bloch Waves (SPP-BWs). The LSPRs are excited by a partial gold shell coating the nanopillar, which acts as a 3D Nanopillar Optical Antenna (NOA) focusing light into the nanopillar. Angular photoresponse measurements show that SPP-BWs can be spectrally coincident with LSPRs, resulting in a 2× enhancement in responsivity at 1180 nm. Full-wave Finite Difference Time Domain (FDTD) simulations substantiate both the spatial and spectral coupling of the SPP-BW/LSPR for enhanced absorption and the nature of the LSPR. Geometrical control of the 3D NOA and the self-aligned metal hole lattice allows the hybridization of both localized and propagating surface plasmon modes for enhanced absorption. Hybridized plasmonic modes open up new avenues in optical antenna design for nanoscale photodetectors.

  13. Optical coherence tomography for ultrahigh-resolution 3D imaging of cell development and real-time guiding for photodynamic therapy

    Wang, Tianshi; Zhen, Jinggao; Wang, Bo; Xue, Ping

    2009-11-01

    Optical coherence tomography is an emerging technique for cross-sectional imaging with high spatial resolution on the micrometer scale. It enables in vivo and non-invasive imaging with no need to contact the sample and is widely used in biological and clinical applications. In this paper optical coherence tomography is demonstrated for both biological and clinical applications. For the biological application, a white-light interference microscope is developed for ultrahigh-resolution full-field optical coherence tomography (full-field OCT) to implement 3D imaging of biological tissue. A spatial resolution of 0.9 μm × 1.1 μm (transverse × axial) is achieved, and a system sensitivity of 85 dB is obtained at an acquisition time of 5 s per image. The development of a mouse embryo is studied layer by layer with our ultrahigh-resolution full-field OCT. For the clinical application, a handheld optical coherence tomography system is designed for real-time and in situ imaging of port wine stains (PWS) patients and for surgical guidance during photodynamic therapy (PDT) treatment. A light source with a center wavelength of 1310 nm, a -3 dB wavelength range of 90 nm and an optical power of 9 mW is utilized. A lateral resolution of 8 μm and an axial resolution of 7 μm at a rate of 2 frames per second and with 102 dB sensitivity are achieved in biological tissue. It is shown that OCT images distinguish very well between normal and PWS tissue in the clinic and can serve as a valuable diagnostic tool for PDT treatment.

  14. Dynamic contrast-enhanced 3D photoacoustic imaging

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.

  15. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    This paper introduces an experiment in 3D reconstruction of NMR images scanned by a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method built on the Vision Assistant program, which is a part of LabVIEW, was chosen.

  16. 3D imaging in forensic odontology.

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring, and subsequently forensically analysing, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion, so such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least intra-operator error. A second set tested and demonstrated which method of image capture creates the least inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  17. 3D laser imaging for concealed object identification

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  18. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed.

  19. Research on the aero-thermal effects by 3D analysis model of the optical window of the infrared imaging guidance

    Xu, Bo; Li, Lin; Zhu, Ying

    2014-11-01

    Research on hypersonic vehicles has been a hotspot in the field of aerospace because of the pursuit of ever higher speeds. Infrared imaging guidance plays a very important role in modern warfare. When an infrared (IR) imaging guided missile is flying in the air at high speed, its optical dome suffers from serious aero-optic effects because of air flow. The turbulence around the dome and the thermal effects of the optical window disturb the wavefront from the target. Therefore, detected images are biased, dithered and blurred, and the capabilities of the seeker for detecting, tracking and recognizing are weakened. In this paper, methods for thermal and structural analysis based on heat transfer and elastic mechanics are introduced. By studying the aero-thermal effects and aero-thermal radiation effects of the optical window, a 3D analysis model of the optical window is established using the finite element method. Direct coupling analysis is employed as the solving strategy. The variation regularity of the temperature field is obtained. For light with different incident angles, the influence on ray propagation caused by window deformation is analyzed with theoretical calculation and an integrated optical/thermal/structural analysis method, respectively.

  20. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  1. Structured light field 3D imaging.

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; then a structured light field is detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. The multidirectional depth estimation can achieve high dynamic range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of highly and lowly reflective surfaces.

  2. Spatially-resolved in-situ quantification of biofouling using optical coherence tomography (OCT) and 3D image analysis in a spacer filled channel

    Fortunato, Luca

    2016-11-21

    The use of optical coherence tomography (OCT) to investigate biomass in membrane systems has increased with time. OCT is able to characterize the biomass in-situ and non-destructively. In this study, a novel approach to processing three-dimensional (3D) OCT scans is proposed. The approach yields spatially-resolved, detailed structural biomass information. The 3D biomass reconstruction enables analysis of the biomass only, obtained by subtracting the time-zero scan from all images. A 3D time-series analysis of biomass development in a spacer-filled channel under conditions (cross-flow velocity) representative of a spiral-wound membrane element was performed. The flow cell was operated for five days with monitoring of ultrafiltration membrane performance: feed channel pressure drop and permeate flux. Biomass development in the flow cell was detected by OCT before a performance decline was observed. The feed channel pressure drop increased continuously with increasing biomass volume, while flux decline was mainly affected in the initial phase of biomass accumulation. The novel OCT imaging approach enabled the assessment of spatial biomass distribution in the flow cell, discriminating the total biomass volume between the membrane, feed spacer and glass window. Biomass accumulation was stronger on the feed spacer during the early stage of biofouling, impacting the feed channel pressure drop more strongly than the permeate flux.

  3. Automatic respiration tracking for radiotherapy using optical 3D camera

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the markers or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors).
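The PCA decomposition described above can be sketched as follows, with synthetic data standing in for O3D surface frames (the frame count, point count, and breathing frequency are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_points = 200, 500

# Synthetic sequence: one breathing-like temporal mode times a fixed
# spatial motion pattern, plus measurement noise.
t = np.linspace(0, 20, n_frames)
breathing = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths/min
mode = rng.normal(size=n_points)                    # spatial motion pattern
frames = np.outer(breathing, mode) + 0.1 * rng.normal(size=(n_frames, n_points))

# PCA via SVD: each row of X is one flattened surface frame, centered.
X = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)    # rows of Vt = eigen-vectors
explained = S**2 / np.sum(S**2)                     # variance per motion pattern
respiration_trace = X @ Vt[0]                       # projection onto first mode
print(round(float(explained[0]), 2))
```

The first eigen-vector captures the dominant (respiration-like) motion pattern, and projecting each frame onto it yields a 1D respiration trace.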

  4. Manufacturing: 3D printed micro-optics

    Juodkazis, Saulius

    2016-08-01

    Uncompromised performance of micro-optical compound lenses has been achieved by high-fidelity shape definition during two-photon absorption microfabrication. The lenses have been made directly onto image sensors and even onto the tip of an optic fibre.

  5. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    This paper presents a new approach capable of 3D image segmentation and object surface reconstruction. The main advantages of the method are: large capture range; quick segmentation of a 3D scene/image into regions; multiple 3D object reconstruction. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions each containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors encounter the object's surface. On the basis of the theoretical model, a forward difference algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and calculation complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
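A minimal forward-difference step for a heat equation on a 3D grid, in the spirit of the algorithm described above. Note this is the plain linear heat equation with periodic boundaries, not the authors' geometric flow with centripetal force and penalty function; grid size and time step are illustrative:

```python
import numpy as np

def heat_step(u, dt=0.1, dx=1.0):
    """One explicit Euler step of du/dt = laplacian(u) using the
    6-point 3D Laplacian. Stability requires dt <= dx**2 / 6."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) +
           np.roll(u, 1, 2) + np.roll(u, -1, 2) - 6 * u) / dx**2
    return u + dt * lap

u = np.zeros((16, 16, 16))
u[8, 8, 8] = 1.0                 # point source
for _ in range(10):
    u = heat_step(u)             # diffusion spreads the source; mass conserved
print(round(float(u.sum()), 6))
```

The stability bound dt ≤ dx²/6 is the 3D analogue of the convergence condition the abstract refers to; with dt = 0.1 and dx = 1 the scheme stays stable and the source spreads while total mass is conserved.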

  6. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λh1h2h3 = λ111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ111, the νλ = λh1h2h3/λ111 factor of phase velocity becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between 3D grating and light, space and time. In the reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  7. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultrasound imaging result in expensive systems, which limits the more widespread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce…

  8. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
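The layer-wise porosity evaluation can be sketched on a synthetic binary reconstruction, where pore voxels are 1 and polymer voxels are 0; the asymmetric porosity gradient and the volume dimensions here are assumed for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
depth, h, w = 50, 64, 64

# Synthetic asymmetric membrane: porosity decreases from the support
# side (0.6) toward the dense skin layer (0.1).
p = np.linspace(0.6, 0.1, depth)
volume = (rng.random((depth, h, w)) < p[:, None, None]).astype(np.uint8)

# Porosity of each layer parallel to the surface = pore-voxel fraction
porosity_profile = volume.mean(axis=(1, 2))
print(round(float(porosity_profile[0]), 2),
      round(float(porosity_profile[-1]), 2))
```

Averaging over the other axes instead gives the porosity of slices orthogonal to the surface, mirroring the two slicing directions mentioned in the abstract.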

  9. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced technologies in scanning and 3-D imaging in current ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
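Once cup and disc boundaries are segmented, the two screening parameters named above reduce to simple ratios; a sketch assuming both boundaries are approximated by circles with measured diameters (diameter values are illustrative):

```python
import math

def cdr(cup_diameter, disc_diameter):
    """Cup-to-disc diameter ratio."""
    return cup_diameter / disc_diameter

def car(cup_diameter, disc_diameter):
    """Cup-to-disc area ratio, using areas of the fitted circles."""
    cup_area = math.pi * (cup_diameter / 2) ** 2
    disc_area = math.pi * (disc_diameter / 2) ** 2
    return cup_area / disc_area

# For circular fits, CAR equals CDR squared
print(round(cdr(0.9, 1.8), 3), round(car(0.9, 1.8), 3))  # → 0.5 0.25
```

Real cup and disc boundaries are not circular, which is why the paper computes CAR from segmented areas rather than from diameters alone.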

  10. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to precision and time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas is treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems; this subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem…

  11. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    2001-01-01

    An airborne 3D imaging system which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner has been developed successfully. The spectral scanner and SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D imaging is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with appropriate software the data can be processed into digital surface models (DSM) and geo-referenced images in quasi-real time; therefore, the efficiency of 3D imaging is 10–100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing DSM and mosaicking strips. The principle of 3D imaging is first introduced in this paper, and then we focus on the fast processing technique and algorithm. Flight tests and processed results show that the processing technique is feasible and can meet the requirements of quasi-real-time applications.

  12. Imaging mesenchymal stem cells containing single wall nanotube nanoprobes in a 3D scaffold using photo-thermal optical coherence tomography

    Connolly, Emma; Subhash, Hrebesh M.; Leahy, Martin; Rooney, Niall; Barry, Frank; Murphy, Mary; Barron, Valerie

    2014-02-01

    Despite the fact that a range of clinically viable imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), ultrasound and bioluminescence imaging are being optimised to track cells in vivo, many of these techniques are subject to limitations such as the levels of contrast agent required, toxic effects of radiotracers, photo attenuation of tissue and backscatter. With the advent of nanotechnology, nanoprobes are leading the charge to overcome these limitations. In particular, single wall nanotubes (SWNT) have been shown to be taken up by cells and as such are effective nanoprobes for cell imaging. Consequently, the main aim of this research is to employ mesenchymal stem cells (MSC) containing SWNT nanoprobes to image cell distribution in a 3D scaffold for cartilage repair. To this end, MSC were cultured in the presence of 32 μg/ml SWNT in cell culture medium (αMEM, 10% FBS, 1% penicillin/streptomycin) for 24 hours. Upon confirmation of cell viability, the MSC containing SWNT were encapsulated in hyaluronic acid gels and loaded on polylactic acid polycaprolactone scaffolds. After 28 days in complete chondrogenic medium, with medium changes every 2 days, chondrogenesis was confirmed by the presence of glycosaminoglycan. Moreover, using photothermal optical coherence tomography (PT-OCT), the cells were seen to be distributed through the scaffold with high resolution. In summary, these data reveal that MSC containing SWNT nanoprobes in combination with PT-OCT offer an exciting opportunity for stem cell tracking in vitro for assessing seeded scaffolds and in vivo for determining biodistribution.

  13. 3D Human cartilage surface characterization by optical coherence tomography

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations, with the exception of severe end-stage degeneration, and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D
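The height parameter Sa named above (arithmetic mean height, as standardized in ISO 25178) can be sketched from a surface height map after mean-plane removal; the synthetic surface below is illustrative, and real pipelines apply the ISO form-removal and filtering steps first:

```python
import numpy as np

def surface_sa(z):
    """Arithmetic mean height Sa: mean absolute deviation of the
    height map from its reference (here, mean) plane."""
    z = z - z.mean()
    return float(np.abs(z).mean())

# Synthetic surface patch: sinusoidal waviness plus roughness noise
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 256)
zz = 5.0 * np.sin(2 * np.pi * 4 * x)[None, :] + rng.normal(0, 0.5, (256, 256))
print(round(surface_sa(zz), 1))
```

For a pure sinusoid of amplitude A, Sa approaches 2A/π, so the value here lands near 3.2; degenerated cartilage surfaces would show elevated Sa relative to intact ones.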

  14. Volumetric (3D) compressive sensing spectral domain optical coherence tomography.

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-11-01

In this work, we propose a novel three-dimensional compressive sensing (CS) approach for spectral-domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of acquiring a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the number of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving image quality.
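The dimension-by-dimension recovery described above rests on standard sparse reconstruction. As a hedged, minimal 1D sketch (not the paper's three-step algorithm): a sparse signal is recovered from roughly 20% random Gaussian measurements by iterative soft-thresholding (ISTA). The signal size, sparsity, regularization weight and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse test signal: length 256 with 8 non-zero entries
n, k = 256, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Under-sample: keep only ~20% as many measurements as unknowns,
# using a random Gaussian sensing matrix
m = n // 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA: iterative soft-thresholding for min 0.5*||A z - y||^2 + lam*||z||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(1500):
    z = z - step * (A.T @ (A @ z - y))                      # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

err = np.linalg.norm(z - x) / np.linalg.norm(x)
print(f"sampling rate {m / n:.0%}, relative recovery error {err:.3f}")
```

With these dimensions the ℓ1 minimizer typically coincides with the sparse signal, so the relative error comes out small despite the 5x under-sampling.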

  15. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with the mobile device and can thereafter be calibrated directly on that device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion, and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad hoc on-screen measurements.

  16. Model-based optical metrology and visualization of 3-D complex objects

    LIU Xiao-li; LI A-meng; ZHAO Xiao-bo; GAO Peng-dong; TIAN Jin-dong; PENG Xiang

    2007-01-01

This letter addresses several key issues in model-based optical metrology, including three-dimensional (3D) sensing, calibration, registration and fusion of range images, geometric representation, and visualization of the reconstructed 3D model, taking into account the shape measurement of 3D complex structures; some experimental results are presented.

  17. The Atlas-3D project - IX. The merger origin of a fast and a slow rotating Early-Type Galaxy revealed with deep optical imaging: first results

    Duc, Pierre-Alain; Serra, Paolo; Michel-Dansac, Leo; Ferriere, Etienne; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frederic; Bureau, Martin; Cappellari, Michele; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; McDermid, Richard M; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Scott, Nicholas; Weijmans, Anne-Marie; Young, Lisa M

    2011-01-01

    The mass assembly of galaxies leaves imprints in their outskirts, such as shells and tidal tails. The frequency and properties of such fine structures depend on the main acting mechanisms - secular evolution, minor or major mergers - and on the age of the last substantial accretion event. We use this to constrain the mass assembly history of two apparently relaxed nearby Early-Type Galaxies (ETGs) selected from the Atlas-3D sample, NGC 680 and NGC 5557. Our ultra deep optical images obtained with MegaCam on the Canada-France-Hawaii Telescope reach 29 mag/arcsec^2 in the g-band. They reveal very low-surface brightness (LSB) filamentary structures around these ellipticals. Among them, a gigantic 160 kpc long tail East of NGC 5557 hosts gas-rich star-forming objects. NGC 680 exhibits two major diffuse plumes apparently connected to extended HI tails, as well as a series of arcs and shells. Comparing the outer stellar and gaseous morphology of the two ellipticals with that predicted from models of colliding galax...

  18. 3D Image Reconstruction from Compton camera data

    Kuchment, Peter

    2016-01-01

In this paper, we address analytically and numerically the inversion of the integral transform (the cone, or Compton, transform) that maps a function on R^3 to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  19. Optical characterization of different types of 3D displays

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

All 3D displays rely on the same intrinsic method to induce depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions already available on the market are active-glasses, passive-glasses and auto-stereoscopic 3D displays. The three types of displays are based on different physical principles (polarization, time selection or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in all three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  20. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models, followed by forward modelling of these objects. However, this form of object-based approach has been done without a standardized methodology for extracting the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  1. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. Above all, 3D monitors should provide observers with different perspectives of a 3D scene as they simply vary their head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  2. Test target for characterizing 3D resolution of optical coherence tomography

    Hu, Zhixiong; Hao, Bingtao; Liu, Wenli; Hong, Baoyu; Li, Jiao

    2014-12-01

    Optical coherence tomography (OCT) is a non-invasive 3D imaging technology which has been applied or investigated in many diagnostic fields including ophthalmology, dermatology, dentistry, cardiovasology, endoscopy, brain imaging and so on. Optical resolution is an important characteristic that can describe the quality and utility of an image acquiring system. We employ 3D printing technology to design and fabricate a test target for characterizing 3D resolution of optical coherence tomography. The test target which mimics USAF 1951 test chart was produced with photopolymer. By measuring the 3D test target, axial resolution as well as lateral resolution of a spectral domain OCT system was evaluated. For comparison, conventional microscope and surface profiler were employed to characterize the 3D test targets. The results demonstrate that the 3D resolution test targets have the potential of qualitatively and quantitatively validating the performance of OCT systems.
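For reference, a USAF 1951 pattern (which the target above mimics) encodes spatial frequency by group and element through the standard relation f = 2^(group + (element - 1)/6) line pairs per mm; this formula is generic to the chart, not specific to the paper. A minimal helper:

```python
def usaf1951_lp_per_mm(group: int, element: int) -> float:
    """Spatial frequency (line pairs per mm) of a USAF 1951 target element."""
    return 2.0 ** (group + (element - 1) / 6.0)

# Group 0, element 1 is 1 lp/mm by definition; group 7, element 6 is
# the finest pattern on many standard charts (~228 lp/mm)
print(usaf1951_lp_per_mm(0, 1))
print(round(usaf1951_lp_per_mm(7, 6), 1))
```

The smallest resolvable element then converts directly into the lateral resolution of the system under test (resolution ≈ 1 / (2 f) per line width).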

  3. 3D/2D Registration of medical images

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  4. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to realize suitable real-time implementations for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel; this enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
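The range-from-IF-frequency step can be made concrete. In a linear-chirp (FMCW) system, the beat frequency maps to range as R = c * f_IF * T / (2B), where B is the sweep bandwidth and T the sweep duration. The numbers below are illustrative assumptions, not parameters from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_if(f_if_hz: float, sweep_bw_hz: float, sweep_time_s: float) -> float:
    """Target range from the beat (IF) frequency of a linear-chirp radar.

    Round-trip delay tau = f_IF / slope, with chirp slope = B / T;
    range = c * tau / 2.
    """
    slope = sweep_bw_hz / sweep_time_s
    return C * f_if_hz / (2.0 * slope)

# Illustrative numbers: a 3 GHz sweep over 1 ms puts a 200 kHz beat
# at roughly 10 m
print(range_from_if(200e3, 3e9, 1e-3))
```

Reading the IF frequency independently at every pixel of the detector array is what turns the 2D intensity image into a 3D range image.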

  5. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small-animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  6. Super deep 3D images from a 3D omnifocus video camera.

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  7. A system for finding a 3D target without a 3D image

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
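The triangulation step described here can be sketched with the standard direct linear transform (DLT): each view's known 3x4 projection matrix and located 2D target contribute two linear constraints, and the best-fit 3D position is the smallest-singular-value solution. The cameras and target below are synthetic assumptions, not the paper's calibration data:

```python
import numpy as np

def triangulate(Ps, uvs):
    """Least-squares triangulation of one 3D target from N calibrated views (DLT).

    Ps  : list of 3x4 projection matrices (tracking space -> image)
    uvs : list of (u, v) observations of the same target, one per view
    """
    A = []
    for P, (u, v) in zip(Ps, uvs):
        # Each view contributes two linear constraints on the homogeneous point X
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    # Best-fit homogeneous X = right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check with two hypothetical camera poses observing (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated view
X_true = np.array([1.0, 2.0, 10.0])
uvs = [project(P1, X_true), project(P2, X_true)]
print(triangulate([P1, P2], uvs))  # close to [1, 2, 10]
```

With noisy manual localizations the same code returns the least-squares compromise position, which is exactly what the rendering onto the endoscope view needs.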

  8. Optical experiments on 3D photonic crystals

    Koenderink, F.; Vos, W.

    2003-01-01

    Photonic crystals are optical materials that have an intricate structure with length scales of the order of the wavelength of light. The flow of photons is controlled in a manner analogous to how electrons propagate through semiconductor crystals, i.e., by Bragg diffraction and the formation of band

  9. 3D Image Synthesis for B—Reps Objects

Huang Zhengdong; Peng Qunsheng; et al.

    1991-01-01

This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented based on the AFD-matrix concept for converting an object in 3D space to a 3D image in 3D discrete space.

  10. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    T. Fuse

    2015-05-01

3D models have come into wide use with the spread of freely available software. At the same time, enormous numbers of images can easily be acquired, and such images have recently been used for creating 3D models. However, creating 3D models from a huge number of images takes a lot of time and effort, so efficiency in 3D measurement is required; within this efficiency strategy, measurement accuracy is also required. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, whereby the image selection problem is regarded as a combinatorial optimization problem and the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and near-duplicate images are extracted and removed. Experiments confirm the significance of the proposed method and imply its potential for efficient and accurate 3D measurement.

  11. Glasses-free 3D viewing systems for medical imaging

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  12. Joint calibration of 3D resist image and CDSEM

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  13. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
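The median-based fusion of the retrieved disparity fields is straightforward to sketch; the field sizes, noise level and outlier below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# k candidate disparity fields from retrieved stereopairs (synthetic):
# each is a noisy copy of the same underlying disparity ramp
h, w, k = 4, 6, 5
true_disp = np.tile(np.arange(w, dtype=float), (h, 1))
candidates = true_disp + rng.normal(0.0, 0.5, size=(k, h, w))
candidates[0] += 10.0  # one badly mismatched stereopair (outlier field)

# The pixel-wise median suppresses the outlier field
fused = np.median(candidates, axis=0)
print(np.abs(fused - true_disp).max())
```

The design choice matters because the retrieved stereopairs differ in content, noise and distortion: the per-pixel median tolerates a minority of badly matched candidates where a mean would not.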

  14. Dynamic 3D computed tomography scanner for vascular imaging

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

A 3D dynamic computed tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and enables measurement of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution modified x-ray image intensifier (XRII)-based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanner showed that tomographic images can be obtained with resolution as high as 3.2 mm^-1, with only a 9% decrease in resolution for objects moving at velocities of 1 cm/s in 2D mode, and static spatial resolution of 3.55 mm^-1, with only a 14% decrease in resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system to imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurement of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the dynamic CT system were not statistically different (p < 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  15. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

Morimoto, A.K.; Bow, W.J.; Strong, D.S.; et al.

    1995-06-01

The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  16. The Atlas3D project -- XXIX. The new look of early-type galaxies and surrounding fields disclosed by extremely deep optical images

    Duc, Pierre-Alain; Karabal, Emin; Cappellari, Michele; Alatalo, Katherine; Blitz, Leo; Bournaud, Frederic; Bureau, Martin; Crocker, Alison F; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; McDermid, Richard M; Michel-Dansac, Leo; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Paudel, Sanjaya; Sarzi, Marc; Scott, Nicholas; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M

    2014-01-01

Galactic archeology based on star counts is instrumental to reconstruct the past mass assembly of Local Group galaxies. The development of new observing techniques and data reduction, coupled with the use of sensitive large field of view cameras, now allows us to pursue this technique in more distant galaxies exploiting their diffuse low surface brightness (LSB) light. As part of the Atlas3D project, we have obtained with the MegaCam camera at the Canada-France-Hawaii Telescope extremely deep, multi-band images of nearby early-type galaxies. We present here a catalog of 92 galaxies from the Atlas3D sample, that are located in low to medium density environments. The observing strategy and data reduction pipeline, that achieve a gain of several magnitudes in the limiting surface brightness with respect to classical imaging surveys, are presented. The size and depth of the survey is compared to other recent deep imaging projects. The paper highlights the capability of LSB-optimized surveys at detecting new pr...

  17. A 3D Optical Metamaterial Made by Self-Assembly

    Vignolini, Silvia

    2011-10-24

    Optical metamaterials have unusual optical characteristics that arise from their periodic nanostructure. Their manufacture requires the assembly of 3D architectures with structure control on the 10-nm length scale. Such a 3D optical metamaterial, based on the replication of a self-assembled block copolymer into gold, is demonstrated. The resulting gold replica has a feature size that is two orders of magnitude smaller than the wavelength of visible light. Its optical signature reveals an archetypal Pendry wire metamaterial with linear and circular dichroism. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Parallel Processor for 3D Recovery from Optical Flow

    Jose Hugo Barron-Zambrano

    2009-01-01

3D recovery from motion has received major attention in computer vision in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main features are the maximum reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture, as well as those from processor synthesis, are presented.

  19. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-06-01

In recent years, impressive progress has been made in digital imaging and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between surfometric and volumetric principles. The literature review is as broad as possible, covering materials science as well as biology, while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Although the techniques discussed are adequate for nanoscopic and microscopic particles, no particular size limit was imposed while compiling the review.

  20. Development of 3D microwave imaging reflectometry in LHD (invited).

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO.

  1. Improved 3D Superresolution Localization Microscopy Using Adaptive Optics

    Piro, Nicolas; Olivier, Nicolas; Manley, Suliana

    2014-01-01

    We demonstrate a new versatile method for 3D super-resolution microscopy by using a deformable mirror to shape the point spread function of our microscope in a continuous and controllable way. We apply this for 3D STORM imaging of microtubules.

  2. FELIX 3D display: an interactive tool for volumetric imaging

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

The FELIX 3D display belongs to the class of volumetric displays using the swept-volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon-mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate several equal or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy-to-transport system. It mainly consists of inexpensive standard, off-the-shelf components for easy implementation. This setup makes it a powerful and flexible tool to keep pace with the rapid technological progress of today. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer-aided design, and scientific data visualization.

  3. The ATLAS3D project - XXIX. The new look of early-type galaxies and surrounding fields disclosed by extremely deep optical images

    Duc, Pierre-Alain; Cuillandre, Jean-Charles; Karabal, Emin; Cappellari, Michele; Alatalo, Katherine; Blitz, Leo; Bournaud, Frédéric; Bureau, Martin; Crocker, Alison F.; Davies, Roger L.; Davis, Timothy A.; de Zeeuw, P. T.; Emsellem, Eric; Khochfar, Sadegh; Krajnović, Davor; Kuntschner, Harald; McDermid, Richard M.; Michel-Dansac, Leo; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Paudel, Sanjaya; Sarzi, Marc; Scott, Nicholas; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M.

    2015-01-01

    Galactic archaeology based on star counts is instrumental to reconstruct the past mass assembly of Local Group galaxies. The development of new observing techniques and data reduction, coupled with the use of sensitive large field of view cameras, now allows us to pursue this technique in more distant galaxies exploiting their diffuse low surface brightness (LSB) light. As part of the ATLAS3D project, we have obtained with the MegaCam camera at the Canada-France-Hawaii Telescope extremely deep, multiband images of nearby early-type galaxies (ETGs). We present here a catalogue of 92 galaxies from the ATLAS3D sample, which are located in low- to medium-density environments. The observing strategy and data reduction pipeline, which achieve a gain of several magnitudes in the limiting surface brightness with respect to classical imaging surveys, are presented. The size and depth of the survey are compared to other recent deep imaging projects. The paper highlights the capability of LSB-optimized surveys at detecting new prominent structures that change the apparent morphology of galaxies. The intrinsic limitations of deep imaging observations are also discussed, among those, the contamination of the stellar haloes of galaxies by extended ghost reflections, and the cirrus emission from Galactic dust. The detection and systematic census of fine structures that trace the present and past mass assembly of ETGs are one of the prime goals of the project. We provide specific examples of each type of observed structures - tidal tails, stellar streams and shells - and explain how they were identified and classified. We give an overview of the initial results. The detailed statistical analysis will be presented in future papers.

  4. 3D imaging and wavefront sensing with a plenoptic objective

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed in recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to compensate for the resolution loss associated with lightfield acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  5. Full Parallax Integral 3D Display and Image Processing Techniques

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to the viewing direction. In this paper, the authors review recent integral 3D display and image processing techniques for improving performance, such as viewing resolution and viewing angle. Design/methodology/approach – First, to improve the viewing resolution of 3D images in an integral imaging display with a lenslet array, the authors present a 3D integral imaging display in focused mode using a time-multiplexed display. Compared with the original focused-mode integral imaging, the authors use electrical masks and the corresponding elemental image set. In this system, the authors can generate resolution-improved 3D images with n×n pixels from each lenslet by using an n×n time-multiplexed display. Second, a new image processing technique for elemental image generation for 3D scenes is presented: with the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – In the first work, the authors improved the resolution of 3D images by using the time-multiplexing technique, demonstrated on a 24-inch integral imaging system; the method can be applied in practical applications. Next, the proposed method with the Kinect device gains a competitive advantage over other methods for the capture of integral images of large 3D scenes. The main advantages of fusing the Kinect and integral imaging concepts are the acquisition speed and the small amount of handled data. Originality/Value – In this paper, the authors review their recent methods related to integral 3D display and image processing techniques. Research type – general review.
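
    The n×n time-multiplexing idea above can be illustrated with a toy sketch: n×n sub-frames, shown sequentially, interleave into one image carrying n×n pixels per lenslet. The function and array names below are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def interleave_subframes(subframes):
        """Combine an n x n grid of time-multiplexed sub-frames (each h x w)
        into one (n*h) x (n*w) image by pixel interleaving."""
        n = subframes.shape[0]          # subframes: (n, n, h, w)
        h, w = subframes.shape[2:]
        out = np.empty((n * h, n * w))
        for dy in range(n):
            for dx in range(n):
                out[dy::n, dx::n] = subframes[dy, dx]
        return out

    # toy example: n = 2, each sub-frame constant-valued for easy inspection
    sub = np.array([[np.full((2, 2), 0), np.full((2, 2), 1)],
                    [np.full((2, 2), 2), np.full((2, 2), 3)]], dtype=float)
    hi = interleave_subframes(sub)      # 4 x 4 interleaved image
    ```

    Each 2×2 block of the output then contains one pixel from every sub-frame, which is the resolution gain the abstract describes.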

  6. 3D Imaging with Structured Illumination for Advanced Security Applications

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  7. 3D passive integral imaging using compressive sensing.

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.
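
    As a generic illustration of the recovery step (not the authors' specific reconstruction pipeline), the sketch below recovers a sparse signal from far fewer random measurements than its dimension, using Orthogonal Matching Pursuit, a standard CS solver:

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 128)) / np.sqrt(64)     # 64 measurements, 128-dim signal
    x_true = np.zeros(128)
    x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]               # 3-sparse toy "elemental image"
    x_hat = omp(A, A @ x_true, k=3)
    ```

    The point mirrored from the abstract: with a sufficiently sparse signal, far fewer measurements than pixels suffice for faithful recovery.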

  8. An Optically Controlled 3D Cell Culturing System

    Kelly S. Ishii

    2011-01-01

    Full Text Available A novel 3D cell culture system was developed and tested. The cell culture device consists of a microfluidic chamber on an optically absorbing substrate. Cells are suspended in a thermoresponsive hydrogel solution, and optical patterns are utilized to heat the solution, producing localized hydrogel formation around cells of interest. The hydrogel traps only the desired cells in place while also serving as a biocompatible scaffold for supporting the cultivation of cells in 3D. This is demonstrated with the trapping of MDCK II and HeLa cells. The light intensity from the optically induced hydrogel formation does not significantly affect cell viability.

  9. 3D Objects Reconstruction from Image Data

    Cír, Filip

    2008-01-01

    This thesis deals with 3D reconstruction from image data. Possible approaches to optical scanning are described. The handheld optical 3D scanner consists of a camera and a line laser source, which is mounted at a fixed angle with respect to the camera. A suitable pad with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, laser detection and the computation of points on the object surface by triangulation are described.
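
    The final triangulation step (intersecting the camera ray with the known laser plane) can be sketched in simplified 2D form. The function name and rig parameters below are illustrative assumptions, not values from the thesis:

    ```python
    import numpy as np

    def triangulate_depth(u, f, b, theta):
        """Depth of a laser-lit point seen at image coordinate u (same units as f).

        Pinhole camera at the origin looking along +z; the line laser sits at
        (b, 0) and its sheet is tilted by angle theta toward the optical axis,
        so beam points satisfy x = b - z*tan(theta). The camera ray through
        pixel u is x = (u/f)*z; intersecting the two gives the depth."""
        return b * f / (u + f * np.tan(theta))

    # sanity check: project a point at known depth, then invert the projection
    f, b, theta = 8.0, 120.0, np.deg2rad(20.0)   # hypothetical rig parameters (mm, rad)
    z_true = 500.0
    x = b - z_true * np.tan(theta)               # where the beam hits the object
    u = f * x / z_true                           # its image coordinate
    z_est = triangulate_depth(u, f, b, theta)
    ```

    In a real scanner the camera pose recovered from the markers would first map pixel coordinates into this camera-laser frame.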

  10. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  11. 3D simulation for solitons used in optical fibers

    Vasile, F.; Tebeica, C. M.; Schiopu, P.; Vladescu, M.

    2016-12-01

    This paper describes a 3D simulation of solitons used in optical fibers. The work starts from the nonlinear propagation equation, whose solutions are the solitons. The paper presents 3D simulations of the fundamental soliton together with the second-order soliton. These simulations help in the study of optical fibers over long distances and of the interactions between solitons, and aid the understanding of the nonlinear propagation equation and of nonlinear waves. The 3D simulations are obtained using the MATLAB programming language, and we can observe the fundamental differences between the fundamental soliton and the second-order/higher-order solitons and in their evolution.
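
    The fundamental soliton referred to above is the sech-shaped solution of the nonlinear Schrödinger equation. A short numerical check (in Python rather than the paper's MATLAB; the normalization i u_z + ½ u_tt + |u|² u = 0 is one common convention, not necessarily the paper's) verifies that it satisfies the equation:

    ```python
    import numpy as np

    def soliton(z, t, N=1):
        """Fundamental (N=1) soliton of i*u_z + 0.5*u_tt + |u|^2*u = 0."""
        return N / np.cosh(t) * np.exp(1j * z / 2)

    # evaluate the NLSE residual numerically with central differences
    t = np.linspace(-10, 10, 2001)
    dz, dt = 1e-4, t[1] - t[0]
    u0, up, um = soliton(0.0, t), soliton(dz, t), soliton(-dz, t)
    u_z = (up - um) / (2 * dz)
    u_tt = (u0[2:] - 2 * u0[1:-1] + u0[:-2]) / dt**2
    residual = 1j * u_z[1:-1] + 0.5 * u_tt + np.abs(u0[1:-1])**2 * u0[1:-1]
    ```

    The residual vanishes to finite-difference accuracy, confirming that the sech pulse propagates with only a phase rotation, while higher-order solitons evolve periodically in shape.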

  12. De la manipulation des images 3D

    Geneviève Pinçon

    2012-04-01

    Full Text Available While 3D technologies provide an accurate and relevant recording of parietal art, they also offer several interesting applications for its analysis. Through point-cloud processing and simulations, they permit a wide range of manipulations for both the observation and the study of parietal works. In particular, they allow a refined perception of their volumetry, and become efficient tools for shape comparison, very useful in the reconstruction of parietal chronologies and in the apprehension of analogies between different sites. These analytical tools are illustrated here by the original work done on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l'Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  13. Optical 3D sensor for large objects in industrial application

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri 1500", is presented. It can be used to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for object handling. Automatic whole-body measurement is achieved by using sensor-head rotation and changeable object position, which can be done under full computer control. Multi-view measurement is realised using the concept of virtual reference points, so no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: a measurement volume extending from 400 mm up to 1500 mm maximum length, a measurement time between 2 min for 12 images and 20 min for 36 images, and a measurement accuracy below 50 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.
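
    Fringe projection systems commonly recover the fringe phase with a phase-shifting formula; whether this particular system uses the four-step variant is not stated in the abstract, so treat the following as a generic sketch:

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 deg:
        I_k = a + b*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
        return np.arctan2(i4 - i2, i1 - i3)

    # synthetic fringes over one phase period
    phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 100)
    a, b = 0.5, 0.4                       # background and modulation amplitudes
    frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
    phi_est = four_step_phase(*frames)
    ```

    The recovered phase is wrapped to (-π, π]; phase unwrapping and the phasogrammetric calibration then turn it into 3D coordinates.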

  14. Calibration of Images with 3D range scanner data

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used to extract the 3D data in a scene. Main application areas are architecture, archaeology and city planning. Though the raw scanner data has only grey-scale values, the 3D data can be merged with colour camera image values to get a textured 3D model of the scene. These devices are also able to take a reliable copy of 3D objects with a high level of accuracy. Therefore, the scanned scenes can be use...

  15. 3D Ground Penetrating Imaging Radar

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  16. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  17. Compression of 3D integral images using wavelet decomposition

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a Three-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding; this achieves decorrelation within and between the 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with deadzone. The results, averaged over the four UII intensity distributions, are presented and compared with the previously reported 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance with respect to compression ratio and image quality at very low bit rates.
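
    The viewpoint-extraction step described above (one pixel per microlens) is a pure re-indexing operation. A minimal numpy sketch, assuming square p×p microlens cells (function name ours):

    ```python
    import numpy as np

    def viewpoint_images(integral_img, p):
        """Split an integral image made of p x p pixel microlens cells into
        the p*p viewpoint images (one pixel taken per microlens)."""
        H, W = integral_img.shape
        # views[i, j] is the viewpoint image built from pixel (i, j) of every cell
        return integral_img.reshape(H // p, p, W // p, p).transpose(1, 3, 0, 2)

    # toy integral image: 3 x 4 microlenses, 2 x 2 pixels each
    img = np.arange(6 * 8).reshape(6, 8)
    views = viewpoint_images(img, 2)      # shape (2, 2, 3, 4)
    ```

    The inverse mapping (scattering each reconstructed viewpoint pixel back to its cell) is the same reshape/transpose run backwards, which is what the decoder does after the inverse transforms.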

  18. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  19. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  20. Holographic Image Plane Projection Integral 3D Display

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  1. 3D imaging of neutron tracks using confocal microscopy

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study, 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. After etching, the plastics had been treated with a 10-minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameters observed was between 4

  2. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden support, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.

  3. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
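
    The matching stage above computes, at every pixel, the strongest degree of match over all object orientations. A heavily simplified numpy stand-in (90-degree rotations only, brute-force normalized correlation; names and the toy data are ours) illustrates the idea:

    ```python
    import numpy as np

    def best_match_map(image, template):
        """Strongest normalized correlation over 90-degree template rotations,
        evaluated at every valid pixel position."""
        th, tw = template.shape
        H, W = image.shape
        score = np.full((H - th + 1, W - tw + 1), -np.inf)
        for rot in (np.rot90(template, k) for k in range(4)):
            t = (rot - rot.mean()) / (rot.std() + 1e-12)
            for y in range(H - th + 1):
                for x in range(W - tw + 1):
                    patch = image[y:y + th, x:x + tw]
                    p = (patch - patch.mean()) / (patch.std() + 1e-12)
                    score[y, x] = max(score[y, x], float((p * t).mean()))
        return score

    # plant a rotated copy of the template in a flat image and locate it
    rng = np.random.default_rng(1)
    tmpl = rng.standard_normal((4, 4))
    img = np.zeros((12, 12))
    img[5:9, 3:7] = np.rot90(tmpl)        # object appears rotated by 90 degrees
    scores = best_match_map(img, tmpl)
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    ```

    The cueing stage would then rank thumbnails by a figure-of-merit derived from the unambiguous local maxima of such a score map.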

  4. 3D flash lidar imager onboard UAV

    Zhou, Guoqing; Liu, Yilong; Yang, Jiazhi; Zhang, Rongting; Su, Chengjie; Shi, Yujun; Zhou, Xiang

    2014-11-01

    A new generation of flash LiDAR sensor called GLidar-I is presented in this paper. GLidar-I is being developed by Guilin University of Technology in cooperation with the Guilin Institute of Optical Communications. GLidar-I consists of a control and processing system, a transmitting system and a receiving system. Each component has been designed and implemented, and tests, experiments and validation have been conducted for each. The experimental results demonstrate that the developed GLidar-I can effectively measure distances of about 13 m at an accuracy level of about 11 cm in the lab.

  5. Acoustic 3D imaging of dental structures

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable array materials; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  6. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
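
    The rigid alignment step above uses a probabilistic GMM registration with unknown correspondences. In the simplified case where correspondences are known, the optimal rigid transform has the classic closed-form Kabsch/Procrustes solution, sketched below as background (this is not the authors' GMM algorithm; names and test values are ours):

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Best rigid transform (R, t) aligning point set P onto Q, assuming
        known one-to-one correspondences (Kabsch / Procrustes solution)."""
        cP, cQ = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # keep a proper rotation
        R = Vt.T @ np.diag([1.0] * (P.shape[1] - 1) + [d]) @ U.T
        return R, cQ - R @ cP

    # recover a known rotation/translation from matched 3D centerline points
    rng = np.random.default_rng(2)
    P = rng.standard_normal((50, 3))
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([1.7, -0.5, 2.0])                # e.g. millimetres
    Q = P @ R_true.T + t_true
    R_est, t_est = kabsch(P, Q)
    ```

    The GMM approach generalizes this by replacing hard correspondences with soft, probabilistic matches, which is what makes it robust to the noisy, partially overlapping centerline models extracted from CTA and XA.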

  7. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects: passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supply the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  8. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Exactly capturing three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence and resolving the motion parameters. Finally, experimental results of acquiring the motion parameters of objects moving with uniform velocity and uniform acceleration along a straight line from real binocular sequence images are presented.

  9. 3D Shape Indexing and Retrieval Using Characteristics level images

    Abdelghni Lakehal

    2012-05-01

    Full Text Available In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and noted LI. The set LI is often bulky, which is why we introduce the X-means technique, instead of the K-means used in the old version, to reduce its size. A 2D binary image descriptor is introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.
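
    The abstract does not define how the binary level images are produced; one plausible minimal sketch (an assumption on our part, not the paper's definition) thresholds a rendered depth map at several evenly spaced levels, yielding one binary slice per level:

    ```python
    import numpy as np

    def level_images(depth_map, n_levels):
        """Binary 'level images': threshold a rendered depth (or intensity)
        map at n_levels evenly spaced interior cuts, one slice per level."""
        lo, hi = depth_map.min(), depth_map.max()
        cuts = np.linspace(lo, hi, n_levels + 2)[1:-1]   # interior thresholds
        return np.stack([depth_map >= c for c in cuts])

    # toy example: a horizontal ramp produces nested binary slices
    ramp = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
    LI = level_images(ramp, 3)            # shape (3, 4, 8)
    ```

    Clustering such a stack (K-means in the old version, X-means here) would then prune redundant slices before the 2D binary descriptor is applied.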

  10. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford... Research context: additive manufacturing (3D printing). Problem: learning curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  11. 3D optical manipulation of a single electron spin

    Geiselmann, Michael; Renger, Jan; Say, Jana M; Brown, Louise J; de Abajo, F Javier García; Koppens, Frank; Quidant, Romain

    2013-01-01

    Nitrogen vacancy (NV) centers in diamond are promising elemental blocks for quantum optics [1, 2], spin-based quantum information processing [3, 4], and high-resolution sensing [5-13]. Yet, fully exploiting these capabilities of single NV centers requires strategies to manipulate them accurately. Here, we use optical tweezers as a tool to achieve deterministic trapping and 3D spatial manipulation of individual nanodiamonds hosting a single NV spin. Remarkably, we find that the NV axis is nearly fixed inside the trap and can be controlled in situ by adjusting the polarization of the trapping light. By combining this unique spatial and angular control with coherent manipulation of the NV spin and fluorescence lifetime measurements near an integrated photonic system, we establish the optically trapped NV center as a novel route to both 3D vectorial magnetometry and sensing of the local density of optical states.

  12. Preliminary examples of 3D vector flow imaging

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental ult...

  13. 3D quantitative phase imaging of neural networks using WDT

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique achieves sub-micron resolution in all three directions with high sensitivity, granted by the low coherence of a white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging of larger samples, including clusters of cells, has not been demonstrated with the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized as neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuronal networks is confocal fluorescence microscopy, which requires fluorescence tagging either with transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuronal networks with high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to observe the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  14. SU-E-T-294: Simulations to Investigate the Feasibility of ‘dry’ Optical-CT Imaging for 3D Dosimetry

    Chisholm, K [Duke University, Durham, NC (United States); Rankine, L [Washington University, Saint Louis, MO (United States); Oldham, M [Duke University Medical Center, Durham, NC (United States)

    2014-06-01

    Purpose: To perform simulations investigating the feasibility of “dry” optical-CT, and to determine optimal design and scanning parameters for a novel dry-tank telecentric optical-CT 3D dosimetry system. Such a system would have important advantages in terms of practical convenience and reduced cost. Methods: A Matlab-based ray-tracing simulation platform, ScanSim, was used to model a telecentric system with a polyurethane dry tank, a cylindrical dosimeter, and surrounding fluid. The program's capabilities were expanded to cover the geometry and physics of dry scanning. To characterize the effects of refractive index (RI) mismatches, simulations were run for several dosimeter (RI = 1.5−1.48) and fluid (RI = 1.55−1.33) combinations. Additional simulations examined the effect of increasing the gap size (1–5 mm) between the dosimeter and the tank wall, and of changing the telecentric lens tolerance (0.5°−5°). The evaluation metric is the usable radius: the distance from the dosimeter center within which the measured and true doses differ by less than 2%. Results: As the tank/dosimeter RI mismatch increases from 0 to 0.02, the usable radius decreases from 97.6% to 50.2%. The optimal fluid RI for matching is lower than either the tank or dosimeter RI. Increasing the gap size has drastic effects on the usable radius, requiring more closely matched fluid at large gap sizes. Increasing the telecentric tolerance through a range from 0.5° to 5.0° improved the usable radius for every combination of media. Conclusion: Dry optical-CT with telecentric lenses is feasible when the dosimeter and tank RIs are closely matched (<0.01 difference), or when data in the periphery are not required. The ScanSim tool proved very useful when the tank and dosimeter have slight differences in RI, by enabling estimation of the optimal RI of the small amount of fluid still required. Slightly spoiling the telecentric beam by increasing the tolerance helps recover the usable radius.
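The effect of an RI mismatch at a flat interface can be sketched with Snell's law. The incidence angle and indices below are illustrative values in the range the record reports, not numbers taken from ScanSim:

```python
import numpy as np

def refraction_angle(theta_i, n1, n2):
    """Snell's law: n1 * sin(theta_i) = n2 * sin(theta_t)."""
    return np.arcsin(np.clip(n1 * np.sin(theta_i) / n2, -1.0, 1.0))

# A nominally telecentric ray hitting the tank/dosimeter interface slightly off-normal.
theta_i = np.deg2rad(5.0)                            # assumed incidence angle
matched  = refraction_angle(theta_i, 1.50, 1.50)     # RI matched: no bending
mismatch = refraction_angle(theta_i, 1.50, 1.48)     # RI mismatch of 0.02
deviation_deg = np.rad2deg(mismatch - theta_i)       # angular error from the mismatch
```

Even a 0.02 mismatch bends each ray by a small angle, and those errors accumulate over the dosimeter radius, which is why the usable radius shrinks as the mismatch grows.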

  15. Focusing optics of a parallel beam CCD optical tomography apparatus for 3D radiation gel dosimetry.

    Krstajić, Nikola; Doran, Simon J

    2006-04-21

    Optical tomography of gel dosimeters is a promising and cost-effective avenue for quality control of radiotherapy treatments such as intensity-modulated radiotherapy (IMRT). Systems based on a laser coupled to a photodiode have so far shown the best results within the context of optical scanning of radiosensitive gels, but are very slow (approximately 9 min per slice) and poorly suited to measurements that require many slices. Here, we describe a fast, three-dimensional (3D) optical computed tomography (optical-CT) apparatus, based on a broad, collimated beam obtained from a high-power LED and detected by a charge-coupled device (CCD). The main advantages of such a system are (i) an acquisition speed approximately two orders of magnitude higher than that of a laser-based system when 3D data are required, and (ii) a greater simplicity of design. This paper advances our previous work by introducing a new design of focusing optics, which takes information from a suitably positioned focal plane and projects an image onto the CCD. An analysis of the ray optics is presented, which explains the roles of telecentricity, focusing, acceptance angle and depth of field (DOF) in the formation of projections. A discussion of the approximation involved in measuring the line integrals required for filtered backprojection reconstruction is given. Experimental results demonstrate (i) the effect on projections of changing the position of the focal plane of the apparatus, (ii) how to measure the acceptance angle of the optics, and (iii) the ability of the new scanner to image both absorbing and scattering gel phantoms. The quality of reconstructed images is very promising and suggests that the new apparatus may be useful in a clinical setting for fast and accurate 3D dosimetry.
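The line integrals behind filtered backprojection can be illustrated with a toy parallel-beam forward projection: rotate the image and sum along one axis. This is a numerical sketch with an invented phantom, not a model of the apparatus's optics:

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    """Parallel-beam projections: each row is the set of line integrals
    along one direction, approximated by summing a rotated copy of the image."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                      # absorbing square "gel"
sino = sinogram(phantom, np.arange(0, 180, 10))  # 18 projection angles
```

A collimated-beam CCD scanner acquires one such projection per frame for the whole slice at once, which is the source of its speed advantage over a point-by-point laser scan.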

  16. Image based 3D city modeling : Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. Demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively; each has different methods suited to image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study exists on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. The study area for this research is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also discusses the governing parameters, factors, and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with comments on what each software package can and cannot do. The study concludes that every package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  17. A colour image reproduction framework for 3D colour printing

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full-colour 3D printing are introduced, and a framework for the colour image reproduction process in 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to reproduce colours faithfully in 3D objects. Two key studies, colour reproduction for soft-tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that applying the proposed colour image reproduction framework significantly enhances colour reproduction performance. With post-processing colour corrections, a further improvement in the colour process is achieved for 3D printed objects.
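Colour-management pipelines of this kind are typically evaluated by converting colours to CIELAB and measuring a colour difference. Below is a minimal sketch using the standard sRGB-to-Lab conversion and the CIE76 ΔE metric; the two sample colours are invented, and this is not the paper's own evaluation code:

```python
import numpy as np

def srgb_to_lab(rgb, white=(95.047, 100.0, 108.883)):
    """sRGB in [0, 1] -> CIE XYZ (D65) -> CIELAB."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = 100 * M @ lin
    t = xyz / np.array(white)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,            # L*
                     500 * (f[0] - f[1]),        # a*
                     200 * (f[1] - f[2])])       # b*

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

target  = srgb_to_lab([0.80, 0.50, 0.40])   # intended colour (hypothetical)
printed = srgb_to_lab([0.78, 0.50, 0.42])   # measured print (hypothetical)
err = delta_e76(target, printed)
```

A colorimetric reproduction workflow tries to drive this ΔE toward zero between the source image and the printed object.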

  18. 3D manipulation with a scanning near field optical nanotweezers

    Berthelot, J; Juan, M L; Kreuzer, M P; Renger, J; Quidant, R

    2013-01-01

    Recent advances in nanotechnology have prompted the need for tools to accurately and non-invasively manipulate individual nano-objects. Among possible strategies, optical forces have been foreseen to provide researchers with nano-optical tweezers capable of trapping a specimen and moving it in 3D. In practice, though, the combination of the weak optical forces involved and photothermal issues has thus far prevented their experimental realization. Here, we demonstrate the first 3D optical manipulation of single 50 nm dielectric objects with near-field nano-tweezers. The nano-optical trap is built by engineering a bowtie plasmonic aperture at the extremity of a tapered, metal-coated optical fiber. Both the trapping operation and its monitoring are performed through the optical fiber, making these nano-tweezers totally autonomous and free of bulky optical elements. The achieved trapping performance allows the trapped specimen to be moved over tens of micrometers during several minutes with very low in-trap intensities. This n...

  19. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computer tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that allows management of contours with complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model: automatic correction of the occlusion between the mandible and maxilla, and teeth segmentation enabling more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of enabling telediagnosis and remote treatment.

  20. Imaging fault zones using 3D seismic image processing techniques

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault-system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated over the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems, to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.
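Amplitude and phase attributes of the kind mentioned above are commonly derived from the analytic (complex) trace. A minimal sketch on a synthetic trace, assuming the standard Hilbert-transform construction (the wavelet here is an invented Gaussian-modulated cosine, not seismic data):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic reflection event: a Gaussian-modulated 30 Hz cosine on a 1 s trace.
t = np.linspace(0.0, 1.0, 500)
trace = np.exp(-40 * (t - 0.5) ** 2) * np.cos(2 * np.pi * 30 * t)

analytic = hilbert(trace)               # complex trace: x + i * H(x)
envelope = np.abs(analytic)             # instantaneous amplitude attribute
phase = np.unwrap(np.angle(analytic))   # instantaneous phase attribute
```

The envelope tracks reflector strength independently of the wavelet's oscillation, which is what makes amplitude attributes useful for delineating short-range anomalies along faults.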

  1. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
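The entropy half of the reference-slice criterion can be sketched as follows. The three-slice stack and the pick-the-maximum rule are simplified stand-ins for the paper's iterative entropy/MSE assessment, purely for illustration:

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy stack: a featureless slice, a richly textured slice, a low-contrast slice.
rng = np.random.default_rng(0)
stack = [np.full((32, 32), 0.5),
         rng.uniform(size=(32, 32)),
         np.clip(rng.normal(0.5, 0.05, (32, 32)), 0.0, 1.0)]

# Choose the most informative slice as the registration reference.
ref_index = int(np.argmax([entropy(s) for s in stack]))
```

A high-entropy slice carries more gray-level structure for the registration cost function to lock onto, which is why entropy is a reasonable ingredient in reference selection.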

  2. DCT and DST Based Image Compression for 3D Reconstruction

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured-light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) applied to each row of the image; (2) a one-dimensional Discrete Sine Transform (DST) applied to each column of the output from the previous step, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured-light patterns for 3D reconstruction and images from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
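The two-transform stage can be sketched with SciPy's DCT/DST routines. The test image and quantization step below are invented, and the paper's entropy-coding and binary-search recovery stages are omitted:

```python
import numpy as np
from scipy.fft import dct, idct, dst, idst

# Smooth synthetic test image.
img = np.outer(np.hanning(32), np.hanning(32))

# Row-wise DCT, then column-wise DST, as in the paper's two-transform pipeline.
coeffs = dst(dct(img, axis=1, norm='ortho'), axis=0, norm='ortho')

# Uniform quantizer (step chosen arbitrarily for illustration).
q = 0.01
quantized = np.round(coeffs / q) * q

# Inverse transforms recover the image up to quantization error.
recon = idct(idst(quantized, axis=0, norm='ortho'), axis=1, norm='ortho')
rmse = float(np.sqrt(np.mean((img - recon) ** 2)))
```

Because both transforms are orthonormal with `norm='ortho'`, the image-domain RMSE equals the coefficient-domain quantization error, so a small step q gives near-lossless reconstruction.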

  3. Changes in quantitative 3D shape features of the optic nerve head associated with age

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor the progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant associations with age, and may also prove useful in examining associations between ONH structure and glaucoma, disease progression and outcomes, and genetic factors.
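Deriving eigen structures by principal component analysis can be sketched on synthetic depth maps. All data below are invented (a radially symmetric "cup" scaled per subject), standing in for the stereo-derived ONH measurements of the actual 565-subject study:

```python
import numpy as np

# Synthetic cohort: each subject's depth map is a common cup shape with a
# subject-specific depth scale plus measurement noise.
rng = np.random.default_rng(0)
n_subjects, h, w = 50, 16, 16
base = np.fromfunction(lambda y, x: ((x - 8) ** 2 + (y - 8) ** 2) / 64, (h, w))
depth_maps = np.stack([base * rng.uniform(0.5, 1.5) + rng.normal(0, 0.01, (h, w))
                       for _ in range(n_subjects)])

# PCA via SVD on the centered, flattened maps: rows of Vt are eigen structures.
X = depth_maps.reshape(n_subjects, -1)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S ** 2 / (S ** 2).sum()      # variance explained per mode
scores = X @ Vt[:3].T                    # each subject's coefficients on top 3 modes
```

The per-subject scores are the low-dimensional quantities one would then regress against age, gender, or race.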

  4. Implementation of 3D Optical Scanning Technology for Automotive Applications

    Abdil Kuş

    2009-03-01

    Reverse engineering (RE) is a powerful tool for generating a CAD model from the 3D scan data of a physical part that lacks documentation or has changed from its original CAD design. Digitizing a part and creating a CAD model from 3D scan data is less time consuming and provides greater accuracy than manually measuring the part and designing it from scratch in CAD. 3D optical scanning is a measurement technology that has evolved over the last few years and is used in a wide range of areas, from industrial applications to art and cultural heritage. It is also used extensively in the automotive industry for applications such as part inspection, scanning of tools without a CAD definition, scanning of castings to define the stock (i.e., the amount of material to be removed from the surface of the casting) for CAM programs, and reverse engineering. In this study, two scanning experiments for automotive applications are illustrated. The first examines the process from scanning to re-manufacturing a damaged sheet-metal cutting die using a 3D scanning technique, and the second compares scanned point-cloud data to 3D CAD data for inspection purposes. Furthermore, the deviations of the part holes are determined using different lenses and scanning parameters.
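The scan-versus-CAD inspection step can be sketched as nearest-neighbour deviations between two point clouds. The flat reference patch and noise level below are invented, purely to show the mechanics:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Reference "CAD" surface: 2000 points sampled on the plane z = 0.
u, v = rng.uniform(0.0, 1.0, (2, 2000))
reference = np.column_stack([u, v, np.zeros_like(u)])

# Simulated scan: same points with small out-of-plane measurement noise.
scan = reference + np.column_stack([np.zeros((2000, 2)),
                                    rng.normal(0.0, 0.02, 2000)])

# Deviation of each scanned point = distance to its nearest reference point.
dist, _ = cKDTree(reference).query(scan)
report = {"mean": float(dist.mean()), "p95": float(np.quantile(dist, 0.95))}
```

Real inspection tools compute point-to-surface rather than point-to-point distances, but the k-d-tree nearest-neighbour query is the core of both.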

  5. Determining 3D flow fields via multi-camera light field imaging.

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
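Synthetic aperture refocusing can be sketched as shift-and-average over a camera array: shift each camera's image in proportion to its baseline and a chosen depth, then average; features at that depth align and stay sharp, while features at other depths smear out. The 1D toy scene and baselines below are invented:

```python
import numpy as np

def refocus(images, baselines, depth):
    """Shift each view by its disparity at the chosen depth, then average."""
    shifted = [np.roll(img, int(round(b / depth)), axis=1)
               for img, b in zip(images, baselines)]
    return np.mean(shifted, axis=0)

# Toy scene: one bright feature at x = 32, seen with disparity b / depth per camera.
baselines = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
true_depth = 2.0
images = []
for b in baselines:
    img = np.zeros((1, 64))
    img[0, 32 - int(round(b / true_depth))] = 1.0   # feature shifted by its disparity
    images.append(img)

stack_right = refocus(images, baselines, true_depth)  # focused: peak re-aligns to 1.0
stack_wrong = refocus(images, baselines, 1.0)         # misfocused: energy spreads out
```

Sweeping the depth parameter produces the focal stack from which 3D particle positions, bubbles, or flame boundaries can then be extracted.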

  6. Open-source 3D-printable optics equipment.

    Chenlong Zhang

    Full Text Available Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform are illustrated as control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make its clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods.

  7. Open-source 3D-printable optics equipment.

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform are illustrated as control for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make its clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods.

  8. A 3D surface imaging system for assessing human obesity

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisition. The portability of the system was achieved via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment were also developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
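The depth estimate underlying a stereo-vision system like this follows the standard triangulation relation Z = fB/d (depth from focal length, baseline, and disparity). The numbers below are assumed, purely for illustration:

```python
import numpy as np

# Stereo triangulation: Z = f * B / d for a rectified camera pair.
focal_px = 1200.0         # focal length in pixels (assumed camera)
baseline_m = 0.30         # distance between the two cameras (assumed)
disparity_px = np.array([180.0, 120.0, 90.0])   # matched feature disparities

depth_m = focal_px * baseline_m / disparity_px  # depth of each matched point
```

Because depth varies as 1/d, small disparity errors matter most for far surfaces, which is one reason the stereo matching step had to be customized for body imaging.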

  9. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method that uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a region of interest (ROI) when multiple types of expert knowledge are available. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple sources of knowledge, we refine the ROI as the intersection of all of the shapes expected under each single source of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
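The three-region split from rough set theory can be sketched with boolean masks, treating each knowledge source's expected shape as a set: voxels inside every shape form the positive region (lower approximation), voxels inside none form the negative region, and the rest is the boundary region. The toy volume below is invented, not the paper's implementation:

```python
import numpy as np

# Toy 32^3 volume with two slightly offset spherical "expert" ROIs.
z, y, x = np.ogrid[:32, :32, :32]
expert_a = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 10 ** 2
expert_b = (x - 18) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 10 ** 2

positive = expert_a & expert_b      # lower approximation: all sources agree
union    = expert_a | expert_b      # upper approximation: any source includes
boundary = union & ~positive        # uncertain voxels, to be resolved further
negative = ~union                   # definitely outside the ROI
```

Segmentation effort can then be concentrated on the boundary region, since the positive and negative regions are already decided.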

  10. 3D Images of Materials Structures Processing and Analysis

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. But insight into the 3D geometry of the microstructure of materials, and measurement of its characteristics, are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on the processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  11. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon, coated with silver, for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed that it can support several plasmonic modes (in the 300-800 nm wavelength range) that can couple to a fluorophore on the surface of the substrate, giving rise to enhanced fluorescence. Spectral analysis suggests that the nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G), due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound to the cell membrane was amplified more than 100-fold compared to that on a glass substrate. We believe that strong scattering within the nanostructured area, coupled with random scattering inside the cell, resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  12. A Texture Analysis of 3D Radar Images

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper, a texture feature coding method is developed for application to high-resolution 3D radar images in order to improve target detection. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail
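A minimal stand-in for texture-based segmentation is a local-variance map: var = E[x²] − E[x]² over a sliding window. This is not the paper's texture feature coding method, just a simple illustration of why texture separates targets whose mean intensity matches the clutter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=5):
    """Sliding-window variance: E[x^2] - E[x]^2 over a size x size window."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean ** 2

# Invented scene: smooth clutter at 0.5 with a rough (textured) target patch
# whose mean intensity is also 0.5, so intensity alone cannot detect it.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.5)
scene[20:30, 20:30] = 0.5 + 0.3 * rng.standard_normal((10, 10))

tex = local_variance(scene)
mask = tex > 0.5 * tex.max()    # crude texture-based segmentation
```

Texture feature coding replaces this single statistic with a richer encoding of local gray-level transitions, but the detection principle is the same.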

  13. 3-D Imaging Systems for Agricultural Applications—A Review

    Manuel Vázquez-Arellano

    2016-04-01

    Efficiency increases of resources through the automation of agriculture require more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on recent progress in optical 3-D sensors. This review begins with an overview of the different optical 3-D vision techniques and their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  14. 3-D Imaging Systems for Agricultural Applications—A Review

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increases of resources through the automation of agriculture require more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on recent progress in optical 3-D sensors. This review begins with an overview of the different optical 3-D vision techniques and their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  16. Interactive visualization of multiresolution image stacks in 3D.

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
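
    The texel-projection criterion described above can be sketched numerically: pick the quad-tree level whose texels, projected through a pinhole model, cover roughly one screen pixel. The function below is an illustrative sketch, not StackVis code; the parameterization (finest-level texel size, pinhole focal length in pixels) is an assumption made for the example.

```python
import math

def choose_level(texel_size, viewer_distance, focal_px, num_levels):
    """Pick the quad-tree level whose texels project to roughly one screen
    pixel.  Level 0 is the finest tier; each coarser level doubles the
    physical texel size.  Pinhole model: projected size (in screen pixels)
    of a finest-level texel is focal_px * texel_size / viewer_distance.
    All parameter names are hypothetical, for illustration only."""
    projected = focal_px * texel_size / viewer_distance
    if projected >= 1.0:
        # A finest-level texel already covers a pixel or more: no finer
        # data exists, so use level 0.
        return 0
    # Each coarser level doubles the texel size, so the level that brings
    # the projection back up to ~1 pixel is log2(1 / projected).
    level = int(math.floor(math.log2(1.0 / projected)))
    return min(level, num_levels - 1)
```

    For a viewer eight times farther away than the distance at which finest-level texels map to one screen pixel, the sketch selects level 3, i.e. tiles three tiers coarser.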

  17. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions.

  18. Hybrid wide-field and scanning microscopy for high-speed 3D imaging.

    Duan, Yubo; Chen, Nanguang

    2015-11-15

    Wide-field optical microscopy is efficient and robust in biological imaging, but it lacks depth sectioning. In contrast, scanning microscopic techniques, such as confocal microscopy and multiphoton microscopy, have been successfully used for three-dimensional (3D) imaging with optical sectioning capability. However, these microscopic techniques are not very suitable for dynamic real-time imaging because they usually take a long time for temporal and spatial scanning. Here, a hybrid imaging technique combining wide-field microscopy and scanning microscopy is proposed to accelerate the image acquisition process while maintaining the 3D optical sectioning capability. The performance was demonstrated by proof-of-concept imaging experiments with fluorescent beads and zebrafish liver.

  19. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely used in fields such as structural measurement, topographic surveying, and architectural and archeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here the SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines in such a reconstruction do not remain parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained, so multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points, the more general approach of bundle adjustment is used. Finally, two real cases, an excavation and a tower, are reconstructed.
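
    The projective-reconstruction step, recovering a 3D point from its projections in two views, is commonly solved by linear (DLT) triangulation. The sketch below is illustrative rather than the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Each observation contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

    Projecting a known point through two normalized cameras with a unit baseline and triangulating its two images recovers the original coordinates to numerical precision.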

  20. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is a major technical challenge with important applications in many research fields, ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through turbid layers have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, 3D imaging can reconstruct the complex amplitude of objects situated at different depths.

  1. Innovations in 3D printing: a 3D overview from optics to organs.

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  2. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Al-Oraiqat Anas M.

    2016-06-01

    This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) showed that approximately 60% of the time is spent on the computation itself, while the remaining 40% is spent transferring data between the central processing unit and the GPU and organizing the visualization process. A study of how increasing the size of the GPU grid affects computation speed showed the importance of correctly structuring the parallel computing network and of the overall parallelization mechanism.
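
    A 60/40 split between computation and CPU-GPU transfer bounds the gain achievable from faster kernels alone: by Amdahl's law, even infinitely faster ray-tracing kernels would speed the pipeline up by at most 1/0.4 = 2.5x. An illustrative calculation:

```python
def max_speedup(serial_fraction, compute_speedup=float('inf')):
    """Amdahl's law: overall speedup when only the compute part
    (1 - serial_fraction) of the runtime is accelerated.  With ~40% of the
    time in CPU-GPU transfer, infinitely faster kernels cap the overall
    speedup at 1 / 0.4 = 2.5x."""
    compute_fraction = 1.0 - serial_fraction
    if compute_speedup == float('inf'):
        return 1.0 / serial_fraction
    return 1.0 / (serial_fraction + compute_fraction / compute_speedup)
```

    This is why reducing transfer overhead (e.g. overlapping copies with computation) can matter more than further optimizing the kernels themselves.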

  3. Optical monitoring of scoliosis by 3D medical laser scanner

    Rodríguez-Quiñonez, Julio C.; Sergiyenko, Oleg Yu.; Preciado, Luis C. Basaca; Tyrsa, Vera V.; Gurko, Alexander G.; Podrygalo, Mikhail A.; Lopez, Moises Rivas; Balbuena, Daniel Hernandez

    2014-03-01

    Three-dimensional recording of the human body surface or of anatomical areas has gained importance in many medical applications. In this paper, our 3D medical laser scanner, based on the novel principle of dynamic triangulation, is presented. We analyze its method of operation, its medical applications, orthopaedic diseases such as scoliosis, and the most common skin types, so as to employ the system in the most appropriate way. A group of medical problems related to the optimal application of optical scanning is analyzed. Finally, experiments are conducted to verify the performance of the proposed system and its measurement uncertainty.

  4. 3D OPTICAL AND IR SPECTROSCOPY OF EXCEPTIONAL HII GALAXIES

    E. Telles

    2009-01-01

    In this contribution I very briefly summarize some recent results obtained by applying 3D spectroscopy to observations of the well-known HII galaxy II Zw 40, both in the optical and in the near-IR. I have studied the distribution of dust in the starburst region, the velocity and velocity dispersion, and the geometry of the molecular hydrogen and ionized gas. I found a clear correlation between the components of the ISM and the velocity field, suggesting that the latter plays a fundamental role in defining the modes of the star formation process.

  5. Automated curved planar reformation of 3D spine images

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined from the curve that represents the vertebral column and from the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR trades structural complexity for improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
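
    The spine-based coordinate system rests on polynomial models of the vertebral-column curve. A minimal sketch of the idea, with synthetic centreline points standing in for the detected vertebral centres (all values here are assumptions for illustration):

```python
import numpy as np

# Hypothetical vertebral-column centre points (x, y, z) in image coordinates.
z = np.linspace(0.0, 100.0, 11)
x = 0.002 * z**2 + 0.1 * z + 5.0      # synthetic curved centreline
y = np.full_like(z, 20.0)

# Model the spine curve as low-degree polynomials in the axial coordinate z,
# in the spirit of the paper's spine-based coordinate system.
px = np.polyfit(z, x, deg=2)
py = np.polyfit(z, y, deg=2)

def spine_point(zq):
    """Centreline position at axial position zq: the curve along which the
    CPR cross-sections would be sampled."""
    return float(np.polyval(px, zq)), float(np.polyval(py, zq)), float(zq)
```

    In the real method the polynomial parameters come from an image-based optimization rather than a least-squares fit to given points, but the resampling along the fitted curve works the same way.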

  6. A physical model eye with 3D resolution test targets for optical coherence tomography

    Hu, Zhixiong; Liu, Wenli; Hong, Baoyu; Hao, Bingtao; Wang, Lele; Li, Jiao

    2014-09-01

    Optical coherence tomography (OCT) has been widely employed as a non-invasive 3D imaging diagnostic instrument, particularly in the field of ophthalmology. Although OCT has been approved for clinical use in the USA, Europe and Asia, international standardization of this technology is still in progress. Validation of OCT imaging capabilities is considered extremely important to ensure its effective use in clinical diagnosis. A phantom with appropriate test targets can help evaluate and calibrate the imaging performance of an OCT instrument both at installation and throughout its lifetime. In this paper, we design and fabricate a physical model eye with 3D resolution test targets to characterize OCT imaging performance. The model eye was fabricated from transparent resin to simulate a realistic ophthalmic testing environment, and the key optical elements, including the cornea, lens and vitreous body, were realized. Test targets mimicking the USAF 1951 test chart were fabricated on the fundus of the model eye by 3D printing. Unlike the traditional two-dimensional USAF 1951 chart, a group of patterns with different thicknesses in depth was fabricated, so that by measuring the 3D test targets, the axial and lateral resolution of an OCT system can be evaluated at the same time. To investigate this specialized model eye, it was measured with a scientific spectral-domain OCT instrument and with a clinical OCT system. The results demonstrate that the model eye with 3D resolution test targets has the potential to qualitatively and quantitatively validate the performance of OCT systems.
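
    The resolution encoded by a USAF 1951 element follows the standard formula R = 2^(group + (element - 1)/6) line pairs per millimetre, which is how measurements on such targets are converted into resolution figures:

```python
def usaf_resolution_lp_per_mm(group, element):
    """Resolution of a USAF 1951 target element in line pairs per mm:
    R = 2 ** (group + (element - 1) / 6).
    Groups step by factors of 2; the six elements within a group step by
    a sixth-root-of-2 each."""
    return 2.0 ** (group + (element - 1) / 6.0)
```

    For example, group 0 element 1 is 1 lp/mm and group 2 element 1 is 4 lp/mm; the smallest resolvable element read off an OCT image of the target gives the lateral resolution.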

  7. Autonomous Planetary 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  8. Integration of real-time 3D image acquisition and multiview 3D display

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects and scenes. The vivid representation of captured 3D objects on a glasses-free 3D display screen can give viewers a realistic viewing experience, as if they were viewing the real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, little effort has been devoted to studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. The paper covers both the architecture design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated system is built to demonstrate its real-time 3D acquisition and 3D display capability.

  9. Extracting 3D Layout From a Single Image Using Global Image Structures

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  10. 3D refractive index measurements of special optical fibers

    Yan, Cheng; Huang, Su-Juan; Miao, Zhuang; Chang, Zheng; Zeng, Jun-Zhang; Wang, Ting-Yun

    2016-09-01

    A digital holographic microscopic tomography-based approach with considerably improved accuracy, a simplified configuration and stable performance is proposed to measure the three-dimensional refractive index of special optical fibers. Based on this approach, a measurement system is established, incorporating a modified Mach-Zehnder interferometer and lab-developed supporting software for data processing. In the system, the phase projection distribution of an optical fiber is utilized to obtain an optimal digital hologram recorded by a CCD, and an angular-spectrum algorithm is then adopted to extract the phase distribution of the object wave. Rotating the fiber enables experimental measurement of the phase information at multiple angles. Using the filtered back projection algorithm, the 3D refractive index of the optical fiber is then obtained at high accuracy. To evaluate the proposed approach, both PANDA fibers and a special elliptical optical fiber are measured in the system. The results for the PANDA fibers agree well with those measured using an S14 Refractive Index Profiler, which is, however, not suitable for measuring a special elliptical fiber.
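
    The quantity recovered from each hologram is a phase projection, which relates to the refractive-index difference along the optical path by delta_phi = 2*pi*delta_n*d/lambda; filtered back projection then combines many such projections into a 3D index map. Inverting the single-projection relation:

```python
import math

def index_difference(phase_shift_rad, wavelength_m, thickness_m):
    """Refractive-index difference recovered from a measured phase shift
    through a sample of thickness d:
        delta_n = delta_phi * lambda / (2 * pi * d)."""
    return phase_shift_rad * wavelength_m / (2.0 * math.pi * thickness_m)
```

    A phase shift of pi radians through a 1 mm path at a 1 um wavelength corresponds to an index difference of 5e-4, the order of magnitude typical for fiber core-cladding contrast.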

  11. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    P. Faltin

    2010-01-01

    The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed with a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; the difficulty of navigation is aggravated by the use of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC algorithm. These matched point pairs are then used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. These points are used to generate a projective 3D reconstruction of the scene, and provide the first step towards further metric reconstructions.
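
    The modified RANSAC step estimates epipolar geometry from noisy matches by repeatedly fitting minimal samples and keeping the hypothesis with the most inliers. The same principle, shown on a deliberately simple line model rather than the fundamental matrix (illustrative only):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Illustrative RANSAC on a 2D line y = a*x + b.  The paper applies the
    same hypothesize-and-verify loop to epipolar geometry, where the minimal
    sample is a set of point correspondences instead of two points."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):          # keep best consensus
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

    On ten points lying on y = 2x + 1 plus three gross outliers, the consensus line recovers the slope and intercept exactly while rejecting the outliers.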

  12. Deformable Surface 3D Reconstruction from Monocular Images

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  13. 3D CARS image reconstruction and pattern recognition on SHG images

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based, e.g., on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigation of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated classification of nonlinear optical images, still bears great possibilities for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy both for 3D visualization of human tissues and for automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed, e.g., in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness, and hence reflecting the physiological state of the tissue, is obtained. Two additional scoring methods for collagen organization, based respectively on a statistical analysis of the mutual organization of fibers and on the FFT, are also presented.
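
    As an illustration of a waviness-style score (not the authors' exact metric, which is not specified here), one can measure how far an extracted fiber centreline deviates from a straight line, normalized by fiber length:

```python
import numpy as np

def waviness(x, y):
    """RMS deviation of a fiber centreline (x, y samples) from its best
    straight-line fit, normalized by the end-to-end fiber length.
    Zero for a straight fiber; grows with waviness.  A toy analogue of a
    collagen-waviness score, not the paper's method."""
    a, b = np.polyfit(x, y, 1)                 # best straight-line fit
    residual = y - (a * x + b)                 # deviation from straightness
    length = np.hypot(x[-1] - x[0], y[-1] - y[0])
    return float(np.sqrt(np.mean(residual ** 2)) / length)
```

    A straight centreline scores essentially zero, while a sinusoidally wavy one scores noticeably higher, matching the intuition that healthy (wavy) and keloidal (straightened) collagen separate on such a scale.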

  14. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and six methods with different sharpness-control parameters are formulated in detail. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
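
    The sharpness-control parameter enters through the standard cubic convolution kernel (Keys' kernel, with free parameter a). A 1D sketch of the kernel and of interpolation with it; the 3D case applies the same kernel separably along each axis:

```python
import math

def keys_kernel(x, a=-0.5):
    """Cubic convolution kernel with sharpness parameter a.
    a = -0.5 is the classical choice; other values trade sharpness for
    ringing."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t, a=-0.5):
    """Interpolate uniformly spaced samples at position t using the four
    nearest samples (needs one sample of margin on each side)."""
    i = math.floor(t)
    f = t - i
    return sum(samples[i + m] * keys_kernel(m - f, a) for m in (-1, 0, 1, 2))
```

    The kernel interpolates (it is 1 at x = 0 and 0 at the other sample positions), so original samples are reproduced exactly, and linear ramps are reconstructed without error for any a.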

  15. VHRS Stereo Images for 3D Modelling of Buildings

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents a project carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment concerns the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation, Summit Evolution, combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the satellite image pair, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS errors for the model reconstructed from the RPC coefficients alone were 16.6 m, 2.7 m and 47.4 m for the X, Y and Z coordinates, respectively. With the addition of 5 control points the RMS errors improved to 0.7 m, 0.7 m and 1.0 m; the best results were achieved when the RMS errors were estimated from deviations in 17 check points (with 5 control points) and amounted to 0.4 m, 0.5 m and 0.6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and then used for 3D modelling of the buildings in Google SketchUp. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (with regard to the number of details) had been reconstructed at level LoD1, while the accuracy of these models corresponded to level LoD2.

  17. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions under arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
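
    The temporal bone-length constancy idea can be written as a penalty on the variation of each bone's length across frames; the form below is illustrative (the paper's exact regularizer may differ):

```python
import numpy as np

def bone_length_penalty(joints, bones):
    """Temporal bone-length constancy regularizer: penalize the variance of
    each bone's length across frames.
    joints: array of shape (frames, n_joints, 3);
    bones:  list of (parent, child) joint-index pairs.
    Zero when every bone keeps a constant length over the sequence."""
    total = 0.0
    for p, c in bones:
        lengths = np.linalg.norm(joints[:, c] - joints[:, p], axis=1)
        total += float(np.var(lengths))
    return total
```

    A rigidly translating skeleton incurs no penalty, while a bone that stretches over time does, which is exactly the behaviour such a term is meant to suppress in the reconstruction.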

  18. Interactive 2D to 3D stereoscopic image synthesis

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
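
    The zero-parallax plane mentioned above follows from the standard screen-parallax relation for a point at viewing distance Z, a screen (zero-parallax plane) at distance D, and eye separation e: p = e(Z - D)/Z. A small sketch:

```python
def screen_parallax(eye_sep, screen_dist, depth):
    """Horizontal screen parallax of a point at viewing distance `depth`,
    for eye separation `eye_sep` and a zero-parallax plane at `screen_dist`:
        p = eye_sep * (depth - screen_dist) / depth.
    Zero at the screen plane, negative (crossed) in front of it, and
    approaching eye_sep for very distant points."""
    return eye_sep * (depth - screen_dist) / depth
```

    This is the relation a stereo synthesizer uses when mapping depth-map values to horizontal offsets between the left and right views.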

  19. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here, and the suitability of the available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured-light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy for the majority of measurements. Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I significantly higher resolution. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications: Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  20. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on this optical-sensor system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED pointer, regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator achieves interaction with auto-stereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  1. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, the user performs a rough manual preconfiguration of both prosthesis models so that the subsequent fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
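
    The fine-matching stage described above can be sketched as coordinate-wise hill climbing over the 6 pose parameters with a shrinking step size. This is a minimal illustration, not the authors' implementation; the `score` function stands in for their matching function, and the toy quadratic objective is purely hypothetical.

```python
import numpy as np

def fine_match(score, x0, step=1.0, min_step=1e-3, shrink=0.5):
    """Coordinate-wise hill climbing over 6 pose parameters
    (3 rotational, 3 translational): each parameter is nudged by a
    minimal amount while the matching score improves; the step is
    then shrunk until it falls below min_step."""
    x = np.asarray(x0, dtype=float)
    best = score(x)
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                s = score(trial)
                if s > best:
                    x, best, improved = trial, s, True
        if not improved:
            step *= shrink
    return x, best

# toy matching function peaked at a known "true" pose (rx, ry, rz, tx, ty, tz)
true_pose = np.array([0.1, -0.2, 0.05, 4.0, -1.0, 2.5])
score = lambda p: -np.sum((p - true_pose) ** 2)
pose, s = fine_match(score, np.zeros(6))
```

    The recovered pose converges to within roughly the final step size of the optimum, matching the spirit of "changes each parameter by a minimal amount until a maximum of the matching function is reached."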

  2. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, resulting in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections yields low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images connect the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image …

  3. Image Appraisal for 2D and 3D Electromagnetic Inversion

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional nonlinear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross-well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
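
    For the direct-inversion case, the model resolution and posterior covariance matrices described above can be computed in a few lines. This is a generic damped-least-squares sketch, assuming a Tikhonov-regularized generalized inverse; the Jacobian `J`, damping `lam`, and noise level `sigma` are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def appraisal(J, lam=1e-2, sigma=1.0):
    """Image appraisal for a damped least-squares inversion
    m = (J^T J + lam*I)^-1 J^T d. Returns the model resolution
    matrix R (how the estimate blurs the true model) and the
    posterior model covariance C (how data noise of std sigma
    maps into parameter error)."""
    JtJ = J.T @ J
    G = np.linalg.inv(JtJ + lam * np.eye(JtJ.shape[0])) @ J.T  # generalized inverse
    R = G @ J
    C = sigma ** 2 * (G @ G.T)
    return R, C

# tiny illustrative Jacobian: 3 data, 2 model parameters
J = np.array([[1.0, 0.2],
              [0.1, 0.9],
              [0.3, 0.3]])
R, C = appraisal(J, lam=1e-3)
```

    As in the paper, the diagonal of R indicates how well each parameter is resolved, and the square root of the diagonal of C estimates the parameter error.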

  4. Optimal Point Spread Function Design for 3D Imaging

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract maximum physical information about the position of a single nanoscale object from its image, we propose and demonstrate a framework for pupil-plane modulation in 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method maximizes the information content of the system by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that yields a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under the physically common condition of high background signal. PMID:25302889
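
    The Fisher-information criterion underlying the PSF optimization can be illustrated on the simplest case: localization of an emitter along one axis with a pixelated PSF under Poisson noise. The sketch below assumes a 1D Gaussian PSF and hypothetical photon counts; it recovers the textbook result that the Cramér-Rao lower bound on localization precision scales as the PSF width over the square root of the photon count.

```python
import numpy as np

def fisher_localization(psf, dpsf, photons):
    """Fisher information for the position of an emitter imaged with
    a pixelated PSF under Poisson noise, and the corresponding
    Cramer-Rao lower bound (CRLB) on localization precision."""
    mu = photons * psf                    # expected counts per pixel
    info = np.sum((photons * dpsf) ** 2 / mu)
    return info, 1.0 / np.sqrt(info)

# 1D Gaussian PSF on a fine pixel grid (hypothetical width and counts)
s = 1.0
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
g = np.exp(-x ** 2 / (2 * s * s)) / (np.sqrt(2 * np.pi) * s) * dx  # ~normalized
dg = (x / (s * s)) * g                    # derivative w.r.t. emitter position
info, crlb = fisher_localization(g, dg, photons=1000)
```

    Optimizing the pupil-plane phase, as in the paper, amounts to shaping `psf` (and hence `dpsf` along x, y, and z) so that this information is maximized over the desired depth range.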

  5. 3D reconstruction of concave surfaces using polarisation imaging

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
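
    For reference, the diffuse baseline that polarisation-based photometric stereo builds on recovers albedo and surface normal per pixel by least squares from three or more known light directions. The light directions and synthetic normal below are hypothetical; the paper itself works with the specular polarisation component instead of the diffuse one.

```python
import numpy as np

def photometric_stereo(L, I):
    """Lambertian photometric stereo at one pixel: given k >= 3 unit
    light directions L (k x 3) and observed intensities I (k,),
    recover albedo and unit surface normal by least squares."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)   # normalize light directions
n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)
I = 0.8 * L @ n_true                            # synthetic intensities, albedo 0.8
rho, n = photometric_stereo(L, I)
```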

  6. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a Digital Ortho-photo Map serves as an auxiliary source, and 3ds Max software is used as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  7. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave imaging algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges … at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.

  8. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment, including shape, migration/proliferation, and axon projection, for all in vitro experiments, it is necessary to adopt an optical imaging system that enables monitoring of 3-D cellular activities and morphology through the thickness of the construct over an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform equipped with an environmental chamber optimized for capturing time-lapse sequences of live-cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase-contrast microscopy with 20x and 40x objectives, providing a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them) to mimic features well exemplified in the cellular activity of neuronal growth in a 3-D environment. This was followed by detailed investigations of axonal projection and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells responding to chemoattractant and topographic cues within the scaffolds has produced encouraging results.

  9. Discrete Method of Images for 3D Radio Propagation Modeling

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  10. 3D reconstruction of multiple stained histology images

    Yi Song

    2013-01-01

    Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for the quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  11. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side by side and optical-path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters sacrifices optical performance: the limited light entry worsens the final image resolution and brightness. Optofluidics are known to offer good reconfigurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing-power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy, and miniaturized zoom lens systems.

  12. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different imaging modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this context, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color-scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. In summary, 3D thermographic medical image visualization provides a realistic and precise medical tool: merging the two imaging modalities yields better quality and more fidelity, especially for medical applications in which temperature changes are clinically significant.

  13. Quantitative 3D imaging of whole, unstained cells by using X-ray diffraction microscopy.

    Jiang, Huaidong; Song, Changyong; Chen, Chien-Chun; Xu, Rui; Raines, Kevin S; Fahimian, Benjamin P; Lu, Chien-Hung; Lee, Ting-Kuo; Nakashima, Akio; Urano, Jun; Ishikawa, Tetsuya; Tamanoi, Fuyuhiko; Miao, Jianwei

    2010-06-22

    Microscopy has greatly advanced our understanding of biology. Although significant progress has recently been made in optical microscopy to break the diffraction-limit barrier, reliance of such techniques on fluorescent labeling technologies prohibits quantitative 3D imaging of the entire contents of cells. Cryoelectron microscopy can image pleomorphic structures at a resolution of 3-5 nm, but is only applicable to thin or sectioned specimens. Here, we report quantitative 3D imaging of a whole, unstained cell at a resolution of 50-60 nm by X-ray diffraction microscopy. We identified the 3D morphology and structure of cellular organelles including cell wall, vacuole, endoplasmic reticulum, mitochondria, granules, nucleus, and nucleolus inside a yeast spore cell. Furthermore, we observed a 3D structure protruding from the reconstructed yeast spore, suggesting the spore germination process. Using cryogenic technologies, a 3D resolution of 5-10 nm should be achievable by X-ray diffraction microscopy. This work hence paves a way for quantitative 3D imaging of a wide range of biological specimens at nanometer-scale resolutions that are too thick for electron microscopy.

  14. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists, and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m, and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  15. Feature detection on 3D images of dental imprints

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
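
    The two-stage idea of tracking local minima across scales can be sketched in 1D: detect minima of Gaussian-smoothed profiles at several scales and keep only fine-scale minima that persist (within a small tolerance) at all coarser scales. The profile, scales, and tolerance below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def smooth(z, sigma):
    """Gaussian smoothing by direct convolution (no SciPy needed)."""
    r = int(3 * sigma) + 1
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    return np.convolve(z, k / k.sum(), mode='same')

def local_minima(z):
    """Indices of strict local minima of a 1D profile."""
    return set(np.where((z[1:-1] < z[:-2]) & (z[1:-1] < z[2:]))[0] + 1)

def multiscale_minima(z, sigmas=(1, 2, 4, 8), tol=3):
    """Keep fine-scale minima that persist at all coarser scales."""
    per_scale = [local_minima(smooth(z, s)) for s in sigmas]
    return [m for m in sorted(per_scale[0])
            if all(any(abs(m - n) <= tol for n in mins) for mins in per_scale[1:])]

# toy profile: two valleys (at 30 and 70) plus a fine-scale wiggle
x = np.arange(101, dtype=float)
z = (-np.exp(-((x - 30) / 5) ** 2) - np.exp(-((x - 70) / 5) ** 2)
     + 0.05 * np.sin(3 * x))
found = multiscale_minima(z)
```

    The wiggle produces many spurious fine-scale minima, but only the two genuine valleys survive at all four scales, mirroring the final map of minima positions in the paper.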

  16. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Liang-Chia Chen

    2012-12-01

    This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object's surface characteristics in robotic pick-and-place manipulation. In the method, the 3-D geometry of industrial parts is rapidly acquired by the developed one-shot imaging optical probe, based on Fourier Transform Profilometry (FTP) with digital fringe projection, at the camera's maximum sensing speed. The acquired range image is then segmented into three surface types by classifying point clouds based on the statistical distribution of the surface normal vector at each detected 3-D point, and the scene ground is reconstructed by applying least-squares fitting and classification algorithms. A recursive search process incorporating a region-growing algorithm for registering homogeneous surface regions has also been developed. When the detected parts are randomly overlapped on a workbench, a group of defined 3-D surface features, such as surface areas, statistical values of the surface normal distribution, and geometric distances of defined features, can be uniquely recognized to detect each part's orientation. Experimental testing was performed to validate the feasibility of the developed method for real robotic manipulation.
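
    The per-point surface normals whose statistical distribution drives the segmentation can be estimated by local PCA: the eigenvector of the neighborhood covariance with the smallest eigenvalue approximates the normal. This is a common baseline rather than the paper's exact procedure; the planar test patch and neighborhood size `k` are hypothetical.

```python
import numpy as np

def estimate_normals(pts, k=12):
    """Per-point normals via k-nearest-neighbour PCA: the eigenvector
    of the local covariance with the smallest eigenvalue approximates
    the surface normal; normals are oriented toward +z."""
    normals = np.empty_like(pts)
    for i, p in enumerate(pts):
        nb = pts[np.argsort(np.linalg.norm(pts - p, axis=1))[:k]]
        cov = np.cov((nb - nb.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)           # ascending eigenvalues
        n = v[:, 0]
        normals[i] = n if n[2] >= 0 else -n
    return normals

# synthetic planar patch z = 0.5x + 0.2y, true normal ~ (-0.5, -0.2, 1)
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, 0.5 * xy[:, 0] + 0.2 * xy[:, 1]])
normals = estimate_normals(pts)
n_true = np.array([-0.5, -0.2, 1.0])
n_true /= np.linalg.norm(n_true)
```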

  17. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
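
    The key efficiency claim, evaluating a similarity surface over all model positions at once with the FFT, rests on the correlation theorem. The sketch below demonstrates it with plain intensity cross-correlation rather than the paper's phase-angle similarity measure; the image size and template placement are arbitrary.

```python
import numpy as np

def fft_match(image, template):
    """Cross-correlate a template against an image at every position
    at once via the FFT correlation theorem; returns the (row, col)
    of the highest correlation."""
    F = np.fft.rfft2(image)
    T = np.fft.rfft2(template, s=image.shape)   # zero-padded template
    surf = np.fft.irfft2(F * np.conj(T), s=image.shape)
    return np.unravel_index(np.argmax(surf), surf.shape)

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (64, 64))            # noisy background
tmpl = rng.normal(0.0, 1.0, (12, 12))           # random "object" pattern
img[20:32, 33:45] += tmpl                       # embed it at (20, 33)
loc = fft_match(img, tmpl)
```

    The resulting `surf` is exactly the match surface of similarity values versus position from which the paper extracts and sorts unambiguous peaks.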

  18. 3D Lunar Terrain Reconstruction from Apollo Images

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  20. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  1. Low cost 3D scanning process using digital image processing

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering, and entertainment; they are classified into contact and contactless scanners, with contactless scanners the most widely used but also expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the three-dimensional surface of the solid. Digital image processing is used to analyze the deformation detected by the camera, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The results show acceptable quality and significant detail in the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool usable for many applications, particularly by engineering students.
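
    Under a simplified geometry (laser plane perpendicular to the camera axis, known baseline and focal length), the triangulation step reduces to a one-line depth formula. All parameters below are hypothetical; a real rig needs calibration.

```python
import numpy as np

def depth_from_offset(u, f_px, b):
    """Triangulated depth for a camera/laser pair separated by
    baseline b (m) with focal length f_px (pixels): a point at depth
    z shifts the imaged laser line by u = f_px * b / z pixels."""
    return f_px * b / u

def offset_from_depth(z, f_px, b):
    """Inverse relation, used here to generate synthetic offsets."""
    return f_px * b / z

f_px, b = 800.0, 0.10              # hypothetical calibration values
z_true = np.array([0.5, 1.0, 2.0])  # metres
u = offset_from_depth(z_true, f_px, b)
z = depth_from_offset(u, f_px, b)
```

    Nearer surfaces deform the laser line more (larger pixel offset), which is exactly the deformation the camera observes during each horizontal scan.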

  2. Effective classification of 3D image data using partitioning methods

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
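
    The first method, recursive dynamic partitioning with a statistical test on the ROI voxel-count attribute, can be sketched as follows. Welch's t statistic stands in for the paper's unspecified test, and the threshold, volume sizes, and synthetic ROIs are illustrative assumptions.

```python
import numpy as np

def t_stat(a, b):
    """Welch's t statistic (stands in for the paper's test)."""
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b) + 1e-12)

def find_boxes(vols_a, vols_b, box=None, t_thresh=4.0, min_side=2):
    """Recursive dynamic partitioning: a box's attribute is the ROI
    voxel count inside it per subject; keep the box if the attribute
    separates the two classes, otherwise split it along its longest
    axis while it remains large enough."""
    if box is None:
        box = tuple((0, s) for s in vols_a[0].shape)
    (x0, x1), (y0, y1), (z0, z1) = box
    count = lambda v: float(v[x0:x1, y0:y1, z0:z1].sum())
    a = np.array([count(v) for v in vols_a])
    b = np.array([count(v) for v in vols_b])
    if abs(t_stat(a, b)) >= t_thresh:
        return [box]                      # discriminative: keep as attribute
    sides = [hi - lo for lo, hi in box]
    ax = int(np.argmax(sides))
    if sides[ax] < 2 * min_side:
        return []                         # too small to split further
    lo, hi = box[ax]
    left, right = list(box), list(box)
    left[ax], right[ax] = (lo, (lo + hi) // 2), ((lo + hi) // 2, hi)
    return (find_boxes(vols_a, vols_b, tuple(left), t_thresh, min_side) +
            find_boxes(vols_a, vols_b, tuple(right), t_thresh, min_side))

# synthetic data: sparse background ROIs plus a dense class-specific octant
rng = np.random.default_rng(2)
def make(cls):
    v = rng.random((8, 8, 8)) < 0.05
    if cls == 0:
        v[0:4, 0:4, 0:4] |= rng.random((4, 4, 4)) < 0.6
    else:
        v[4:8, 4:8, 4:8] |= rng.random((4, 4, 4)) < 0.6
    return v

boxes = find_boxes([make(0) for _ in range(10)], [make(1) for _ in range(10)])
```

    In the paper, the counts inside the final discriminative boxes become the attributes fed to a neural network classifier.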

  3. Physically based analysis of deformations in 3D images

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

    We present a physically based deformable model which can be used to track and analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural length of each spring is set to zero and replaced by a set of constant equilibrium forces which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides a reduced algorithmic complexity and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.
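
    The zero-natural-length property is what makes the dynamics linear and decoupled per coordinate: spring forces become linear in the node positions, so each coordinate obeys the same linear ODE driven by the constant equilibrium forces. A minimal sketch for one coordinate of a 4-node chain, with hypothetical stiffness and damping values:

```python
import numpy as np

def simulate(K, x_eq, x0, dt=0.01, steps=4000, damping=1.0):
    """One coordinate of a zero-natural-length mass-spring system
    (unit masses): x'' = -K x - c x' + f_eq, where the constant
    equilibrium forces f_eq = K x_eq encode the rest shape.
    Semi-implicit Euler integration."""
    f_eq = K @ x_eq
    x = x0.astype(float)
    v = np.zeros_like(x)
    for _ in range(steps):
        a = -K @ x + f_eq - damping * v
        v += dt * a
        x += dt * v
    return x

# stiffness matrix of a 4-node chain (springs between neighbours)
k = 5.0
K = k * np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)
x_eq = np.array([0.0, 1.0, 2.0, 3.0])   # rest shape of this coordinate
x = simulate(K, x_eq, x0=np.array([0.0, 0.5, 2.5, 3.0]))
```

    A perturbed configuration relaxes back to the rest shape, and the same matrix K governs every coordinate independently, which is the source of the reduced algorithmic complexity.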

  4. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with the synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a number of individual computer vision algorithms. The experimental results show a highly automated modeling process that is sufficiently robust to various imaging conditions. The whole model creation, including all optional manual corrections, takes only 2-3 minutes.

  5. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for applications such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it reflects how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sub-level semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that our model outperforms the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior yields accurate 3D scene layout segmentation.

  6. Mapping 3D fiber orientation in tissue using dual-angle optical polarization tractography.

    Wang, Y; Ravanfar, M; Zhang, K; Duan, D; Yao, G

    2016-10-01

    Optical polarization tractography (OPT) has recently been applied to map fiber organization in the heart, skeletal muscle, and arterial vessel wall with high resolution. The fiber orientation measured in OPT represents the 2D projected fiber angle in a plane that is perpendicular to the incident light. We report here a dual-angle extension of the OPT technology to measure the actual 3D fiber orientation in tissue. This method was first verified by imaging the murine extensor digitorum muscle placed at various known orientations in space. The accuracy of the method was further studied by analyzing the 3D fiber orientation of the mouse tibialis anterior muscle. Finally we showed that dual-angle OPT successfully revealed the unique 3D "arcade" fiber structure in the bovine articular cartilage.
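One way to see why two views suffice is purely geometric: each view constrains the fibre axis to the plane spanned by the beam direction and the measured projected direction, and two such planes intersect in a line. The sketch below works under idealized assumptions (known view directions, noise-free projected angles) and is not the authors' algorithm:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def projected_direction(v, k):
    """2-D fibre direction seen from a view along unit vector k:
    the projection of v onto the plane perpendicular to k."""
    return unit(v - np.dot(v, k) * k)

def recover_3d(k1, d1, k2, d2):
    """Each view constrains v to the plane spanned by (k_i, d_i), i.e.
    v is perpendicular to n_i = k_i x d_i; two views give v || n1 x n2
    (up to sign, since a fibre axis has no preferred direction)."""
    n1, n2 = np.cross(k1, d1), np.cross(k2, d2)
    return unit(np.cross(n1, n2))

v_true = unit(np.array([0.4, -0.3, 0.85]))             # unknown 3-D fibre axis
k1 = np.array([0.0, 0.0, 1.0])                         # normal-incidence view
k2 = unit(np.array([np.sin(0.5), 0.0, np.cos(0.5)]))   # tilted view
d1 = projected_direction(v_true, k1)
d2 = projected_direction(v_true, k2)

v = recover_3d(k1, d1, k2, d2)
v = v if np.dot(v, v_true) > 0 else -v                 # resolve the sign ambiguity
print(np.allclose(v, v_true, atol=1e-10))
```

In practice the measured angles are noisy, so a least-squares combination of the two plane constraints would replace the exact cross product.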

  7. Cordless hand-held optical 3D sensor

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique is presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This makes it possible to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. During measurement, the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach that combines the epipolar constraint with robust phase correlation, utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can acquire the all-around shape of objects using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way, no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. Its mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology, and criminology, which are shown in the paper.

  8. A Jones matrix formalism for simulating 3D Polarised Light Imaging of brain tissue

    Menzel, Miriam; De Raedt, Hans; Reckfort, Julia; Amunts, Katrin; Axer, Markus

    2015-01-01

    The neuroimaging technique 3D Polarised Light Imaging (3D-PLI) provides a high-resolution reconstruction of nerve fibres in human post-mortem brains. The orientations of the fibres are derived from birefringence measurements of histological brain sections assuming that the nerve fibres - consisting of an axon and a surrounding myelin sheath - are uniaxially birefringent and that the measured optic axis is oriented in the direction of the nerve fibres (macroscopic model). Although experimental studies support this assumption, the molecular structure of the myelin sheath suggests that the birefringence of a nerve fibre can be described more precisely by multiple optic axes oriented radially around the fibre axis (microscopic model). In this paper, we compare the use of the macroscopic and the microscopic model for simulating 3D-PLI by means of the Jones matrix formalism. The simulations show that the macroscopic model ensures a reliable estimation of the fibre orientations as long as the polarimeter does not resolve ...
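The Jones calculus the paper builds on can be illustrated with the textbook case of a birefringent sample between crossed polarizers that rotate together: the numerically computed intensity should reproduce the analytic sinusoid I = sin²(2(φ−ρ))·sin²(δ/2). This is a simplified model for illustration, not the full 3D-PLI polarimeter (which also contains a quarter-wave retarder):

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def polarizer(theta):
    """Jones matrix of a linear polarizer with transmission axis at theta."""
    return rot(theta) @ np.array([[1, 0], [0, 0]]) @ rot(-theta)

def retarder(delta, theta):
    """Jones matrix of a wave plate with retardance delta, fast axis at theta."""
    D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot(theta) @ D @ rot(-theta)

def intensity(rho, phi, delta):
    """Birefringent sample (fast axis phi, retardance delta) between
    crossed polarizers rotated to angle rho."""
    e_in = np.array([np.cos(rho), np.sin(rho)])          # after the first polarizer
    e_out = polarizer(rho + np.pi / 2) @ retarder(delta, phi) @ e_in
    return float(np.vdot(e_out, e_out).real)             # |E|^2

phi, delta = 0.6, 1.1                                    # illustrative values
rhos = np.linspace(0, np.pi, 9)
I_num = np.array([intensity(r, phi, delta) for r in rhos])
I_ana = np.sin(2 * (phi - rhos))**2 * np.sin(delta / 2)**2
print(np.allclose(I_num, I_ana))                         # → True
```

In 3D-PLI, fitting such a sinusoid per pixel yields the in-plane fibre direction from the phase and the out-of-plane inclination from the amplitude.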

  9. Experiments on terahertz 3D scanning microscopic imaging

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are many studies of THz coaxial transmission confocal microscopy, but few reports on THz dual-axis reflective confocal microscopy. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with the THz coaxial transmission confocal microscope, the microscope adopted here attains higher axial resolution at the expense of reduced lateral resolution, yielding more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet metal piece with three layers were scanned. The experimental results indicate that the system can extract the two Chinese characters "Zhong" and "Hua" and the three layers of the combined sheet metal. It can be expected that the microscope will be applicable to biology, medicine, and other fields thanks to its favorable 3D imaging capability.

  10. Fast, high-resolution 3D dosimetry utilizing a novel optical-CT scanner incorporating tertiary telecentric collimation

    Sakhalkar, H. S.; Oldham, M

    2008-01-01

    This study introduces a charge coupled device (CCD) area detector based optical-computed tomography (optical-CT) scanner for comprehensive verification of radiation dose distributions recorded in nonscattering radiochromic dosimeters. Defining characteristics include: (i) a very fast scanning time of ~5 min to acquire a complete three-dimensional (3D) dataset, (ii) improved image formation through the use of custom telecentric optics, which ensures accurate projection images and minimizes art...

  11. 3D printing of tissue-simulating phantoms for calibration of biomedical optical devices

    Zhao, Zuhua; Zhou, Ximing; Shen, Shuwei; Liu, Guangli; Yuan, Li; Meng, Yuquan; Lv, Xiang; Shao, Pengfei; Dong, Erbao; Xu, Ronald X.

    2016-10-01

    Clinical utility of many biomedical optical devices is limited by the lack of effective and traceable calibration methods. Optical phantoms that simulate biological tissues have been explored for optical device calibration. However, such phantoms can hardly simulate both the structural and the optical properties of multi-layered biological tissue. To address this limitation, we developed a 3D printing production line that integrates spin coating, light-cured 3D printing, and fused deposition modeling (FDM) for freeform fabrication of optical phantoms with mechanical and optical heterogeneities. With gel wax, polydimethylsiloxane (PDMS), and colorless light-curable ink as matrix materials, titanium dioxide (TiO2) powder as the scattering ingredient, and graphite powder and carbon black as the absorption ingredients, a high-precision multilayer phantom was fabricated. The absorption and scattering coefficients of each layer were measured by a double-integrating-sphere system. The results demonstrate that the system has the potential to fabricate reliable tissue-simulating phantoms for calibrating optical imaging devices.

  12. Configurable Input Devices for 3D Interaction using Optical Tracking

    Rhijn, A.J. van

    2007-01-01

    Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed in order to increase the usability and usefulness of virtual reality. Human beings have difficulties understanding 3D spatial relationships and manipulating 3D user interfaces, which require the contr

  13. High Resolution 3D Radar Imaging of Comet Interiors

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  14. 3D Image Sensor based on Parallax Motion

    Barna Reskó

    2007-12-01

    For humans and visual animals, vision is the primary and the most sophisticated perceptual modality for getting information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be accurately determined, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields, and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using an optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  15. Block matching 3D random noise filtering for absorption optical projection tomography

    Fumene Feruglio, P; Vinegoni, C; Weissleder, R [Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, 185 Cambridge Street, Boston, MA 02114 (United States); Gros, J [Department of Genetics, Harvard Medical School, 77 Avenue Louis Pasteur, Boston MA 02115 (United States); Sbarbati, A, E-mail: cvinegoni@mgh.harvard.ed [Department of Morphological and Biomedical Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy)

    2010-09-21

    Absorption and emission optical projection tomography (OPT), alternatively referred to as optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT), are recently developed three-dimensional imaging techniques with value for developmental biology and ex vivo gene expression studies. The techniques' principles are similar to those used for x-ray computed tomography and are based on the approximation of negligible light scattering in optically cleared samples. The optical clearing is achieved by a chemical procedure which aims at substituting the cellular fluids within the sample with a solution that matches the refractive index of the cell membranes. Once cleared, the sample presents very low scattering and is then illuminated with a collimated light beam whose intensity is captured in transillumination mode by a CCD camera. Different projection images of the sample are subsequently obtained over a full 360° rotation, and a standard backprojection algorithm can be used, in a similar fashion as for x-ray tomography, to obtain absorption maps. Because not all biological samples present significant absorption contrast, it is not always possible to obtain projections with a good signal-to-noise ratio, a condition necessary to achieve high-quality tomographic reconstructions. Such is the case, for example, for early-stage embryos. In this work we demonstrate how, through the use of a random noise removal algorithm, the image quality of the reconstructions can be considerably improved even when noise is strongly present in the acquired projections. Specifically, we implemented a block matching 3D (BM3D) filter, applying it separately to each acquired transillumination projection before performing a complete three-dimensional tomographic reconstruction. To test the efficiency of the adopted filtering scheme, a phantom and a real biological sample were processed. In both cases, the BM3D filter led to a signal-to-noise ratio
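To illustrate why per-projection denoising helps before backprojection, the sketch below applies a simple mean filter — a crude stand-in for BM3D, which jointly filters 3-D stacks of matched patches — to a synthetic low-contrast projection and measures the SNR gain. All values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic, smooth transillumination projection (low absorption contrast).
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
clean = np.exp(-4 * (X**2 + Y**2))            # blob-like absorption map
noisy = clean + rng.normal(0, 0.05, clean.shape)

def box_filter(img, r=2):
    """Mean filter over a (2r+1)^2 window, standing in for BM3D."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + img.shape[0],
                       r + dx : r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def snr(ref, img):
    """Signal-to-noise ratio in dB relative to the clean reference."""
    return 10 * np.log10((ref**2).sum() / ((ref - img) ** 2).sum())

denoised = box_filter(noisy)
print(snr(clean, noisy), snr(clean, denoised))  # denoising raises the SNR
```

In the paper's pipeline the denoised projections are then fed to a standard backprojection reconstruction; cleaner projections translate directly into cleaner tomograms.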

  16. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  17. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

    3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets acquired using a freehand 3D US system. Deformation is provided by a 3D thin-plate spline (TPS). Our method is fundamentally different from previous ones in that the acquired scattered 2D US slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over traditional registration and spatial compounding methods: (i) we perform only one 3D US reconstruction, for the first acquired data set, and therefore save the computation time required to reconstruct subsequently acquired scans, (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction, and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations allowing additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available
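As a toy illustration of intensity-based registration (not the authors' 3D rigid/TPS method), the following sketch registers a 2D patch to a larger image by exhaustively maximizing normalized cross-correlation over integer translations; the data and search range are invented:

```python
import numpy as np

rng = np.random.default_rng(8)

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def register_translation(fixed, moving, search=4):
    """Exhaustive intensity-based search for the integer shift of `moving`
    inside `fixed` that maximizes NCC (a toy stand-in for slice-to-volume
    registration)."""
    h, w = moving.shape
    best, best_shift = -np.inf, (0, 0)
    for ty in range(-search, search + 1):
        for tx in range(-search, search + 1):
            window = fixed[search + ty : search + ty + h,
                           search + tx : search + tx + w]
            score = ncc(window, moving)
            if score > best:
                best, best_shift = score, (ty, tx)
    return best_shift

fixed = rng.random((48, 48))          # stands in for a reconstructed volume slice
moving = fixed[6:46, 1:41]            # a 40x40 crop, true shift (ty, tx) = (2, -3)
print(register_translation(fixed, moving))  # → (2, -3)
```

Real US registration would add subpixel interpolation, a multi-resolution search, and the deformable TPS stage on top of this intensity metric.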

  18. Research of Fast 3D Imaging Based on Multiple Mode

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into three-dimensional imaging methods and systems in order to meet speed and accuracy requirements. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to derive an initial disparity. With the depth map constraining the stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
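The TOF-constrained matching idea can be sketched in software. The toy NumPy SAD matcher below is not the FPGA pipeline; the window size, search radius, and synthetic stereo pair are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def match_with_prior(left, right, prior, radius=1, win=2):
    """Block matching where, for each pixel, the disparity search is
    restricted to prior +/- radius, the prior coming from a TOF depth map."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win, w - win):
            best, best_d = np.inf, 0
            for d in range(prior[y, x] - radius, prior[y, x] + radius + 1):
                if x - d - win < 0 or x - d + win + 1 > w:
                    continue  # candidate falls outside the right image
                patch_l = left[y - win : y + win + 1, x - win : x + win + 1]
                patch_r = right[y - win : y + win + 1, x - d - win : x - d + win + 1]
                cost = np.abs(patch_l - patch_r).sum()   # sum of absolute differences
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by a known disparity of 3.
left = rng.random((24, 40))
true_d = 3
right = np.zeros_like(left)
right[:, :-true_d] = left[:, true_d:]
# Noisy TOF prior: correct to within +/-1 pixel everywhere.
prior = np.full(left.shape, true_d) + rng.integers(-1, 2, left.shape)

disp = match_with_prior(left, right, prior, radius=1)
print((disp[2:-2, 5:-2] == true_d).all())
```

Restricting the search to a few candidates per pixel is what makes the per-pixel cost small and constant, which maps naturally onto parallel FPGA compute units.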

  19. 3-D Image Analysis of Fluorescent Drug Binding

    M. Raquel Miquel

    2005-01-01

    Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data so they can be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of those of the control and α1B-knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display the binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.

  20. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Kindberg Katarina

    2012-04-01

    Background: The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods: We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results: The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise on the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions: Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
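The strain pipeline described in the Methods can be sketched for a first-order polynomial model: fit the displacement field by least squares, read off the displacement gradient, and form the Green-Lagrange strain tensor. The displacement field, noise level, and point cloud below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground-truth linear displacement field u(X) = A @ X (homogeneous deformation).
A = np.array([[0.05, 0.02, 0.00],
              [0.01, -0.03, 0.02],
              [0.00, 0.01, 0.04]])

# Scattered material points in a small neighbourhood, with noisy
# DENSE-like displacement measurements.
X = rng.normal(size=(200, 3))
U = X @ A.T + rng.normal(0, 1e-4, (200, 3))

# Least-squares fit of a first-order polynomial u_i = b_i + G_i . X
M = np.hstack([np.ones((200, 1)), X])          # design matrix [1, X1, X2, X3]
coef, *_ = np.linalg.lstsq(M, U, rcond=None)   # shape (4, 3)
G = coef[1:].T                                 # displacement gradient du/dX (3x3)

F = np.eye(3) + G                              # deformation gradient
E = 0.5 * (F.T @ F - np.eye(3))                # Green-Lagrange strain tensor

E_true = 0.5 * ((np.eye(3) + A).T @ (np.eye(3) + A) - np.eye(3))
print(np.allclose(E, E_true, atol=1e-3))
```

Higher polynomial orders, as in the paper, capture spatially varying (e.g. transmural) strain within the fitted neighbourhood.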

  1. Optical 3D shape measurement for dynamic process

    2008-01-01

    3D shape dynamic measurement is essential to the study of machine vision, hydromechanics, high-speed rotation, deformation of materials, stress analysis, deformation under impact, explosion processes, and biomedicine. In this paper, the results of our recent research, including theoretical analysis, some feasible methods, and relevant verifying experimental results, are concisely reported. At present, these results have been applied in our instruments for 3D shape measurement of dynamic processes.

  2. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  3. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
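The FCM core loop underlying the abstract is compact enough to sketch. Below is a minimal NumPy version, assuming Euclidean distances, a fixed fuzzifier m = 2, and random membership initialization (one of the initialization strategies the abstract alludes to); the two-cluster data set is synthetic:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership grades."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1 per sample
    for _ in range(iters):
        W = U**m
        # Centers: fuzzy-weighted means of the samples.
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                # guard against division by zero
        U = (1.0 / d**2) ** (1.0 / (m - 1))
        U /= U.sum(axis=0)
    return centers, U

rng = np.random.default_rng(6)
# Two well-separated "tissue classes" in a 2-feature space.
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
centers, U = fcm(X, c=2)
print(np.sort(np.round(centers[:, 0])))         # centers near 0 and 3
```

Segmenting an MR volume with FCM amounts to running this on per-voxel feature vectors (e.g. multi-channel intensities) and thresholding or comparing the resulting membership grades.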

  4. Needle placement for piriformis injection using 3-D imaging.

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome guided by fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had only 30% accuracy, while ultrasound guidance roughly tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  5. Terahertz imaging system based on bessel beams via 3D printed axicons at 100GHz

    Liu, Changming; Wei, Xuli; Zhang, Zhongqi; Wang, Kejia; Yang, Zhenggang; Liu, Jinsong

    2014-11-01

    Terahertz (THz) imaging technology shows great advantages in nondestructive testing (NDT), since many optically opaque materials are transparent to THz waves. In this paper, we design and fabricate dielectric axicons by 3D printing to generate zeroth-order Bessel beams. We further present an all-electric THz imaging system using the generated Bessel beams at 100 GHz. Resolution targets made of printed circuit board are imaged, and the results clearly show the extended depth of focus of the Bessel beam, indicating the promise of Bessel beams for THz NDT.
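For a rough sense of scale, the central-core radius and depth of focus of an axicon-generated Bessel beam follow from two textbook relations. The deflection angle and aperture radius below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Behind an axicon, the field approximates a zeroth-order Bessel beam
# with transverse intensity I(r) ~ J0(k*sin(gamma)*r)^2.
f = 100e9                      # operating frequency, Hz
lam = 3e8 / f                  # wavelength: ~3 mm at 100 GHz
k = 2 * np.pi / lam            # wavenumber
gamma = np.deg2rad(10)         # axicon deflection angle (assumed)
R = 0.025                      # illuminated aperture radius, m (assumed)

# Central-core radius: first zero of J0 at k*sin(gamma)*r = 2.405.
r_core = 2.405 / (k * np.sin(gamma))
# Extended depth of focus: the conical waves overlap out to z_max = R/tan(gamma).
z_max = R / np.tan(gamma)

print(f"core radius {r_core*1e3:.1f} mm, depth of focus {z_max*1e2:.1f} cm")
```

The depth of focus (here on the order of ten centimetres for these assumed parameters) is what gives Bessel-beam THz imaging its tolerance to sample depth, as the abstract's resolution-target results illustrate.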

  6. Spectral ladar: towards active 3D multispectral imaging

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  7. GPU-accelerated denoising of 3D magnetic resonance images

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
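As a concrete illustration of the bilateral filter and the MSE quality metric discussed above, here is a minimal CPU sketch in NumPy (no GPU tiling or tuning; the window size and sigma values are illustrative, not the study's optima):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.3):
    """Brute-force bilateral filter: each output pixel is a weighted mean
    of its neighbours, with weights falling off with spatial distance
    (sigma_s) and with intensity difference (sigma_r)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
            wgt = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2)
                         - (shifted - img) ** 2 / (2 * sigma_r**2))
            out += wgt * shifted
            norm += wgt
    return out / norm

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 64)
clean = np.outer(x, x)                        # smooth synthetic "anatomy"
noisy = clean + rng.normal(0, 0.1, clean.shape)

mse = lambda a, b: float(((a - b) ** 2).mean())
denoised = bilateral(noisy)
print(mse(clean, noisy) > mse(clean, denoised))  # filtering reduces the MSE
```

The study's point about parameter sensitivity shows up directly here: shrinking sigma_r far below the noise level makes the range weights reject neighbours and the filter stops denoising.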

  8. Air-structured optical fibre drawn from a 3D-printed preform

    Cook, Kevin; Leon-Saval, Sergio; Reid, Zane; Hossain, Md Arafat; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    A structured optical fibre is drawn from a 3D-printed structured preform. Preforms containing a single ring of holes around the core are fabricated using filament made from a modified butadiene polymer. More broadly, 3D printers capable of processing soft glasses, silica and other materials are likely to come online in the not-so-distant future. 3D printing of optical preforms signals a new milestone in optical fibre manufacture.

  9. Slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow.

    Jagannadh, Veerendra Kalyan; Mackenzie, Mark D; Pal, Parama; Kar, Ajoy K; Gorthi, Sai Siva

    2016-09-19

    Three-dimensional cellular imaging techniques have become indispensable tools in biological research and medical diagnostics. Conventional 3D imaging approaches employ focal stack collection to image different planes of the cell. In this work, we present the design and fabrication of a slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow. The approach employs slanted microfluidic channels fabricated in glass using ultrafast laser inscription. The slanted nature of the microfluidic channels ensures that samples come into and go out of focus as they pass through the microscope's imaging field of view. This novel approach enables the collection of focal stacks in a straightforward and automated manner, even with off-the-shelf microscopes that are not equipped with motorized translation/rotation sample stages. The presented approach not only simplifies conventional focal stack collection, but also enhances the capabilities of a regular widefield fluorescence microscope to match the features of a sophisticated confocal microscope. We demonstrate the retrieval of sectioned slices of microspheres and cells, with the use of computational algorithms to enhance the signal-to-noise ratio (SNR) in the collected raw images. The retrieved sectioned images have been used to visualize fluorescent microspheres and a bovine sperm cell nucleus in 3D while using a regular widefield fluorescence microscope. We have been able to achieve sectioning of approximately 200 slices per cell, which corresponds to a spatial translation of ∼15 nm per slice along the optical axis of the microscope.

  10. Extended volume and surface scatterometer for optical characterization of 3D-printed elements

    Dannenberg, Florian; Uebeler, Denise; Weiß, Jürgen; Pescoller, Lukas; Weyer, Cornelia; Hahlweg, Cornelius

    2015-09-01

    The use of 3D printing technology seems to be a promising route for the low-cost prototyping not only of mechanical but also of optical components or systems. It is especially useful in applications where customized equipment is repeatedly subject to immediate destruction, as in experimental detonics and the like. Due to the nature of the 3D printing process, there is a certain inner texture, and therefore inhomogeneous optical behaviour, to be taken into account, which also indicates mechanical anisotropy. Recent investigations are dedicated to the quantification of the optical properties of such printed bodies and the derivation of corresponding optimization strategies for the printing process. Besides mounting, alignment and illumination means, refractive and reflective elements are also under investigation. The proposed measurement methods are based on an imaging near-field scatterometer for combined volume and surface scatter measurements, as proposed in previous papers. Further developments are discussed in continuation of last year's paper on the use of near-field imaging, which is basically a reflective shadowgraph method, for the characterization of glossy surfaces such as printed matter or laminated material. The device has been extended for the observation of photoelasticity effects and hence of the homogeneity of polarization behaviour. A refined experimental set-up is introduced. Variation of the plane of focus and of the incident angle is used to separate the images of the various layers of the surface under test; cross- and parallel-polarization techniques are applied. Practical examples from current research studies are included.

  11. High resolution 3D imaging of synchrotron generated microbeams

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as low as 0.09 μm/pixel. The profiles obtained demonstrated the change in the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
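
    The two quantities read off such profiles, the microbeam full width at half maximum (FWHM) and the peak-to-valley dose ratio (PVDR), can be illustrated on a synthetic profile (a Gaussian beam with a nominal 25 μm FWHM sampled on a 0.1 μm grid; not the measured data):

```python
import math

def fwhm(xs, ys):
    """Grid-limited full width at half maximum of a single-peak profile."""
    half = max(ys) / 2.0
    above = [x for x, y in zip(xs, ys) if y >= half]
    return above[-1] - above[0]

def pvdr(peak_dose, valley_dose):
    """Peak-to-valley dose ratio."""
    return peak_dose / valley_dose

# Synthetic microbeam: Gaussian with FWHM = 25 um (FWHM = 2*sqrt(2*ln 2)*sigma).
sigma = 25.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
xs = [0.1 * i for i in range(-500, 501)]                      # positions, um
ys = [100.0 * math.exp(-x * x / (2.0 * sigma ** 2)) for x in xs]
width = fwhm(xs, ys)       # ~25 um, limited by the 0.1 um sampling
ratio = pvdr(100.0, 5.0)   # peak dose 100, valley dose 5 -> PVDR 20
```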

  12. Flexible Holographic Fabrication of 3D Photonic Crystal Templates with Polarization Control through a 3D Printed Reflective Optical Element

    David Lowell

    2016-07-01

    In this paper, we have systematically studied the holographic fabrication of three-dimensional (3D) structures using a single 3D printed reflective optical element (ROE), taking advantage of the ease of design and 3D printing of the ROE. The reflective surface was set up at non-Brewster angles to reflect both s- and p-polarized beams for the interference. The wide selection of reflective surface materials and interference angles allows control of the ratio of s- and p-polarizations, and of the side-beam to central-beam intensity ratio, for interference lithography. Photonic bandgap simulations have also indicated that both s- and p-polarized waves are sometimes needed in the reflected side beams for maximum photonic bandgap size and certain filling fractions of dielectric inside the photonic crystals. The flexibility of single-ROE, single-exposure holographic fabrication of 3D structures was demonstrated with reflective surfaces of ROEs at non-Brewster angles, highlighting the capability of the ROE technique of producing umbrella configurations of side beams with arbitrary angles and polarizations and paving the way for the rapid throughput of various photonic crystal templates.

  13. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  14. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subjected to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each condition the shapes of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion on the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine the volume, surface area, sphericity index and refractive index of the RBCs for each osmotic condition.
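
    The sphericity index quoted for each osmotic condition follows directly from the recovered volume V and surface area A: a perfect sphere scores 1 and a flattened discocyte less. A minimal check of the standard formula (not the authors' code):

```python
import math

def sphericity_index(volume, area):
    """pi**(1/3) * (6V)**(2/3) / A; equals 1 for a sphere, < 1 otherwise."""
    return math.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / area

r = 3.0                           # um, roughly a sphered RBC radius
v = 4.0 / 3.0 * math.pi * r ** 3  # sphere volume
a = 4.0 * math.pi * r ** 2        # sphere surface area
si = sphericity_index(v, a)       # -> 1.0 for the sphere
```

    Any shape with the same volume but a larger surface area, such as a discocyte, yields an index below 1.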

  15. Design of 3D isotropic metamaterial device using smart transformation optics.

    Shin, Dongheok; Kim, Junhyun; Yoo, Do-Sik; Kim, Kyoungsik

    2015-08-24

    We report here a design method for a three-dimensional (3D) isotropic transformation optical device using smart transformation optics. Inspired by solid mechanics, smart transformation optics regards a transformation optical medium as an elastic solid and deformations as coordinate transformations. Developing further from our previous work on 2D smart transformation optics, we introduce a method of 3D smart transformation optics to design 3D transformation optical devices that maintain isotropic material properties for all types of polarization by imposing free or nearly free boundary conditions. Due to the material isotropy, it is possible to fabricate such devices with structural metamaterials made purely of common dielectric materials. In conclusion, the practical importance of the method reported here lies in the fact that it enables us to fabricate, without difficulty, arbitrarily shaped 3D devices with existing 3D printing technology.

  16. 3D Seismic Imaging over a Potential Collapse Structure

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction including the planning and development of completely new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), the thickness of top soil or rock layers, and the depth and elastic parameters of the basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques for their effectiveness in interrogating the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones consisted of 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  17. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  18. Multiframe image point matching and 3-d surface reconstruction.

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
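
    The idea behind the Window Variance Method can be sketched in 1-D: for a candidate set of matched window centers across n frames, score by the across-frame variance of corresponding samples, which is minimized at the true match. This toy (signals and window size invented here) is in the spirit of the method, not the paper's actual formulation or proofs:

```python
def window(signal, center, half):
    """Samples in [center-half, center+half] of a 1-D signal."""
    return signal[center - half: center + half + 1]

def wvm_score(frames, centers, half):
    """Mean across-frame variance of corresponding window samples."""
    wins = [window(f, c, half) for f, c in zip(frames, centers)]
    n = len(wins)
    total = 0.0
    for samples in zip(*wins):
        mu = sum(samples) / n
        total += sum((s - mu) ** 2 for s in samples) / n
    return total / len(wins[0])

# Three frames of the same 1-D scene, shifted by one sample per frame.
base = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
f0, f1, f2 = base, [0] + base[:-1], [0, 0] + base[:-2]
good = wvm_score([f0, f1, f2], [3, 4, 5], half=2)  # true match -> 0.0
bad = wvm_score([f0, f1, f2], [3, 3, 3], half=2)   # wrong disparity -> > 0
```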

  19. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  20. Ultra wide band millimeter wave holographic "3-D" imaging of concealed targets on mannequins

    Collins, H.D.; Hall, T.E.; Gribble, R.P. [Pacific Northwest Lab., Richland, WA (United States). Acoustics & Electromagnetic Imaging Group

    1994-08-01

    Ultra wide band (chirp frequency) millimeter wave "3-D" holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent "3-D" holographic images of full-size mannequins with concealed weapons illustrate the efficacy of this technique for airport security. A chirp frequency (24 GHz to 40 GHz) holographic system was used to construct extremely high resolution images (optical quality) using polyrod antennas in a bi-static configuration with an x-y scanner. Millimeter wave chirp frequency holography can be simply described as a multi-frequency detection and imaging technique where the target's reflected signals are decomposed into discrete frequency holograms and reconstructed into a single composite "3-D" image. The implementation of this technology for security at airports, government installations, etc., will require real-time (video rate) data acquisition and computer image reconstruction of large volumetric data sets. This implies rapid scanning techniques or large, complex "2-D" arrays and high speed computing for successful commercialization of this technology.

  1. A 3-D fluorescence imaging system incorporating structured illumination technology

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information via depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
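
    A common way to extract the sectioned (in-focus) signal from three grid-illuminated images phase-shifted by 120° is square-root-of-differences demodulation; the team's depth modulation analysis may differ in detail, so treat this as a generic sketch:

```python
import math

def sectioned(i1, i2, i3):
    """Per-pixel optical section from three images with 0/120/240 deg grid phase.
    Recovers the local modulation amplitude; unmodulated (out-of-focus)
    background cancels to zero."""
    return [math.sqrt((a - b) ** 2 + (b - c) ** 2 + (c - a) ** 2) * math.sqrt(2.0) / 3.0
            for a, b, c in zip(i1, i2, i3)]

# One in-focus pixel (offset 10, modulation 4, phase 0) and one flat pixel.
infocus = sectioned([14.0], [8.0], [8.0])   # -> [4.0], the modulation depth
flat = sectioned([10.0], [10.0], [10.0])    # -> [0.0], background rejected
```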

  2. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. For evaluating the uniformity of image brightness, we applied optical ray-tracing simulations. The simulations took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones. However, this was contradicted by the experimental results. We offer a quantitative treatment of the illuminance uniformity of view images to estimate the misalignment of the PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation, due to practical restrictions of adjustment accuracy, can induce substantial non-uniformity in the brightness of view images. We find that image brightness non-uniformity depends critically on the misalignment of the PB angular orientation, even when as slight as ≤0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  3. 3D super-resolution imaging by localization microscopy.

    Magenau, Astrid; Gaus, Katharina

    2015-01-01

    Fluorescence microscopy is an important tool in all fields of biology for visualizing structures and monitoring dynamic processes and distributions. In contrast to conventional microscopy techniques such as confocal microscopy, which are limited in their spatial resolution, super-resolution techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have made it possible to observe and quantify structures and processes at the single-molecule level. Here, we describe a method to image and quantify the molecular distribution of membrane-associated proteins in two and three dimensions with nanometer resolution.

  4. Optical lens-shift design for increasing spatial resolution of 3D ToF cameras

    Lietz, Henrik; Hassan, M. Muneeb; Eberhardt, Jörg

    2017-02-01

    Sensor resolution of 3D time-of-flight (ToF) outdoor-capable cameras is strongly limited because of their large pixel dimensions. Computational imaging permits enhancement of the optical system's resolving power without changing physical sensor properties. Super-resolution (SR) algorithms superimpose several sub-pixel-shifted low-resolution (LR) images to overcome the system's limited spatial sampling rate. In this paper, we propose a novel opto-mechanical system that implements sub-pixel shifts by moving an optical lens. This method is more flexible in terms of implementing SR techniques than current sensor-shift approaches. In addition, we describe an SR observation model that has been optimized for the use of LR 3D ToF cameras. A state-of-the-art iteratively reweighted minimization algorithm executes the SR process. It is proven that our method achieves nearly the same resolution increase as if the pixel area were physically halved. Resolution enhancement is measured objectively for amplitude images of a static object scene.
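
    The sampling argument behind SR can be shown with the simplest scheme, shift-and-add: four LR images offset by half a pixel interleave onto a grid of twice the density. The paper's iteratively reweighted minimization is far more robust to noise and registration error; this toy only illustrates why sub-pixel lens shifts add information:

```python
def shift_and_add(lr00, lr10, lr01, lr11):
    """Interleave four half-pixel-shifted LR images onto a 2x denser grid.
    lr_ab holds the image sampled with an (a/2, b/2)-pixel offset."""
    h, w = len(lr00), len(lr00[0])
    hr = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            hr[2 * y][2 * x] = lr00[y][x]
            hr[2 * y][2 * x + 1] = lr10[y][x]
            hr[2 * y + 1][2 * x] = lr01[y][x]
            hr[2 * y + 1][2 * x + 1] = lr11[y][x]
    return hr

# 1x2-pixel toy images -> a 2x4 HR image.
hr = shift_and_add([[1, 5]], [[2, 6]], [[3, 7]], [[4, 8]])
# -> [[1, 2, 5, 6], [3, 4, 7, 8]]
```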

  5. Parallel Imaging of 3D Surface Profile with Space-Division Multiplexing

    Hyung Seok Lee

    2016-01-01

    We have developed a modified optical frequency domain imaging (OFDI) system that performs parallel imaging of three-dimensional (3D) surface profiles by using the space-division multiplexing (SDM) method with dual-area swept-source beams. We have also demonstrated that 3D surface information for two different areas can be obtained at the same time with only one camera by our method. In this study, double fields of view (FOVs) of 11.16 mm × 5.92 mm were achieved within 0.5 s. The height range for each FOV was 460 µm, and the axial and transverse resolutions were 3.6 and 5.52 µm, respectively.

  6. 3D mapping from high resolution satellite images

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have already become common practice; however, one is bound by the computer screen. This is why contemporary digital methods have been developed to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the procedures necessary to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery using holography as well as 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available: two stereo pairs of Geoeye-1 imagery and a high resolution DTM of the island. First, the theoretical bases of holography and 3D printing are described, and the two methods are analyzed and their implementation explained. In practice, an x-axis-parallax holographic map of the island of Antiparos is created, and a full-parallax (x-axis and y-axis) holographic map is created and printed, using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3D printing (3DP) method. The results are evaluated for their usefulness and efficiency.

  7. Automated 3D renal segmentation based on image partitioning

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
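
    The volume-overlap measures reported above are standard; on binary masks the Dice and Jaccard coefficients are related by J = D/(2 − D). A minimal sketch:

```python
def dice(a, b):
    """Dice similarity of two binary masks given as flat 0/1 sequences."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

auto = [1, 1, 1, 0]   # automated segmentation
gold = [0, 1, 1, 1]   # hand-segmented gold standard
d = dice(auto, gold)      # -> 2*2/(3+3) = 0.666...
j = jaccard(auto, gold)   # -> 2/4 = 0.5
```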

  8. 3D spectral imaging system for anterior chamber metrology

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at in excess of 75 frames per second.

  9. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuff have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided lateral resolution of 12 μm and 3.0 μm axial resolution in air and 0.27 volume/s imaging speed, which could provide the surgeon with clearly visualized vessel lumen wall and suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize the blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameter less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in decision-making process intra-operatively and avoid post-operative complications.
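
    Phase-resolved Doppler OCT maps the phase shift Δφ between successive A-scans at the same location to an axial velocity via v = λ₀Δφ/(4πnT), where T is the A-scan interval and n the refractive index of the medium. A sketch with illustrative numbers (the wavelength, index, and line rate below are assumptions, not this system's values):

```python
import math

def doppler_velocity(dphi, wavelength, n_medium, t_line):
    """Axial velocity (m/s) from the inter-A-scan phase shift dphi (rad)."""
    return wavelength * dphi / (4.0 * math.pi * n_medium * t_line)

# Illustrative: 1.3 um center wavelength, n = 1.35, 50 kHz line rate.
v = doppler_velocity(math.pi / 2, 1.3e-6, 1.35, 1.0 / 50_000)  # a few mm/s
```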

  10. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map.

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan

    2013-12-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory known as diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most boundaries and instead relies on regional image texture. Consequently, the proposed method is robust in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated from the geometric distances between pixels/voxels and the differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest; the other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively.
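
The graph construction and embedding described here rest on a standard diffusion-map computation (Gaussian affinities between nodes, Markov normalization, eigendecomposition). The sketch below is a generic minimal version, not the authors' code; `diffusion_map` and its parameters are illustrative.

```python
import numpy as np

def diffusion_map(features, eps, n_components=2, t=1):
    """Minimal diffusion-map embedding of graph nodes.

    features: (n_nodes, d) array, e.g. node centroid coordinates plus mean
    intensity. Returns the first n_components nontrivial diffusion coordinates.
    """
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / eps)                  # Gaussian affinity between nodes
    d = W.sum(axis=1)                      # node degrees
    S = W / np.sqrt(np.outer(d, d))        # symmetrised Markov matrix
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]       # right eigenvectors of the Markov matrix
    # Drop the trivial constant eigenvector (eigenvalue 1)
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]

# Three well-separated groups of identical "nodes" embed to three distinct points
feats = np.array([[0.0, 0.0]] * 5 + [[4.0, 0.0]] * 5 + [[0.0, 4.0]] * 5)
emb = diffusion_map(feats, eps=4.0)
```

Clustering the embedded coordinates (here trivially, since each group collapses to a point) corresponds to the paper's partition into important and less important sections.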

  11. An Optically-Assisted 3-D Cellular Array Machine

    1993-11-05

    Presented by: Physical Optics Corporation, Research & Development Division, 20600 Gramercy Place, Suite 103, Torrance, California 90501. The remainder of the record consists of fragments of a comparison table contrasting constructed digital-only hardware with a planned design combining digital and analog processing based on a cellular neural analog processor.

  12. Optical low-cost and portable arrangement for full field 3D displacement measurement using a single camera

    López-Alba, E.; Felipe-Sesé, L.; Schmeer, S.; Díaz, F. A.

    2016-11-01

    In the current paper, an optical low-cost system for 3D displacement measurement based on a single camera and 3D digital image correlation (3D-DIC) is presented. The conventional 3D-DIC set-up, based on two synchronized cameras, is compared with a proposed pseudo-stereo portable system that employs a mirror arrangement integrated in a single device, yielding a handheld and flexible instrument for use in many scenarios. The proposed optical system splits the image captured by the camera into two stereo views of the object. In order to validate this new approach and quantify its uncertainty compared to traditional 3D-DIC systems, rigid-body in-plane and out-of-plane displacement experiments were performed and analyzed. The differences between the two systems were studied employing an image decomposition technique that performs a full-field image comparison. Results over the full field of view are therefore compared with those obtained using a stereoscopic system and 3D-DIC, showing that the accuracy of the proposed device is not degraded by distortion or aberration introduced by the mirrors. Finally, the adaptability of the proposed system and its accuracy were tested by performing quasi-static and dynamic experiments on a silicone specimen under large deformation. Results were compared and validated against those obtained from a conventional stereoscopic system, showing an excellent level of agreement.

  13. 3D multicolor super-resolution imaging offers improved accuracy in neuron tracing.

    Melike Lakadamyali

    The connectivity among neurons holds the key to understanding brain function. Mapping neural connectivity in brain circuits requires imaging techniques with high spatial resolution to facilitate neuron tracing and high molecular specificity to mark different cellular and molecular populations. Here, we tested a three-dimensional (3D), multicolor super-resolution imaging method, stochastic optical reconstruction microscopy (STORM), for tracing neural connectivity using cultured hippocampal neurons obtained from wild-type neonatal rat embryos as a model system. Using a membrane-specific labeling approach that improves labeling density compared to cytoplasmic labeling, we imaged neural processes at 44 nm 2D and 116 nm 3D resolution, as determined by considering both the localization precision of the fluorescent probes and the Nyquist criterion based on label density. Comparison with confocal images showed that, at the currently achieved resolution, we could distinguish and trace substantially more neuronal processes in the super-resolution images. The accuracy of tracing was further improved by using multicolor super-resolution imaging. The resolution obtained here was largely limited by the label density and not by the localization precision of the fluorescent probes. Therefore, higher image resolution, and thus higher tracing accuracy, can in principle be achieved by further improving the label density.

  14. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions, based on standard X-ray images, is known to pose difficulties in obtaining accurate measures, especially three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to the X-ray plates. Accurate results have been reported using postoperative CT; however, its extensive use in clinical routine is hampered by the requirement of a CT scan for each individual patient, which is not available for most ACL reconstructions. These difficulties are addressed by the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. The alignment is achieved with a novel contour-based 3D-2D registration method in which image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53±0.30 mm distance error.

  15. Programmable Bidirectional Folding of Metallic Thin Films for 3D Chiral Optical Antennas.

    Mao, Yifei; Zheng, Yun; Li, Can; Guo, Lin; Pan, Yini; Zhu, Rui; Xu, Jun; Zhang, Weihua; Wu, Wengang

    2017-03-10

    3D structures with characteristic lengths ranging from the nanometer to the micrometer scale often exhibit extraordinary optical properties, and have become an extensively explored field for building new-generation nanophotonic devices. Although a few methods have been developed for fabricating 3D optical structures, constructing 3D structures with nanometer accuracy, diversified materials, and perfect morphology remains an extremely challenging task. This study presents a general 3D nanofabrication technique, the focused ion beam stress-induced deformation process, which allows a programmable and accurate bidirectional folding (−70° to +90°) of various metal and dielectric thin films. Using this method, 3D helical optical antennas with different handedness, improved surface smoothness, and tunable geometries are fabricated, and the strong optical rotation effects of single helical antennas are demonstrated.

  16. Dense 3D Point Cloud Generation from UAV Images by Image Matching and Global Optimization

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based technique and an image space-based technique, and compared their performance. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, the corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. The experiments confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the image space-based matching results, we observed some blanks in the 3D point clouds. In the object space-based matching results, we observed more blunders than in the image space-based results, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing, and we will further test our approach with more precise orientation parameters.
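
The object space-based matching step (candidate heights for a fixed horizontal position, projection into the images, grey-level correlation) can be illustrated with a 1-D toy version. The projection model here (parallax linear in height) and all names are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalised grey-level correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def object_space_match(left, right, x0, heights, parallax_per_m, win=7):
    """For a fixed horizontal position x0, test each candidate height Z by
    projecting into the right image at x0 + parallax_per_m * Z and scoring
    the grey-level correlation against the left-image window."""
    h = win // 2
    ref = left[x0 - h:x0 + h + 1]
    scores = []
    for z in heights:
        xr = int(round(x0 + parallax_per_m * z))
        scores.append(ncc(ref, right[xr - h:xr + h + 1]))
    return heights[int(np.argmax(scores))]

# Synthetic 1-D strips: the right view is the left view shifted by the true parallax
rng = np.random.default_rng(0)
left = rng.random(200)
z_true, k_par = 12.0, 2.0                   # true height 12 m, 2 px of parallax per metre
right = np.roll(left, int(k_par * z_true))  # right[x0 + k*z] == left[x0]
z_best = object_space_match(left, right, x0=50, heights=np.arange(0.0, 21.0),
                            parallax_per_m=k_par)
```

The real pipeline does this per ground cell with full sensor orientation models, which is why inaccurate orientation parameters translate directly into blunders and noisy heights.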

  17. Influence of limited random-phase of objects on the image quality of 3D holographic display

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and the appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can effectively suppress speckle noise in the reconstructed images. Owing to its effectiveness and simplicity, the method is expected to yield high-quality reconstructed images in future 2D and 3D displays.
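
The time-average idea can be demonstrated numerically: reconstructions with independent random phases drawn from a limited range are intensity-averaged, which lowers the speckle contrast. This is a toy Fraunhofer (single-FFT) model, not the authors' hologram pipeline; the phase range and frame count are illustrative.

```python
import numpy as np

def reconstruct_intensity(amplitude, phase_range, rng):
    """One reconstruction with the object's initial phase drawn uniformly
    from [0, phase_range) -- a 'limited' random phase -- propagated to the
    reconstruction plane by a single FFT (Fraunhofer approximation)."""
    phase = rng.uniform(0.0, phase_range, amplitude.shape)
    return np.abs(np.fft.fft2(amplitude * np.exp(1j * phase))) ** 2

def speckle_contrast(intensity, mask):
    """Standard speckle contrast sigma_I / mean_I over the masked region."""
    vals = intensity[mask]
    return float(vals.std() / vals.mean())

rng = np.random.default_rng(1)
amp = np.ones((64, 64))
mask = np.ones((64, 64), bool)
mask[0, 0] = False                          # ignore the specular DC term
single = reconstruct_intensity(amp, np.pi, rng)
averaged = np.mean([reconstruct_intensity(amp, np.pi, rng) for _ in range(16)],
                   axis=0)                  # time average of 16 limited-phase frames
c_single = speckle_contrast(single, mask)
c_avg = speckle_contrast(averaged, mask)
```

Averaging N independent frames reduces the contrast roughly as 1/√N, which is the mechanism the time-average method exploits.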

  18. Superimposing of virtual graphics and real image based on 3D CAD information

    2000-01-01

    Proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto real images taken by a CCD camera; presents computer simulation results.

  19. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well-corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of the form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves.
Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at ...

  20. Step-index optical fibre drawn from 3D printed preforms

    Cook, Kevin; Canning, John; Chartier, Loic; Athanaze, Tristan; Hossain, Md Arafat; Han, Chunyang; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    Optical fibre is drawn from a preform fabricated with a dual-head 3D printer, made of two optically transparent plastics with a high-index core (NA ~ 0.25, V > 60). The asymmetry observed in the fibre arises from asymmetry in the 3D printing process. The highly multimode optical fibre has losses, measured by cut-back, as low as α ~ 0.44 dB/cm in the near IR.
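
The quoted V > 60 can be related to the core size through the normalised frequency V = 2πa·NA/λ. The 1550 nm wavelength below is an assumption for illustration only (the abstract says only "near IR").

```python
import math

def v_number(core_radius_m, wavelength_m, na):
    """Normalised frequency V = 2*pi*a*NA/lambda. V >> 2.405 means highly
    multimode; a step-index fibre guides roughly V**2 / 2 modes."""
    return 2 * math.pi * core_radius_m * na / wavelength_m

# Assuming (hypothetically) 1550 nm operation, V > 60 with NA ~ 0.25 implies
# a core radius of at least ~59 um:
a_min = 60 * 1.55e-6 / (2 * math.pi * 0.25)      # ~5.9e-5 m
modes = v_number(a_min, 1.55e-6, 0.25) ** 2 / 2  # ~1800 guided modes
```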

  1. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large installation space and high costs. On the other hand, a low-cost and small-sized 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. The system can be realized at lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  2. Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid.

    Steven Bache

    Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS, the Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10–30K to $1–3K) and the use of a 'solid tank' (which reduces noise and reduces the volume of refractively matched fluid from 1 L to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated: a simple four-field box treatment, a multiple-small-field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1-degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line profiles and of 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3mm dose-difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. Typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regard to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system.
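
The 3%/3mm gamma comparison used above is a standard dose-evaluation metric; a minimal 1-D implementation (global dose-difference normalisation, exhaustive search over reference points) is sketched below. This illustrates the metric itself, not the commissioning software used in the study.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dd_frac=0.03, dta_mm=3.0):
    """1-D global gamma index: for each evaluated point, the minimum over all
    reference points of sqrt((dr/DTA)^2 + (dD/DD)^2). A point passes if <= 1."""
    x = np.arange(len(dose_ref)) * dx_mm
    d_norm = dd_frac * dose_ref.max()          # global dose-difference criterion
    gam = np.empty(len(dose_eval))
    for i in range(len(dose_eval)):
        dr = (x - x[i]) / dta_mm
        dd = (dose_ref - dose_eval[i]) / d_norm
        gam[i] = np.sqrt(dr ** 2 + dd ** 2).min()
    return gam

# A distribution agrees with itself everywhere; a 5% scaling fails near the peak
x = np.arange(100)
ref = np.exp(-0.5 * ((x - 50) / 12.0) ** 2)
g_same = gamma_1d(ref, ref, dx_mm=1.0)
g_off = gamma_1d(ref, 1.05 * ref, dx_mm=1.0)
pass_rate = float((g_off <= 1.0).mean())
```

The reported pass rates are the fraction of points with gamma at or below 1 over the full measured volume.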

  3. 3-D neurohistology of transparent tongue in health and injury with optical clearing

    Tzu-En Hua

    2013-10-01

    The tongue receives extensive innervation to perform taste, sensory, and motor functions. Details of tongue neuroanatomy and its plasticity in response to injury offer insights for investigating tongue neurophysiology and pathophysiology. However, due to the dispersed nature of the neural network, standard histology cannot provide a global view of the innervation. We prepared transparent mouse tongue by optical clearing to reveal the spatial features of the tongue innervation and its remodeling in injury. Immunostaining of neuronal markers, including PGP9.5 (pan-neuronal marker), calcitonin gene-related peptide (sensory nerves), tyrosine hydroxylase (sympathetic nerves), and vesicular acetylcholine transporter (cholinergic parasympathetic nerves and neuromuscular junctions), was combined with vessel painting and nuclear staining to label the tissue network and architecture. The tongue specimens were immersed in the optical-clearing solution to facilitate photon penetration for 3-dimensional (3-D) confocal microscopy. Taking advantage of the transparent tissue, we simultaneously revealed the tongue microstructure and innervation with subcellular-level resolution. 3-D projection of the papillary neurovascular complex and taste bud innervation was used to demonstrate the spatial features of the tongue mucosa and the panoramic imaging approach. In tongue injury induced by 4-nitroquinoline 1-oxide administration in the drinking water, we observed neural tissue remodeling in response to the changes of mucosal and muscular structures. Neural networks and neuromuscular junctions were both found rearranged in the peri-lesional region, suggesting nerve-lesion interactions in response to injury. Overall, this new tongue histological approach provides a useful tool for 3-D imaging of neural tissues to better characterize their roles with the mucosal and muscular components in health and disease.

  4. Multimodal Registration and Fusion for 3D Thermal Imaging

    Moulay A. Akhloufi

    2015-01-01

    3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years there has been increasing interest from the industrial community, driven by recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspection and metrology analysis of manufactured parts. However, these techniques cannot detect subsurface defects; this kind of detection is achieved by other techniques, such as infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits simultaneous surface and subsurface inspection in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  5. 3D optical phase reconstruction within PMMA samples using a spectral OCT system

    Briones-R., Manuel d. J.; De La Torre-Ibarra, Manuel H.; Mendoza Santoyo, Fernando

    2015-08-01

    The optical coherence tomography (OCT) technique has proved to be a useful method in biomedical areas such as ophthalmology, dentistry and dermatology, among many others. In all these applications the main goal is to reconstruct the internal structure of the sample, from which the physician's expertise may recognize and diagnose the existence of a disease. Nowadays OCT has been taken one step further and is used to study the mechanics of particular types of materials, where the resulting information involves more than just their internal structure, including the measurement of parameters such as displacement, stress and strain. Here we report on a spectral OCT system used to image the internal 3D microstructure and displacement maps of a PMMA (poly-methyl-methacrylate) sample subjected to deformation by controlled three-point bending and tilting. The internal mechanical response of the polymer is shown as consecutive 2D images.

  6. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    2008-01-01

    Circuits and Systems, vol. 1 (2), 2007, pp. 116-127. • O. Dandekar, C. Castro-Pareja, and R. Shekhar, “FPGA-based real-time 3D image...How low can we go?,” presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505. • C. R. Castro-Pareja, O. Dandekar, and R...Venugopal, C. R. Castro-Pareja, and O. Dandekar, “An FPGA-based 3D image processor with median and convolution filters for real-time applications,” in

  7. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments which are orthogonal on a non-uniform lattice and whose polynomials are numerically stable to scale, consequently producing a set of weighted orthonormal polynomials. The dual Hahn is the general case of Tchebichef and Krawtchouk, and the orthogonality of dual Hahn moments eliminates the need for numerical approximations. The computational aspects and symmetry property of 3D weighted dual Hahn moments are discussed in detail. To address their lack of invariance for large 3D images, which leads to overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants (3D-GWDHMIs), is presented, expressed as a linear combination of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed. In experimental studies, the local and global capability of the 3D-WDHMs for reconstructing noise-free and noisy 3D images has been compared with that of other orthogonal moments, such as 3D Tchebichef and 3D Krawtchouk moments, using the Princeton Shape Benchmark database. For pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for pattern classification of 3D images with and without noise.
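
The regular geometric moments from which such invariants are built are straightforward to compute; the sketch below shows 3D geometric moments and their translation-invariant central form. This is illustrative only: the paper's dual Hahn polynomials and full similarity invariants are not reproduced here.

```python
import numpy as np

def geometric_moment(vol, p, q, r):
    """Regular 3D geometric moment m_pqr = sum x^p y^q z^r f(x, y, z)."""
    z, y, x = np.indices(vol.shape)
    return float((vol * x ** p * y ** q * z ** r).sum())

def central_moment(vol, p, q, r):
    """Central moment mu_pqr, invariant under translation of the volume."""
    m000 = geometric_moment(vol, 0, 0, 0)
    cx = geometric_moment(vol, 1, 0, 0) / m000
    cy = geometric_moment(vol, 0, 1, 0) / m000
    cz = geometric_moment(vol, 0, 0, 1) / m000
    z, y, x = np.indices(vol.shape)
    return float((vol * (x - cx) ** p * (y - cy) ** q * (z - cz) ** r).sum())

# Translation invariance on a small binary blob
vol = np.zeros((16, 16, 16))
vol[4:8, 5:9, 6:10] = 1.0
shifted = np.roll(vol, (3, 2, 1), axis=(0, 1, 2))
```

Scaling and rotation invariance then require appropriate normalisations of these central moments, which is the role of the paper's generalized invariants.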

  8. Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging

    Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.

    2016-11-01

    Blood vessels are the only system that provides nutrients and oxygen to every part of the body. Many diseases can have significant effects on blood vessel formation, so the vascular network can be a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by high costs, contrast agents, ionizing radiation, or a combination of these. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth than purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired on rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four of the six descriptors were statistically different (VD, NB, DM, ICM, p < 0.05) when comparing the two anatomic locations on the same day and when considering the same anatomic location at two separate times (i.e. before and after burn surgery). The study demonstrates an
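
Once a binary skeleton is available, descriptors such as the number of branches can be read off from neighbour counts. Below is a deliberately simplified sketch using 4-connectivity; the paper's vesselness filtering, medial-axis extraction and exact parameter definitions are not reproduced.

```python
import numpy as np

def skeleton_stats(skel):
    """End points and branch points of a binary skeleton from 4-connected
    neighbour counts (a simplification: production pipelines typically use
    8-connectivity plus junction clean-up)."""
    s = skel.astype(int)
    nb = (np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0) +
          np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1)) * s
    return {"endpoints": int((nb == 1).sum()),
            "branch_points": int((nb >= 3).sum())}

# A T-shaped skeleton: one junction, three end points
skel = np.zeros((12, 12), dtype=bool)
skel[5, 2:9] = True      # horizontal bar
skel[5:10, 5] = True     # vertical stem meeting the bar at (5, 5)
stats = skeleton_stats(skel)
```

Metrics such as the distance metric (DM) and sum of angles metric (SOAM) are then computed along the paths connecting these end and branch points.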

  9. DETERMINATION OF INTERNAL STRAIN IN 3-D BRAIDED COMPOSITES USING OPTIC FIBER STRAIN SENSORS

    Yuan, Shenfang; Huang, Rui; Li, Xianghua; Liu, Xiaohui

    2004-01-01

    A reliable understanding of the properties of 3-D braided composites is of primary importance for proper utilization of these materials. A new method is introduced to study the mechanical performance of braided composite materials using embedded optic fiber sensors. Experimental research is performed to devise a method of incorporating optic fibers into a 3-D braided composite structure. The efficacy of this new testing method is evaluated on two counts. First, the optical performance of the optic fibers is studied before and after incorporation into the 3-D braided composites, as well as after completion of the manufacturing process, to validate the ability of the optic fiber to survive manufacturing; the influence of the incorporated optic fiber on the original braided composite is also investigated by tension and compression experiments. Second, two kinds of optic fiber sensors are co-embedded into 3-D braided composites to evaluate their respective ability to measure the internal strain. Experimental results show that multiple optic fiber sensors can be co-braided into 3-D braided composites to determine their internal strain, which is difficult to achieve with other existing methods.

  10. Electro-optical measurements of 3D-stc detectors fabricated at ITC-irst

    Zoboli, Andrea [INFN and Department of ICT, University of Trento, via Sommarive, 14 - 38050 Povo di Trento (Italy)], E-mail: zoboli@dit.unitn.it; Boscardin, Maurizio [ITC-irst, Microsystems Division, via Sommarive, 18 - 38050 Povo di Trento (Italy); Bosisio, Luciano [INFN and Department of Physics, University of Trieste, via A. Valerio, 2 - 34127 Trieste (Italy); Dalla Betta, Gian-Franco [INFN and Department of ICT, University of Trento, via Sommarive, 14 - 38050 Povo di Trento (Italy); Piemonte, Claudio; Pozza, Alberto; Ronchin, Sabina; Zorzi, Nicola [ITC-irst, Microsystems Division, via Sommarive, 18 - 38050 Povo di Trento (Italy)

    2007-12-11

    In the past two years 3D silicon radiation detectors have been developed at ITC-irst (Trento, Italy). As a first step toward full 3D devices, simplified structures featuring columnar electrodes of one doping type only were fabricated. This paper reports the electro-optical characterization of 3D test diodes made with this approach. Experimental results and TCAD simulations provide good insight into the charge collection mechanism and response speed limitation of these structures.

  11. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3-D refractive index maps

    Kim, Kyoohyun

    2016-01-01

    Optical trapping can be used to manipulate the three-dimensional (3-D) motion of spherical particles based on a simple prediction of the optical forces and the resulting motion of the samples. However, controlling the 3-D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieved real-time optical control of arbitrarily shaped particles by combining wavefront shaping of the trapping beam with measurements of the 3-D refractive index (RI) distribution of the samples. Engineering the 3-D light field distribution of the trapping beam based on the measured 3-D RI map of the sample generates a light mould, which can be used to manipulate colloidal and biological samples of arbitrary orientation and/or shape. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without a priori information about the sample geometry. The proposed method can ...

  12. Display of travelling 3D scenes from single integral-imaging capture

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.
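
Selecting the focused plane of a plenoptic image amounts to resampling the elemental images; the refocusing part can be sketched with a 1-D shift-and-sum model. This is a generic plenoptic refocusing scheme, not the authors' algorithm; lens count and disparity values are made up.

```python
import numpy as np

def refocus(elemental, shift_px):
    """Shift-and-sum refocusing of a 1-D row of elemental images.
    Shifting elemental image k by k * shift_px before averaging selects the
    depth plane whose per-lens disparity equals shift_px."""
    acc = np.zeros(elemental.shape[1])
    for k in range(elemental.shape[0]):
        acc += np.roll(elemental[k], k * shift_px)
    return acc / elemental.shape[0]

# A point at a depth giving 2 px of disparity between neighbouring lenses
n_lenses, width, x0, disparity = 5, 64, 40, 2
elemental = np.zeros((n_lenses, width))
for k in range(n_lenses):
    elemental[k, x0 - k * disparity] = 1.0
in_focus = refocus(elemental, disparity)      # impulses realign -> sharp peak
out_focus = refocus(elemental, 0)             # impulses spread -> low peak
```

Sweeping the shift over a range of values, or restricting which elemental images contribute, is what lets the focused plane and FOV be chosen at will.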

  13. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto the finger surface. From another viewpoint, the fringe patterns deformed by the finger surface are captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments were carried out acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
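
Fringe projection profilometry of this kind typically recovers a wrapped phase map from phase-shifted sinusoidal patterns; a minimal four-step example is shown below. The paper's optimum three-fringe number selection for phase unwrapping is not reproduced here.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe patterns I_k = A + B*cos(phi + k*pi/2),
    k = 0..3:  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes over a known phase ramp (kept away from the +/- pi wrap)
phi = np.linspace(-3.0, 3.0, 101)
a, b = 0.5, 0.3                            # background and modulation
patterns = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*patterns)
```

The wrapped phase is then unwrapped and converted to height through the system calibration, which maps phase to 3D coordinates on the finger surface.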

  14. Accuracy of 3D Imaging Software in Cephalometric Analysis

    2013-06-21

    …orthodontic software program (Dolphin 3D) used for measurement and analysis of craniofacial dimensions. Three-dimensional reconstructions…

  15. 3D Imaging Technology’s Narrative Appropriation in Cinema

    Kiss, Miklós; van den Oever, Annie; Fossati, Giovanna

    2016-01-01

    This chapter traces the cinematic history of stereoscopy by focusing on the contemporary dispute about the values of 3D technology, which is seen either as mere visual attraction or as a technique that perfects the cinematic illusion by increasing perceptual immersion. By taking a neutral stance…

  16. Statistical skull models from 3D X-ray images

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and mandible are high-density meshes extracted from 3D CT scans. All patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based on the minimization of a distance, defined on the vertices, between the high-density mesh and a shared low-density mesh, in a multi-resolution approach. A principal component analysis is performed on the normalized registered data to build a statistical linear model of skull and mandible shape variation. The accuracy of the reconstruction is under a millimetre in the shape...

  17. 360 degree realistic 3D image display and image processing from real objects

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directionally continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle is achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can take full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of the system.

  19. 3D Printing Optical Engine for Controlling Material Microstructure

    Huang, Wei-Chin; Chang, Kuang-Po; Wu, Ping-Han; Wu, Chih-Hsien; Lin, Ching-Chih; Chuang, Chuan-Sheng; Lin, De-Yau; Liu, Sung-Ho; Horng, Ji-Bin; Tsau, Fang-Hei

    Controlling the cooling rate of an alloy during melting and resolidification is the most commonly used method for varying the material microstructure and, consequently, the resulting properties. However, the cooling rate in selective laser melting (SLM) production is restricted by a preset parameter optimized for a dense product, so the headroom for locally manipulating material properties within a process is marginal. In this study, we present an optical engine for locally controlling material microstructure in an SLM process. It uses an innovative method to control and adjust the thermal history of the solidification process to obtain the desired material microstructure and thereby drastically improve quality. Process parameters are selected locally for specific material requirements according to designed characteristics, using the thermodynamic principles of the solidification process. The engine utilizes complex laser beam shaping with an adaptive irradiation profile to permit local control of material characteristics as desired. This technology could be useful for industrial applications such as medical implants and the aerospace and automobile industries.

  20. A flexible new method for 3D measurement based on multi-view image sequences

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is the foundation of reverse engineering. This paper develops a new, flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filter method. A point cloud for one view is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors caused by newly added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
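The Hellinger kernel mentioned above compares histogram-like SIFT descriptors through element-wise square roots rather than Euclidean distance, which de-emphasizes dominant bins. A minimal sketch, assuming L1-normalized histograms (function names are illustrative, not from the paper):

```python
import math

def hellinger_kernel(h1, h2):
    """Hellinger (Bhattacharyya) kernel between two L1-normalized
    histograms: k(h1, h2) = sum_i sqrt(h1[i] * h2[i])."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def hellinger_distance(h1, h2):
    # Distance derived from the kernel; 0 for identical histograms,
    # at most 1 for histograms with disjoint support.
    return math.sqrt(max(0.0, 1.0 - hellinger_kernel(h1, h2)))

def l1_normalize(h):
    s = sum(h)
    return [v / s for v in h] if s else list(h)

# Two toy 4-bin descriptors.
d1 = l1_normalize([3, 1, 0, 4])
d2 = l1_normalize([2, 2, 1, 3])
print(round(hellinger_distance(d1, d1), 6))  # identical -> 0.0
print(0.0 <= hellinger_distance(d1, d2) <= 1.0)
```

In practice the same effect is often obtained by square-rooting L1-normalized SIFT descriptors ("RootSIFT"-style) and then using ordinary Euclidean matching.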

  1. A prototype fan-beam optical CT scanner for 3D dosimetry

    Campbell, Warren G.; Rudko, D. A.; Braam, Nicolas A.; Jirasek, Andrew [University of Victoria, Victoria, British Columbia V8P 5C2 (Canada); Wells, Derek M. [British Columbia Cancer Agency, Vancouver Island Centre, Victoria, British Columbia V8R 6V5 (Canada)

    2013-06-15

    Purpose: The objective of this work is to introduce a prototype fan-beam optical computed tomography scanner for three-dimensional (3D) radiation dosimetry. Methods: Two techniques of fan-beam creation were evaluated: a helium-neon laser (HeNe, λ = 543 nm) with a line-generating lens, and a laser diode module (LDM, λ = 635 nm) with a line-creating head module. Two physical collimator designs were assessed: a single-slot collimator and a multihole collimator. Optimal collimator depth was determined by observing the signal of a single photodiode at varying collimator depths. A method of extending the dynamic range of the system is presented. Two sample types were used for the evaluations: nondosimetric absorbent solutions and irradiated polymer gel dosimeters, each housed in 1-liter cylindrical plastic flasks. Imaging protocol investigations were performed to address ring artefacts and image noise. Two image artefact removal techniques were performed in sinogram space. Collimator efficacy was evaluated by imaging highly opaque samples of scatter-based and absorption-based solutions. A noise-based flask registration technique was developed. Two protocols for gel manufacture were examined. Results: The LDM proved advantageous over the HeNe laser due to its reduced noise. Also, the LDM uses a wavelength more suitable for the PRESAGE™ dosimeter. A collimator depth of 1.5 cm was found to be an optimal balance between scatter rejection, signal strength, and ease of manufacture. The multihole collimator is capable of maintaining accurate scatter rejection at high levels of opacity with scatter-based solutions (T < 0.015%). Imaging protocol investigations support the need for preirradiation and postirradiation scanning to reduce reflection-based ring artefacts and to accommodate flask imperfections and gel inhomogeneities. Artefact removal techniques in sinogram space eliminate streaking artefacts and reduce ring artefacts by up to ~40% in magnitude…

  2. MultiFocus Polarization Microscope (MF-PolScope) for 3D polarization imaging of up to 25 focal planes simultaneously.

    Abrahamsson, Sara; McQuilken, Molly; Mehta, Shalin B; Verma, Amitabh; Larsch, Johannes; Ilic, Rob; Heintzmann, Rainer; Bargmann, Cornelia I; Gladfelter, Amy S; Oldenbourg, Rudolf

    2015-03-23

    We have developed an imaging system for 3D time-lapse polarization microscopy of living biological samples. Polarization imaging reveals the position, alignment and orientation of submicroscopic features in label-free as well as fluorescently labeled specimens. Optical anisotropies are calculated from a series of images in which the sample is illuminated by light of different polarization states. Because of the number of images necessary to collect both multiple polarization states and multiple focal planes, 3D polarization imaging is most often prohibitively slow. Our MF-PolScope system employs multifocus optics to form an instantaneous 3D image of up to 25 simultaneous focal planes. We describe this optical system and show examples of 3D multi-focus polarization imaging of biological samples, including a protein assembly study in budding yeast cells.
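Computing anisotropy from images taken under different polarization states can be illustrated with the classic four-angle scheme; this is a hedged, single-pixel sketch of the general idea under the model I(θ) = a + b·cos 2(θ − φ), not the MF-PolScope's actual calibration or processing chain:

```python
import math

def anisotropy_from_four_angles(i0, i45, i90, i135):
    """Orientation and degree of anisotropy from intensities recorded
    at polarizer angles 0, 45, 90 and 135 degrees, assuming
    I(theta) = a + b*cos(2*(theta - phi))."""
    s1 = i0 - i90           # = 2b*cos(2*phi)
    s2 = i45 - i135         # = 2b*sin(2*phi)
    total = i0 + i90        # = 2a
    phi = 0.5 * math.atan2(s2, s1)       # orientation, radians
    ratio = math.hypot(s1, s2) / total   # modulation depth b/a
    return phi, ratio

# Synthetic check with a = 1.0, b = 0.3, phi = 30 degrees.
a, b, phi = 1.0, 0.3, math.radians(30)
I = [a + b * math.cos(2 * (math.radians(t) - phi)) for t in (0, 45, 90, 135)]
est_phi, est_ratio = anisotropy_from_four_angles(*I)
print(round(math.degrees(est_phi), 6), round(est_ratio, 6))
```

Applied per pixel and per focal plane, a scheme of this kind yields orientation and anisotropy maps from a polarization image series.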

  3. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging using Fully Addressed and Row-Column Addressed 2-D Transducer Arrays

    Bouzari, Hamed

    Compared with conventional 2-D ultrasound imaging, real-time 3-D (or 4-D) ultrasound imaging has several advantages, resulting in significant progress in ultrasound imaging instrumentation over the past decade. Viewing the patient's anatomy as a volume helps physicians to comprehend the important diagnostic information in a noninvasive manner. Diagnostic and therapeutic decisions often require accurate estimates of, e.g., organ, cyst, or tumor volumes. 3-D ultrasound imaging can provide these measurements without relying on the geometrical assumptions and operator-dependent skills involved… Companies have produced ultrasound scanners using 2-D transducer arrays with enough transducer elements to produce high-quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners…

  4. Optical security and anti-counterfeiting using 3D screen printing

    Wu, W. H.; Yang, W. K.; Cheng, S. H.; Kuo, M. K.; Lee, H. W.; Chang, C. C.; Jeng, G. R.; Liu, C. P.

    2007-04-01

    This work presents a novel method for producing optical decryption keys by screen printing technology. The key is mainly used to decrypt encoded information hidden inside documents containing Moiré patterns and integral photographic 3D auto-stereoscopic images as a second-line security feature. The proposed method can also be applied as an anti-counterfeiting measure in artistic screening. Decryption is performed by matching the correct angle between the decoding key and the document with a text or a simple geometric pattern. This study presents the theoretical analysis and experimental results of decryption key production using the best combination of Moiré pattern size and screen printing parameters. Experimental results reveal that the proposed method can be applied in anti-counterfeit document design for the fast and low-cost production of decryption keys.

  5. Novel metrics and methodology for the characterisation of 3D imaging systems

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Lohse, Niels; Jackson, Michael R.

    2017-04-01

    The modelling, benchmarking and selection process for non-contact 3D imaging systems relies on the ability to characterise their performance. Characterisation methods that require optically compliant artefacts, such as matt white spheres or planes, fail to reveal the performance limitations of a 3D sensor as encountered when measuring a real-world object with a problematic surface finish. This paper reports a method of evaluating the performance of 3D imaging systems on surfaces of arbitrary isotropic surface finish, position and orientation. The method involves capturing point clouds from a set of samples in a range of surface orientations and distances from the sensor. Point clouds are processed to create a single performance chart per surface finish, which shows both whether a point is likely to be recovered and the expected point noise as a function of surface orientation and distance from the sensor. In this paper, the method is demonstrated using a low-cost pan-tilt table and an active stereo 3D camera, whose performance is characterised by the fraction and quality of recovered data points on isotropic aluminium surfaces ranging in roughness average (Ra) from 0.09 to 0.46 μm, at angles of up to 55° relative to the sensor and at distances from 400 to 800 mm to the scanner. Results from a matt white surface similar to those used in previous characterisation methods contrast drastically with results from even the dullest aluminium sample tested, demonstrating the need to characterise sensors by their limitations, not just best-case performance.

  6. Rapid reconstruction of 3D neuronal morphology from light microscopy images with augmented rayburst sampling.

    Ming, Xing; Li, Anan; Wu, Jingpeng; Yan, Cheng; Ding, Wenxiang; Gong, Hui; Zeng, Shaoqun; Liu, Qian

    2013-01-01

    Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis.

  8. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views predicts well the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains single-view as well as symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, in which we find that the quality prediction bias for asymmetrically distorted images can lean in opposite directions (overestimation or underestimation), depending on the distortion types and levels. Our subjective test also suggests that the eye dominance effect does not have a strong impact on the visual quality judgments of stereoscopic images. Furthermore, we develop an information-content and divisive-normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular-rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of stereoscopic images.

  9. Reconstruction of 3D Digital Image of Weeping Forsythia Pollen

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, a major advance over normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized deeply and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves image resolution by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of 35 Weeping Forsythia pollen digital images was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that analyzing the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO) is straightforward.

  10. Infrared imaging of the polymer 3D-printing process

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory, which prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it: the two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
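The bond criterion described above (substrate still above the glass transition when the next layer arrives) can be sketched with a simple Newtonian-cooling model. All numbers below are hypothetical illustrations, not measurements from the paper:

```python
import math

def layer_temperature(t, t_extrude, t_ambient, tau):
    """Newtonian-cooling sketch of a deposited layer's temperature:
    T(t) = T_amb + (T_extrude - T_amb) * exp(-t / tau)."""
    return t_ambient + (t_extrude - t_ambient) * math.exp(-t / tau)

# Hypothetical ABS-like numbers: extruded at 230 C, ambient 40 C,
# glass transition ~105 C, cooling time constant 20 s.
T_g, tau = 105.0, 20.0
t_interlayer = 15.0  # seconds until the next layer is deposited
T_substrate = layer_temperature(t_interlayer, 230.0, 40.0, tau)
print(T_substrate > T_g)  # True -> a strong inter-layer bond is expected
```

The IR measurements in the paper serve exactly this purpose: establishing the actual decay curve, rather than an assumed exponential, for each printer and material.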

  11. Multi-layer 3D imaging using a few viewpoint images and depth map

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that generates multi-layer images from a few viewpoint images for displaying a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "shift and subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype of two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a ±20-degree area. In addition, we render pseudo multi-viewpoint images using the depth map, so that motion parallax is generated at the same time.

  12. 3-D Target Location from Stereoscopic SAR Images

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
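The height retrieval described above exploits the fact that the same target lays over by different amounts in the two passes. The following is an assumption-laden, flat-earth illustration of that principle, not the paper's full imaging geometry (which also involves squint angle and target bearing); the displacement model d = h·tan(depression) is a simplification introduced here:

```python
import math

def height_from_layover(d1, d2, dep1_deg, dep2_deg):
    """Estimate target height from the layover displacement observed in
    two SAR images collected at different depression angles.

    Simplified flat-earth model: a target at height h lays over in
    ground range by d = h * tan(depression), so two passes give
    h = (d1 - d2) / (tan(dep1) - tan(dep2)).
    """
    t1 = math.tan(math.radians(dep1_deg))
    t2 = math.tan(math.radians(dep2_deg))
    return (d1 - d2) / (t1 - t2)

# Synthetic check: a 10 m tall target viewed at 30 and 45 degrees.
h = 10.0
d30 = h * math.tan(math.radians(30))
d45 = h * math.tan(math.radians(45))
print(round(height_from_layover(d30, d45, 30, 45), 6))  # -> 10.0
```

The larger the difference between the two viewing geometries, the larger the denominator and the better the height sensitivity, which is why "suitably different" pass geometries are required.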

  13. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, neither the generation of a 3D image with high geometric fidelity nor the quantitative evaluation of a 3D image's geometric accuracy has been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of 3D image rendering performance at 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  14. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    Yun, G. S., E-mail: gunsu@postech.ac.kr; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H. [Department of Physics, Pohang University of Science and Technology, Pohang 790-784 (Korea, Republic of); Lee, W.; Park, H. K. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Park, H.; Woo, D. S.; Kim, K. W. [School of Electrical Engineering, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California, Davis, California 95616 (United States); Ito, N. [KASTEC, Kyushu University, Kasuga-shi, Fukuoka 812-8581 (Japan); Mase, A. [Ube National College of Technology, Ube-shi, Yamaguchi 755-8555 (Japan); Lee, S. G. [National Fusion Research Institute, Daejeon 305-333 (Korea, Republic of)

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated from the first ECEI system by 1/16th of the torus. For the first time, the dynamical evolution of MHD instabilities from the plasma core to the edge has been visualized in quasi-3D for a wide range of KSTAR operation (B₀ = 1.7–3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components, including the development of broad-band polarization rotators for imaging of the fundamental ordinary-mode ECE as well as the usual 2nd-harmonic extraordinary-mode ECE.

  15. 3D-printed eagle eye: Compound microlens system for foveated imaging

    Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.

    2017-01-01

    We present a highly miniaturized camera, mimicking the natural vision of predators, made by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an angular resolution increasing to up to 2 cycles/deg in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm. Because the objectives are printed in one single step without the need for any further assembly or alignment, this approach allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, and security. PMID:28246646

  16. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    T. T. Truong

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform was introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to Compton camera imaging principles, and is invertible under special conditions. As these transforms are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.

  17. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. Based on the characteristics and three-dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volumes, a new fusion approach using a 3-D wavelet based method is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration, and 3-D inverse wavelet transform. In particular, a novel method, Ratio Image Based Spectral Resampling (RIBSR), is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image, and a new fusion rule, Average and Substitution (A&S), is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than approaches using the 2-D wavelet transform. It is also revealed that the RIBSR method interpolates the missing data more effectively and correctly, and that the A&S rule integrates coefficients of the source images in the 3-D wavelet domain so as to preserve both spatial and spectral features of the source images more properly.

  18. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    Yoshikawa, Kohki [Tokyo Univ. (Japan). Inst. of Medical Science; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori

    1995-12-01

    Recently, it has become easy to analyze information obtained by 3D MR imaging owing to remarkable progress in fast MR imaging techniques and analysis tools. Six patients suffering from aphasia (4 cerebral infarctions and 2 bleedings) underwent 3D MR imaging (3D FLASH; TR/TE/flip angle: 20-50 msec/6-10 msec/20-30 degrees), and the volume information was analyzed by multiple projection reconstruction (MPR), surface-rendering 3D reconstruction, and volume-rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four patients were diagnosed clinically with Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed with Wernicke's aphasia, and their lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis provides precise locational information about cerebral cortical lesions. (author)

  19. Review of three-dimensional (3D) surface imaging for oncoplastic, reconstructive and aesthetic breast surgery.

    O'Connell, Rachel L; Stevens, Roger J G; Harris, Paul A; Rusby, Jennifer E

    2015-08-01

    Three-dimensional surface imaging (3D-SI) is being marketed as a tool in aesthetic breast surgery. It has recently also been studied in the objective evaluation of the cosmetic outcome of oncological procedures. The aim of this review is to summarise the use of 3D-SI in oncoplastic, reconstructive and aesthetic breast surgery. An extensive literature review was undertaken to identify published studies. Two reviewers independently screened all abstracts and selected relevant articles using specific inclusion criteria. Seventy-two articles relating to 3D-SI for breast surgery were identified. These covered endpoints such as image acquisition, calculations and data obtainable, comparison of 3D and 2D imaging, and clinical research applications of 3D-SI. The literature provides a favourable view of 3D-SI. However, evidence of its superiority over current methods of clinical decision making, surgical planning, communication and evaluation of outcome is required before it can be accepted into mainstream practice.

  20. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view, this is our workflow:
    - theoretical study of the geometrical configuration of rib vault systems;
    - a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form;
    - a 3D model based on image-matching 3D reconstruction methods;
    - a comparison between the theoretical 3D model and the 3D model based on image matching.

  1. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding of the subtle nature of PSI systems still needs to be achieved. In this study, a 3D printing technique based on computed tomography (CT) image data was utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as a PSI group and a control group, with no significant difference in age or sex (p > 0.05). The PSI group was treated with 3D-printed cutting guides whereas the control group was treated with conventional instrumentation (CI). Evaluation of the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of the patients showed that preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there was a significant improvement in achieving the desired implant position (p < 0.05) with PSI compared with the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  2. Evaluation of stereoscopic 3D displays for image analysis tasks

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  3. Online reconstruction of 3D magnetic particle imaging data

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of superparamagnetic iron oxide particles at high temporal resolution. Raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date, image reconstruction has been performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
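    The block-averaging step can be sketched as follows (illustrative only; the framework's adaptive selection of the block size is not shown, and the function name is an assumption):

```python
import numpy as np

def block_average(frames, block):
    """Average consecutive raw-data frames in blocks of `block` frames,
    trading temporal resolution for signal-to-noise ratio."""
    n = (len(frames) // block) * block          # drop the incomplete tail
    stacked = np.asarray(frames[:n])
    return stacked.reshape(-1, block, *np.shape(frames[0])).mean(axis=1)
```

    An adaptive reconstruction loop would grow `block` whenever reconstruction lags behind acquisition.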

  4. Building Extraction from DSM Acquired by Airborne 3D Image

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regulation are studied in depth to extract buildings from the DSM data produced in this paper. Building segmentation is the first step, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract the contours of buildings effectively. A sub-image (e.g., 50 × 50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, then the threshold of the sub-image is selected using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level and the ratio mean square of the sub-image. The edge points of the building are grouped according to the azimuth of neighboring points, and the optimal azimuth of the points belonging to the same group is then calculated using line interpolation.
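    The per-tile iterative threshold selection can be sketched with a standard isodata-style iteration (a plain sketch; the paper's ratio-mean-square adaptation is omitted):

```python
import numpy as np

def iterative_threshold(tile, tol=0.5):
    """Iteratively refine a threshold for one sub-image tile:
    start at the tile mean, then move to the midpoint of the
    class means until the threshold stabilizes."""
    tile = np.asarray(tile, float)
    t = float(tile.mean())
    while True:
        lo, hi = tile[tile <= t], tile[tile > t]
        if lo.size == 0 or hi.size == 0:  # degenerate (uniform) tile
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```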

  5. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  6. Imaging of discontinuities in nonlinear 3-D seismic inversion

    Carrion, P.M.; Cerveny, V. (PPPG/UFBA, Salvador (Brazil))

    1990-09-01

    The authors present a nonlinear approach for reconstruction of discontinuities in geological environment (earth's crust, say). The advantage of the proposed method is that it is not limited to a Born approximation (small angles of propagation and weak scatterers). One can expect significantly better images since larger apertures including wide angle reflection arrivals can be incorporated into the imaging operator. In this paper, they treat only compressional body waves: shear and surface waves are considered as noise.

  7. Real-time auto-stereoscopic visualization of 3D medical images

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporaneous views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  8. Contactless operating table control based on 3D image processing.

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, like the operating room. Here, manifold medical disciplines cause a great variety of procedures, and thus of staff and equipment. One universal challenge is meeting the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk of hazard to the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, yielding a System Usability Scale (Brooke) score of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even higher-risk processes could be controlled with the proposed interface as such interfaces become safer and more direct.

  9. Image quality of a cone beam O-arm 3D imaging system

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
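    The MTF-from-PSF measurement mentioned above can be sketched in one dimension (an illustrative computation; the study's exact sampling, windowing and normalization are not specified here):

```python
import numpy as np

def mtf_from_psf(psf, pixel_mm):
    """1D MTF from a sampled point-spread function: magnitude of its
    Fourier transform, normalized to 1 at zero frequency."""
    otf = np.abs(np.fft.rfft(psf))
    mtf = otf / otf[0]
    freqs = np.fft.rfftfreq(len(psf), d=pixel_mm)  # cycles per mm
    return freqs, mtf
```

    The spatial frequency at which `mtf` falls to 0.10 gives the "10% MTF" figure quoted in such reports.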

  10. Air-touch interaction system for integral imaging 3D display

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. The system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. We used multi-layer B-spline surface approximation to easily detect the fingertip and gestures, at heights of less than 10 cm above the screen, from the input hand image. The proposed system can serve as an effective human-computer interaction method for tabletop-type 3D displays.

  11. Radar Imaging of Spheres in 3D using MUSIC

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
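    The SVD-plus-MUSIC-functional evaluation described above can be sketched as follows (a minimal point-scatterer sketch; the steering model, function names and the way the signal-subspace rank is chosen are assumptions, and the broadside-beam normalization is omitted):

```python
import numpy as np

def music_image(K, steering, n_signal):
    """MUSIC pseudospectrum from a multistatic response matrix K.
    steering: (n_pix, n_elem) modeled array responses for each
    candidate image point; n_signal: number of singular values kept
    as signal (the paper selects this via a ~1% noise threshold on
    the singular-value spectrum)."""
    U, s, Vh = np.linalg.svd(K)
    Un = U[:, n_signal:]                 # noise subspace
    proj = Un.conj().T @ steering.T      # project steering vectors
    return 1.0 / (np.linalg.norm(proj, axis=0)**2 + 1e-30)
```

    Steering vectors at true scatterer locations are orthogonal to the noise subspace, so the functional peaks there.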

  12. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, as the next step beyond current glasses-type 3D displays, we suggest designing the viewing zone of a glasses-free 3D display using a DOE (Diffusing Optical Element). The viewing zone of the proposed method is larger than that of current parallax barrier or lenticular methods. The proposed method is shown to enable expansion and adjustment of the viewing zone according to the viewing distance.

  13. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    Calafiore, Giuseppe; Koshelev, Alexander; Allen, Frances I.; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-09-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three-dimensional structure achieved by direct nanoimprint lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable a high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted and optically characterized. Scanning electron microscopy and optical measurements confirmed the good lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enable advancements in areas such as integrated optics and sensing, achieving enhanced portability and versatility of fiber optic components.

  15. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI.

  16. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  18. Noninvasive metabolic imaging of engineered 3D human adipose tissue in a perfusion bioreactor.

    Ward, Andrew; Quinn, Kyle P; Bellas, Evangelia; Georgakoudi, Irene; Kaplan, David L

    2013-01-01

    The efficacy and economy of most in vitro human models used in research is limited by the lack of a physiologically-relevant three-dimensional perfused environment and the inability to noninvasively quantify the structural and biochemical characteristics of the tissue. The goal of this project was to develop a perfusion bioreactor system compatible with two-photon imaging to noninvasively assess tissue engineered human adipose tissue structure and function in vitro. Three-dimensional (3D) vascularized human adipose tissues were engineered in vitro, before being introduced to a perfusion environment and tracked over time by automated quantification of endogenous markers of metabolism using two-photon excited fluorescence (TPEF). Depth-resolved image stacks were analyzed for redox ratio metabolic profiling and compared to prior analyses performed on 3D engineered adipose tissue in static culture. Traditional assessments with H&E staining were used to qualitatively measure extracellular matrix generation and cell density with respect to location within the tissue. The distribution of cells within the tissue and average cellular redox ratios were different between static and perfusion cultures, while the trends of decreased redox ratio and increased cellular proliferation with time in both static and perfusion cultures were similar. These results establish a basis for noninvasive optical tracking of tissue structure and function in vitro, which can be applied to future studies to assess tissue development or drug toxicity screening and disease progression.
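    The redox-ratio metabolic profiling mentioned above is commonly computed per voxel from the two TPEF channels; a minimal sketch, assuming the FAD/(FAD+NADH) definition (the study's exact channel normalization may differ):

```python
import numpy as np

def redox_ratio(fad_stack, nadh_stack, eps=1e-9):
    """Per-voxel optical redox ratio FAD / (FAD + NADH) from two
    depth-resolved TPEF intensity stacks; eps avoids division by
    zero in empty voxels."""
    fad = np.asarray(fad_stack, float)
    nadh = np.asarray(nadh_stack, float)
    return fad / (fad + nadh + eps)
```

    Averaging the per-voxel ratio over segmented cell regions gives the cellular redox values tracked over culture time.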

  19. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work are pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building, based on visual assessment.
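    The Delaunay 2.5D step can be sketched as follows (a minimal sketch using SciPy; the refinement and texturing stages of the workflow are not shown):

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_25d(points_xyz):
    """2.5D triangulation: Delaunay on the XY projection of the
    point cloud, keeping Z as a per-vertex height. Returns the
    vertices and the (n_tri, 3) triangle index list."""
    pts = np.asarray(points_xyz, float)
    tri = Delaunay(pts[:, :2])
    return pts, tri.simplices
```

    This treats the cloud as a height field; fully 3D surfaces (overhangs, balconies) need a volumetric reconstruction instead.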

  20. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume

    2012-01-01

    the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering media, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels...

  1. Hybrid Method for 3D Segmentation of Magnetic Resonance Images

    ZHANGXiang; ZHANGDazhi; TIANJinwen; LIUJian

    2003-01-01

    Segmentation of complex images, especially magnetic resonance brain images, often yields unsatisfactory results when only a single approach to image segmentation is used. An approach based on the integration of several techniques seems to be the best solution. In this paper, a new hybrid method for 3D segmentation of the whole brain is introduced, based on fuzzy region growing, edge detection and mathematical morphology. The gray-level threshold controlling the region-growing process is determined by a fuzzy technique. The image gradient feature is obtained by the 3D Sobel operator, considering a 3×3×3 data block with the voxel to be evaluated at the center, while the gradient magnitude threshold is defined from the gradient magnitude histogram of the brain magnetic resonance volume. By the combined methods of edge detection and region growing, the white matter volume of the human brain is segmented perfectly. By post-processing using mathematical morphological techniques, the whole brain region is obtained. To investigate the validity of the hybrid method, two comparative experiments were carried out: the region-growing method using only the gray-level feature, and the thresholding method combining gray-level and gradient features. Experimental results indicate that the proposed method provides much better results than traditional single-technique methods for the 3D segmentation of human brain magnetic resonance data sets.
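    The 3×3×3 Sobel gradient feature can be sketched with separable 1D convolutions (a plain NumPy sketch: derivative kernel [-1, 0, 1] along one axis, smoothing [1, 2, 1] along the other two, with edge padding as an assumed boundary handling):

```python
import numpy as np

def sobel3d_magnitude(vol):
    """Gradient magnitude of a 3D volume using the separable
    3x3x3 Sobel operator."""
    def conv1d(a, k, axis):
        a = np.moveaxis(a, axis, -1)
        pad = np.pad(a, [(0, 0)] * (a.ndim - 1) + [(1, 1)], mode="edge")
        out = sum(k[i] * pad[..., i:i + a.shape[-1]] for i in range(3))
        return np.moveaxis(out, -1, axis)
    d = np.array([-1.0, 0.0, 1.0])   # central-difference kernel
    s = np.array([1.0, 2.0, 1.0])    # smoothing kernel
    grads = []
    for ax in range(3):
        g = vol.astype(float)
        for ax2 in range(3):
            g = conv1d(g, d if ax2 == ax else s, ax2)
        grads.append(g)
    return np.sqrt(sum(g * g for g in grads))
```

    Thresholding this magnitude by a value taken from its histogram gives the edge feature used alongside region growing.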

  2. High definition 3D imaging lidar system using CCD

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for high-definition three-dimensional range imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties of flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted from the laser, it triggers the polarization modulator. The pulse is scattered by the target and reflected back to the LIDAR system while the polarization modulator is rotating, so its polarization state is a function of time. The returned pulse passes through the polarization modulator in a certain polarization state, which is calculated from the pulse intensities measured by the CCD. Because the mapping between time and polarization state is known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD, and measuring only the energy of the laser pulse to obtain range, a high-resolution three-dimensional image can be acquired by the proposed three-dimensional imaging LIDAR system. Since the system only measures pulse energy, neither a high-bandwidth detector nor a high-resolution TDC is required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
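    The polarization-to-time-of-flight inversion can be sketched under simple assumptions (illustrative only: a modulator rotating the polarization linearly in time, theta = omega * t, and Malus's law behind parallel/crossed analyzers; the paper's actual modulation and calibration are not given here):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_from_polarization(i_par, i_perp, omega):
    """Recover time-of-flight from two CCD intensity measurements.
    Assumes I_par ~ cos^2(omega * t); valid within the first quarter
    rotation period of the modulator."""
    ratio = i_par / (i_par + i_perp)           # cos^2(theta)
    theta = np.arccos(np.sqrt(np.clip(ratio, 0.0, 1.0)))
    return theta / omega

def range_from_tof(t):
    """Round-trip time-of-flight to one-way range."""
    return 0.5 * C * t
```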

  3. Intraoperative 3D Ultrasonography for Image-Guided Neurosurgery

    Letteboer, Marloes Maria Johanna

    2004-01-01

    Stereotactic neurosurgery has evolved dramatically in recent years from the original rigid frame-based systems to the current frameless image-guided systems, which allow greater flexibility while maintaining sufficient accuracy. As these systems continue to evolve, more applications are found, and i

  4. 3D printing optical watermark algorithms based on the combination of DWT and Fresnel transformation

    Hu, Qi; Duan, Jin; Zhai, Di; Wang, LiNing

    2016-10-01

    With the continuous development of industrialization, 3D printing technology is gradually entering individuals' lives; however, the consequent security issues have become an urgent problem. This paper proposes 3D printing optical watermark algorithms based on the combination of the DWT and the Fresnel transformation, and uses an authorization key to restrict permission to print a 3D model. First, the algorithm applies an affine transform to the 3D model and takes the distances from the center of gravity to the vertices of the 3D object to generate a one-dimensional discrete signal; this signal then undergoes a wavelet transform, and the transformed coefficients are put through the Fresnel transformation. A mathematical model is used to embed the watermark information, finally generating a watermarked 3D digital model. The scheme was developed and tested with VC++.NET and the DIRECTX 9.0 SDK, and the results show that, in a fixed affine space, it achieves robustness to translation, rotation and scaling of the 3D model, together with good watermark invisibility. The security and authorization of 3D models are thereby protected effectively.
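    The first two stages, the centroid-to-vertex distance signal and its wavelet transform, can be sketched as follows (a minimal sketch using a single-level Haar DWT as a stand-in for the paper's wavelet; the affine normalization, Fresnel transformation and embedding model are omitted):

```python
import numpy as np

def vertex_distance_signal(vertices):
    """1D signal for watermarking: distance from the model's center
    of gravity to each vertex. Invariant to translation, and scales
    uniformly under isotropic scaling."""
    v = np.asarray(vertices, float)
    return np.linalg.norm(v - v.mean(axis=0), axis=1)

def haar_dwt1(x):
    """Single-level 1D Haar DWT of an even-length signal:
    returns (approximation, detail) coefficients."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)
```

    The watermark bits would then be embedded into the Fresnel-transformed coefficients and the chain inverted to produce the watermarked model.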

  5. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around $1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter; hence it is available to public users and convenient for accessing narrow areas. The acquired images cover various sculptures and architectural elements in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the pipeline is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered in the experimental results. A set of multi-view images of an object of interest is used as input for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be made publicly accessible via websites, DVDs or printed materials. The high-accuracy 3D models can also serve as reference data for heritage objects that must be restored after deterioration over a lifetime, natural disasters, etc.

  6. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First generation spaceborne altimetric approaches are not well-suited to generating the few meter level horizontal resolution and decimeter accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few meter transverse resolutions globally using conventional approaches and offers a feasible conceptual design which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction.

  7. Extended gray level co-occurrence matrix computation for 3D image volume

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis and has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires a specific texture analysis computation. In this paper, an extended 2D-to-3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes), resulting in 13 independent directions, and then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, and then the 3D texture features were calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting 3D texture features from the extended GLCM approach. It is a simple and comprehensive technique that can contribute to 3D image analysis.
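
    As an illustration of the 13-direction scheme, the sketch below computes co-occurrence counts directly with 3D voxel offsets (an equivalent of the slice-based decomposition described above), averages the 13 matrices and normalizes the result; the volume and grey-level count are made-up test values.

```python
import numpy as np

# The 13 independent voxel-pair directions in a 3D neighbourhood
DIRECTIONS = [(0, 0, 1), (0, 1, 0), (1, 0, 0),
              (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1),
              (1, 1, 0), (1, -1, 0),
              (1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]

def glcm_3d(volume, offset, levels):
    """Grey-level co-occurrence counts for one displacement in a 3D volume."""
    dz, dy, dx = offset
    z, y, x = volume.shape
    # Shifted views so src[i] and dst[i] are a voxel pair at the given offset
    src = volume[max(0, -dz):z - max(0, dz),
                 max(0, -dy):y - max(0, dy),
                 max(0, -dx):x - max(0, dx)]
    dst = volume[max(0, dz):z - max(0, -dz),
                 max(0, dy):y - max(0, -dy),
                 max(0, dx):x - max(0, -dx)]
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)   # count voxel pairs
    return glcm

rng = np.random.default_rng(7)
vol = rng.integers(0, 8, size=(16, 16, 16))          # toy 8-level volume
avg = np.mean([glcm_3d(vol, d, 8) for d in DIRECTIONS], axis=0)
avg /= avg.sum()                                     # normalized co-occurrence
```

Texture features (contrast, energy, entropy, etc.) would then be computed from `avg` exactly as in the 2D case.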

  8. Synthesis of image sequences for Korean sign language using 3D shape model

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information and enabling communication for deaf people. Deaf people communicate by means of sign language, but most hearing people are unfamiliar with it. This method converts text data into the corresponding image sequences of Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with the anatomical structure of the human body in mind. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  9. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.
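
    The first two steps of the pipeline above (2-D FFT over the antenna plane, then a phase shift compensating the spherical wavefront) can be sketched with plain FFTs. This is a heavily simplified monostatic sketch under invented array parameters (`dx`, `z0`, the frequency sweep); the FGG-NUFFT resampling stage itself is omitted.

```python
import numpy as np

def mmw_phase_compensate(echo, freqs, dx, z0):
    """2-D FFT over the antenna plane, then spherical-wave phase compensation.

    echo: (ny, nx, nf) scattered data over the array and frequency sweep.
    dx:   antenna element spacing, m.  z0: reference range, m.
    """
    ny, nx, nf = echo.shape
    spectrum = np.fft.fft2(echo, axes=(0, 1))         # step 1: 2-D FFT
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    k = 2 * np.pi * freqs / 3e8                       # wavenumber per frequency
    KY, KX, K = np.meshgrid(ky, kx, k, indexing="ij")
    # Dispersion relation for the monostatic (round-trip) geometry
    kz = np.sqrt(np.maximum((2 * K) ** 2 - KX ** 2 - KY ** 2, 0.0))
    return spectrum * np.exp(1j * kz * z0)            # step 2: phase shift

freqs = np.linspace(75e9, 79e9, 4)        # invented sweep
echo = np.ones((8, 8, 4), dtype=complex)  # placeholder echo data
out = mmw_phase_compensate(echo, freqs, 0.005, 0.3)
```

The conventional method would follow this with Stolt interpolation onto a uniform `kz` grid; the paper's contribution is replacing that interpolation with an FGG-NUFFT over the nonuniform spectrum.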

  10. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Yingzhi Kan

    2016-09-01

    Full Text Available In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  11. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements during biopsy acquisition are constrained only by the rectum, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  12. Oscillating optical tweezer-based 3-D confocal microrheometer for investigating the intracellular micromechanics and structures

    Ou-Yang, H. D.; Rickter, E. A.; Pu, C.; Latinovic, O.; Kumar, A.; Mengistu, M.; Lowe-Krentz, L.; Chien, S.

    2005-08-01

    Mechanical properties of living biological cells are important for cells to maintain their shapes, support mechanical stresses and move through tissue matrix. The use of optical tweezers to measure the micromechanical properties of cells has recently made significant progress. This paper presents a new approach, the oscillating optical tweezer cytorheometer (OOTC), which takes advantage of the coherent detection of harmonically modulated particle motions by a lock-in amplifier to increase sensitivity, temporal resolution and simplicity. We demonstrate that the OOTC can measure the dynamic mechanical modulus in the frequency range of 0.1-6,000 Hz at a rate as fast as one data point per second with submicron spatial resolution. More importantly, the OOTC is capable of distinguishing intrinsic non-random temporal variations from random fluctuations due to Brownian motion; this capability, not achievable by conventional approaches, is particularly useful because living systems are highly dynamic and often exhibit non-thermal, rhythmic behavior over a broad range of time scales, from a fraction of a second to hours or days. Although the OOTC is effective in measuring intracellular micromechanical properties, unless we can visualize the cytoskeleton in situ, the mechanical property data would only be as informative as that of the "Blind Men and the Elephant". To solve this problem, we take two steps: first, we use fluorescence imaging to identify the granular structures trapped by the optical tweezers, and second, we integrate the OOTC with 3-D confocal microscopy so we can take simultaneous, in situ measurements of the micromechanics and intracellular structure in living cells. In this paper, we discuss examples of applying the oscillating tweezer-based cytorheometer to investigate cultured bovine endothelial cells, the identification of caveolae as some of the granular structures in the cell, as well as our approach to integrating optical tweezers with a spinning-disk confocal microscope.
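
    The lock-in detection at the heart of the OOTC can be sketched in a few lines: multiply the measured particle position by quadrature references at the modulation frequency and average, which rejects Brownian noise and all other frequency components. The signal, noise level and frequency below are hypothetical illustrative values, not the paper's data.

```python
import numpy as np

def lock_in(signal, t, f_ref):
    """Coherent (lock-in) demodulation at reference frequency f_ref."""
    x = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    y = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return np.hypot(x, y), np.arctan2(-y, x)                   # amplitude, phase

# Hypothetical 50 Hz modulated bead response of amplitude 0.3,
# buried in Gaussian noise more than three times its size.
t = np.arange(0, 1.0, 1e-4)
sig = 0.3 * np.cos(2 * np.pi * 50 * t - 0.8) \
    + np.random.default_rng(0).normal(0.0, 1.0, t.size)
amp, phase = lock_in(sig, t, 50.0)   # recovers amp ~ 0.3, phase ~ -0.8 rad
```

The recovered amplitude and phase relative to the tweezer drive are what give the complex mechanical modulus at each modulation frequency.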

  13. How accurate are the fusion of cone-beam CT and 3-D stereophotographic images?

    Yasas S N Jayaratne

    Full Text Available BACKGROUND: Cone-beam computed tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were (1) to evaluate the feasibility of integrating 3-D photos and CBCT images, (2) to assess the degree of error that may occur during the above processes, and (3) to identify facial regions that would be most appropriate for 3-D image registration. METHODOLOGY: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. PRINCIPAL FINDINGS: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. The largest errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. CONCLUSIONS: CBCT and 3-D photographic data can be successfully fused with minimal errors. Compared to the RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning.
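
    The under-representation noted in the conclusions is easy to reproduce with a toy computation: signed distance differences of opposite sign cancel in the signed average but not in the RMS. The numbers below are illustrative, not the study's data.

```python
import numpy as np

def registration_errors(distances):
    """Signed average and RMS of surface distance differences (mm)."""
    signed_avg = distances.mean()
    rms = np.sqrt((distances ** 2).mean())
    return signed_avg, rms

# Symmetric positive/negative surface deviations cancel in the signed
# average but contribute fully to the RMS.
d = np.array([-0.8, 0.8, -0.6, 0.6])
print(registration_errors(d))  # signed average 0.0, RMS ~ 0.707
```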

  14. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for generating virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images alone, and these image-based methods also have limitations. This paper presents a new approach to image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and suitable frames were selected for 3D processing. In the second section, based on close-range photogrammetric principles and computer vision techniques, a 3D model of the area is created. In the third section, this 3D model is exported for adding to and merging with other pieces of the larger area, and scaling and alignment of the 3D model are done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model is created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city.
    Aerial photography is restricted in many countries

  15. Design, Simulation and Optimisation of a Fibre-optic 3D Accelerometer

    Yang, Zhen; Fang, Xiao-Yong; Zhou, Yan; Li, Ya-lin; Yuan, Jie; Cao, Mao-Sheng

    2013-07-01

    Using an inertial pendulum comprising two prisms, flexible beams and an elastic flake, we present a novel fibre-optic 3D accelerometer design. The retroreflection of the cube-corner prism and the spectroscopic property of an orthogonal holographic grating enable simultaneous measurement of the two transverse components of the 3D acceleration, while the longitudinal component can be determined from the elastic deformation of the flake. Owing to optical interferometry, this sensor may provide a wider range, higher sensitivity and better resolving power than other accelerometers. Moreover, we use finite element analysis to study the performance and to optimise the structural design of the sensor.

  16. Anesthesiology training using 3D imaging and virtual reality

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  17. Combining Street View and Aerial Images to Create Photo-Realistic 3D City Models

    Ivarsson, Caroline

    2014-01-01

    This thesis evaluates two different approaches to using panoramic street view images for creating more photo-realistic 3D city models compared to 3D city models based only on aerial images. The thesis work was carried out at Blom Sweden AB using their software and data. The main purpose of this thesis work has been to investigate whether street view images can aid in creating more photo-realistic 3D city models at street level through an automatic or semi-automatic approach. Two di...

  18. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    N. Soontranon; Srestasathiern, P.; Lawawirojwong, S.

    2015-01-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around $1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter; hence it is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architec...

  19. Optical Measurement of Micromechanics and Structure in a 3D Fibrin Extracellular Matrix

    Kotlarchyk, Maxwell Aaron

    2011-07-01

    In recent years, a significant number of studies have focused on linking substrate mechanics to cell function using standard methodologies to characterize the bulk properties of the hydrogel substrates. However, current understanding of the correlations between the microstructural mechanical properties of hydrogels and cell function in 3D is poor, in part because of a lack of appropriate techniques. Methods for tuning extracellular matrix (ECM) mechanics in 3D cell culture that rely on increasing the concentration of either protein or cross-linking molecules fail to control important parameters such as pore size, ligand density, and molecular diffusivity. Alternatively, ECM stiffness can be modulated independently from protein concentration by mechanically loading the ECM. We have developed an optical tweezers-based microrheology system to investigate the fundamental role of ECM mechanical properties in determining cellular behavior. Further, this thesis outlines the development of a novel device for generating stiffness gradients in naturally derived ECMs, where stiffness is tuned by inducing strain, while local structure and mechanical properties are directly determined by laser tweezers-based passive and active microrheology respectively. Hydrogel substrates polymerized within 35 mm diameter Petri dishes are strained non-uniformly by the precise rotation of an embedded cylindrical post, and exhibit a position-dependent stiffness with little to no modulation of local mesh geometry. Here we present microrheological studies in the context of fibrin hydrogels. Microrheology and confocal imaging were used to directly measure local changes in micromechanics and structure respectively in unstrained hydrogels of increasing fibrinogen concentration, as well as in our strain gradient device, in which the concentration of fibrinogen is held constant. Orbital particle tracking, and raster image correlation analysis are used to quantify changes in fibrin mechanics on the

  20. Simulating receptive fields of human visual cortex for 3D image quality prediction.

    Shao, Feng; Chen, Wanting; Lin, Wenchong; Jiang, Qiuping; Jiang, Gangyi

    2016-07-20

    Quality assessment of 3D images presents many challenges when attempting to gain a better understanding of the human visual system. In this paper, we propose a new 3D image quality prediction approach that simulates receptive fields (RFs) of the human visual cortex. To be more specific, we extract the RFs from a complete visual pathway and calculate their similarity indices between the reference and distorted 3D images. The final quality score is obtained by determining their connections via support vector regression. Experimental results on three 3D image quality assessment databases demonstrate that, in comparison with the most relevant existing methods, the devised algorithm achieves high consistency with subjective assessment, especially for asymmetrically distorted stereoscopic images.
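
    A toy sketch of the pooling stage described above: per-RF similarity indices are mapped to a quality score by a regressor. The similarity index is an assumed energy-style form, ordinary least squares stands in for the paper's support vector regression, and all data are synthetic.

```python
import numpy as np

def similarity_index(ref_rf, dist_rf, c=1e-6):
    """Energy-style similarity between reference and distorted RF responses
    (an assumed form; the paper's exact index may differ)."""
    num = 2.0 * np.sum(ref_rf * dist_rf) + c
    den = np.sum(ref_rf ** 2) + np.sum(dist_rf ** 2) + c
    return num / den

def fit_quality_model(features, scores):
    """Ordinary least squares as a stand-in for the paper's SVR stage."""
    X = np.column_stack([features, np.ones(len(features))])
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w

def predict_quality(w, feats):
    return np.append(feats, 1.0) @ w

# Synthetic training data: 5 similarity indices per image, 40 images,
# with an invented linear relation to the subjective score.
rng = np.random.default_rng(3)
feats = rng.uniform(0.2, 1.0, size=(40, 5))
mos = feats @ np.array([1.0, 0.8, 0.5, 0.3, 0.2]) + 0.1
w = fit_quality_model(feats, mos)
```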

  1. Characterization of 3D printing output using an optical sensing system

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system are presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.
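
    One simple way the captured images could be compared to the ideal model is a per-layer XOR of binary silhouettes; the paper does not specify its comparison metric, so the threshold, image size and simulated defect below are invented for illustration.

```python
import numpy as np

def layer_deviation(captured, ideal, threshold=0.5):
    """Fraction of pixels where the captured layer image disagrees with the
    rendered ideal-model silhouette (both treated as binary masks)."""
    cap = captured > threshold
    ref = ideal > threshold
    return np.mean(cap ^ ref)   # XOR: pixels present in one image only

# Ideal layer: a 32x32 square of deposited material in a 64x64 frame.
ideal = np.zeros((64, 64))
ideal[16:48, 16:48] = 1.0
# Captured layer: the same square with a missing strip of material.
captured = ideal.copy()
captured[16:48, 40:48] = 0.0
print(layer_deviation(captured, ideal))  # 256 of 4096 pixels differ -> 0.0625
```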

  2. Close-range optical measurement of aircraft's 3D attitude and accuracy evaluation

    Zhe Li; Zhenliang Ding; Feng Yuan

    2008-01-01

    A new screen-spot imaging method based on optical measurement is proposed, which is applicable to close-range measurement of an aircraft's three-dimensional (3D) attitude parameters. A laser tracker is used to perform the global calibration of the high-speed cameras and the fixed screens on the test site, as well as to establish intermediate coordinate frames among the various coordinate systems. A laser cooperation object mounted on the aircraft surface projects laser beams onto the screens, and the high-speed cameras synchronously record the light spots' positions as they change with aircraft attitude. The recorded image sequences are used to compute the aircraft attitude parameters. Based on matrix analysis, the error sources affecting measurement accuracy are analyzed, and the maximum relative error of the mathematical model is estimated. The experimental result shows that this method effectively makes changes of aircraft position distinguishable, and the error of the method is no more than 3' while the rotation angles of the three axes are within a certain range.

  3. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  4. A framework for human spine imaging using a freehand 3D ultrasound system

    Purnama, Ketut E.; Wilkinson, Michael. H. F.; Veldhuizen, Albert G.; van Ooijen, Peter. M. A.; Lubbers, Jaap; Burgerhof, Johannes G. M.; Sardjono, Tri A.; Verkerke, Gijbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  5. Internal Strain Measurement in 3D Braided Composites Using Co-braided Optical Fiber Sensors

    Shenfang YUAN; Rui HUANG; Yunjiang RAO

    2004-01-01

    3D braided composite technology has stimulated a great deal of interest worldwide. However, due to the three-dimensional nature of these composites, coupled with the shortcomings of currently adopted experimental test methods, it is difficult to measure the internal parameters of the material, which in turn makes it difficult to understand the material's performance. A new method is introduced herein to measure the internal strain of braided composite materials using co-braided fiber-optic sensors. Two kinds of fiber-optic sensors are co-braided into 3D braided composites to measure internal strain: the Fabry-Perot (F-P) fiber-optic sensor and the polarimetric fiber-optic sensor. Experiments are conducted to measure internal strain under tension, bending and thermal environments in 3D carbon-fiber braided composite specimens, both locally and globally. Experimental results show that multiple fiber-optic sensors can be braided into 3D braided composites to measure the internal parameters, providing a more accurate measurement method and leading to a better understanding of these materials.

  6. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two-dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so as to traverse a field of view, and receive circuitry (306) configured to receive a two-dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells, etc. A beamformer (312) is configured to beamform the echoes, and a velocity processor (314) is configured to separately determine a depth velocity component, a transverse velocity component and an elevation velocity component, wherein the velocity components are determined based on the same transmitted ultrasound signal...

  7. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    Calafiore, Giuseppe; Allen, Frances I; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-01-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three dimensional structure achieved by direct Nanoimprint Lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable a high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted and optically characterized. Scanning electron microscopy and optical measurements confirmed the excellent lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enabl...

  8. Monocular accommodation condition in 3D display types through geometrical optics

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are considered for a detailed calculation of the satisfaction level of monocular accommodation by means of geometrical optics. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high satisfaction level for monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (a 3D effect) with a monocular MF display is discussed.
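
    A minimal geometrical-optics sketch of the accommodation conflict being quantified: defocus is the vergence difference between the stimulus distance and the distance the eye accommodates to, scaled by pupil diameter in a thin-lens approximation. The distances and pupil size are illustrative values, not the paper's.

```python
def defocus_blur(d_stim, d_accom, pupil_mm=4.0):
    """Relative retinal blur for a stimulus at distance d_stim (m) while the
    eye accommodates to d_accom (m), thin-lens approximation.

    Defocus in diopters is the difference of the two vergences (1/distance);
    the blur circle grows linearly with pupil diameter.
    """
    defocus_diopters = abs(1.0 / d_stim - 1.0 / d_accom)
    return defocus_diopters * pupil_mm

# A viewer converging on a virtual object at 0.5 m while the display
# (and hence the eye's focus) sits at 1 m experiences 1 D of defocus.
print(defocus_blur(0.5, 1.0))  # -> 4.0
```

A multi-focus display reduces this quantity by placing image content near the accommodation distance, which is why the MF condition scores highest in the paper's calculation.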

  9. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  10. D3D augmented reality imaging system: proof of concept in mammography

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3 1Department of Radiology, Stanford University, Palo Alto, CA, 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA, 3Department of Radiology, Fort Benning, Columbus, GA, USA Purpose: The purpose of this article is to present images of simulated breast microcalcifications and assess their pattern with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data were used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, a 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  11. A high-level 3D visualization API for Java and ImageJ

    Longair Mark

    2010-05-01

    Full Text Available Abstract Background Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  12. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography.

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-08-01

    A bone tissue phantom prototype that allows testing of optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed using a 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be fabricated. The absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, employing a laser-Doppler flowmeter, is also presented.

  13. Software for browsing sectioned images of a dog body and generating a 3D model.

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images, together with a portable document format (PDF) file that includes three-dimensional (3D) models, of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that an investigator could perform without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software, in which structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, they were exported into a PDF file, in which the 3D models can be manipulated freely. The browsing software and PDF file are suitable for study by students, lectures by teachers, and training of clinicians, and will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  14. An adaptive 3-D discrete cosine transform coder for medical image compression.

    Tai, S C; Wu, Y G; Lin, C W

    2000-09-01

    In this communication, a new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on local energy magnitude is used to assign subblocks of the image to different energy levels. Subblocks with the same energy level are then gathered to form a 3-D cuboid, and a 3-D DCT is applied to compress each cuboid individually. Simulation results show that the coder achieves bit rates below 0.25 bit per pixel even at compression ratios higher than 35. Compared with JPEG and other strategies, the proposed method achieves better decoded image quality.
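
    The transform step of such a coder can be sketched as a separable 3-D DCT built from the orthonormal 1-D DCT-II matrix (a minimal illustration, not the paper's full coder; the 90th-percentile coefficient threshold is an arbitrary example of the quantization idea):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix; its transpose is the inverse transform.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def dct3(cube):
    # Separable 3-D DCT: apply the 1-D transform along each axis in turn.
    for axis in range(3):
        C = dct_matrix(cube.shape[axis])
        cube = np.moveaxis(np.tensordot(C, np.moveaxis(cube, axis, 0), axes=1), 0, axis)
    return cube

def idct3(cube):
    # Inverse: apply the transposed (inverse) matrix along each axis.
    for axis in range(3):
        C = dct_matrix(cube.shape[axis]).T
        cube = np.moveaxis(np.tensordot(C, np.moveaxis(cube, axis, 0), axes=1), 0, axis)
    return cube

rng = np.random.default_rng(0)
cuboid = rng.random((8, 8, 8))            # stand-in for a stack of same-energy subblocks
coeffs = dct3(cuboid)
# Crude compression: zero out all but the largest 10% of coefficients.
thresh = np.quantile(np.abs(coeffs), 0.9)
coeffs[np.abs(coeffs) < thresh] = 0.0
restored = idct3(coeffs)
```

    Because the basis is orthonormal, the round trip without thresholding is exact; the energy-based grouping in the paper is what makes aggressive thresholding tolerable.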

  15. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  16. Resolution doubling in 3D-STORM imaging through improved buffers.

    Nicolas Olivier

    Full Text Available Super-resolution imaging methods have revolutionized fluorescence microscopy by revealing the nanoscale organization of labeled proteins. In particular, single-molecule methods such as Stochastic Optical Reconstruction Microscopy (STORM) provide resolutions down to a few tens of nanometers by exploiting the cycling of dyes between fluorescent and non-fluorescent states to obtain a sparse population of emitters and precisely localizing them individually. This cycling of dyes is commonly induced by adding different chemicals, which are combined to create a STORM buffer. Despite their importance, the composition of these buffers has scarcely evolved since they were first introduced, fundamentally limiting what can be resolved with STORM. By identifying a new chemical suitable for STORM and optimizing the buffer composition for Alexa-647, we significantly increased the number of photons emitted per cycle by each dye, providing a simple means of enhancing the resolution of STORM independently of the optical setup used. Using this buffer to perform 3D-STORM on biological samples, we obtained images with better than 10 nm lateral and 30 nm axial resolution.
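
    The link between photons per switching cycle and resolution follows the leading-order single-molecule localization-precision scaling, sigma divided by the square root of the photon count (a back-of-the-envelope sketch; the numbers are illustrative, not from the paper):

```python
import math

def localization_precision(psf_sigma_nm, photons):
    # Leading-order precision of a single-molecule centroid fit:
    # PSF width divided by the square root of the collected photons.
    return psf_sigma_nm / math.sqrt(photons)

# Quadrupling the photon yield per cycle halves the localization error,
# which is why a brighter buffer directly improves STORM resolution.
base = localization_precision(250.0, 1000)
improved = localization_precision(250.0, 4000)
```

    Real fits also include pixelation and background terms, so this is an optimistic lower bound.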

  17. Rapid Prototyping across the Spectrum: RF to Optical 3D Electromagnetic Structures

    2015-11-17

    AFRL-RW-EG-TP-2015-002: Rapid Prototyping across the Spectrum: RF to Optical 3D Electromagnetic Structures. Jeffery W. Allen, Monica S. Allen, Brett

  18. 3D Modeling of Transformer Substation Based on Mapping and 2D Images

    Lei Sun

    2016-01-01

    Full Text Available A new method for building 3D models of transformer substations based on mapping and 2D images is proposed in this paper. The method segments equipment objects in 2D images using a k-means algorithm that determines cluster centers dynamically, so that differently shaped objects can be segmented. It then extracts feature parameters from the segmented objects using the FFT, retrieves similar objects from 3D databases, and builds the 3D models by computing the mapping data. The proposed method avoids the complex data collection and heavy workload of a 3D laser scanner. The example analysis shows that the method can build coarse 3D models efficiently, meeting the requirements for hazardous-area classification and construction representations of transformer substations.
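
    One common way to realize the FFT feature-parameter step is with Fourier descriptors of a segmented object's boundary (a sketch of one standard choice, not necessarily the paper's exact features; the invariances are what make database retrieval work):

```python
import numpy as np

def fft_shape_descriptor(contour, n_coeffs=8):
    # contour: boundary points encoded as complex numbers x + iy.
    # Drop the DC term (translation invariance), keep coefficient magnitudes
    # (rotation / start-point invariance), normalise by the first harmonic (scale).
    z = np.asarray(contour, dtype=complex)
    F = np.fft.fft(z)
    mags = np.abs(F[1:n_coeffs + 1])
    return mags / mags[0] if mags[0] else mags

# The descriptor is unchanged by translating and scaling the contour,
# so the same shape photographed at different positions matches itself.
circle = np.exp(2j * np.pi * np.arange(64) / 64)
d1 = fft_shape_descriptor(circle)
d2 = fft_shape_descriptor(3.0 * circle + (5.0 + 2.0j))
```

    Matching against the 3D database then reduces to nearest-neighbour search over these descriptor vectors.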

  19. High-throughput imaging: Focusing in on drug discovery in 3D.

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, limited HTI/HCS with large compound libraries have been reported. Nonetheless, 3D HTI instrumentation technology is advancing and this technology is now on the verge of allowing for 3D HCS of thousands of samples. This review focuses on the state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening.

  20. 3-D printed sensing patches with embedded polymer optical fibre Bragg gratings

    Zubel, Michal G.; Sugden, Kate; Saez-Rodriguez, D.

    2016-01-01

    The first demonstration of a polymer optical fibre Bragg grating (POFBG) embedded in a 3-D printed structure is reported. Its cyclic strain performance and temperature characteristics are examined and discussed. The sensing patch has a repeatable strain sensitivity of 0.38 pm/με. Its...

  1. GEOMETRIC OPTICS FOR 3D-HARTREE-TYPE EQUATION WITH COULOMB POTENTIAL

    2006-01-01

    This article considers a family of 3D Hartree-type equations with Coulomb potential |x|⁻¹ whose initial data oscillate so that a caustic appears. In the linear geometric optics case, using Lagrangian integrals, a uniform description of the solution both outside and near the caustic is obtained.

  3. Fiber Optic 3-D Space Piezoelectric Accelerometer and its Antinoise Technology

    2001-01-01

    The mechanical structure of the piezoelectric accelerometer is designed, and the operation equations for the X-, Y-, and Z-axes are derived. Test results for the 3-D frequency response are given. Noise disturbances are effectively eliminated by using fiber-optic transmission and synchronous detection.

  4. Rewritable 3D bit optical data storage in a PMMA-based photorefractive polymer

    Day, D.; Gu, M. [Swinburne Univ. of Tech., Hawthorn, Vic. (Australia). Centre for Micro-Photonics; Smallridge, A. [Victoria Univ., Melbourne (Australia). School of Life Sciences and Technology

    2001-07-04

    A cheap, compact, and rewritable high-density optical data storage system for CD and DVD applications is presented by the authors. Continuous-wave illumination under two-photon excitation in a new poly(methylmethacrylate) (PMMA) based photorefractive polymer allows 3D bit storage of sub-Tbyte data. (orig.)

  5. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin; Di Gregorio, Silvana; Malouf, Jorge; Romera, Jordi; Barquero, Luis Miguel Del Rio

    2017-01-01

    The 3D distribution of cortical and trabecular bone mass in the proximal femur is a critical component in determining fracture resistance that is not taken into account in routine clinical Dual-energy X-ray Absorptiometry (DXA) examinations. In this paper, a statistical shape and appearance model together with a 3D-2D registration approach are used to model the femoral shape and bone density distribution in 3D from an anteroposterior DXA projection. A model-based algorithm is subsequently used to segment the cortex and build a 3D map of the cortical thickness and density. Measurements characterising the geometry and density distribution were computed for various regions of interest in both cortical and trabecular compartments. Models and measurements provided by the "3D-DXA" software algorithm were evaluated using a database of 157 study subjects, by comparing 3D-DXA analyses (using DXA scanners from three manufacturers) with measurements performed by Quantitative Computed Tomography (QCT). The mean point-to-surface distance between 3D-DXA and QCT femoral shapes was 0.93 mm. The mean absolute errors between cortical thickness and density estimates measured by 3D-DXA and QCT were 0.33 mm and 72 mg/cm³. Correlation coefficients (R) between the 3D-DXA and QCT measurements were 0.86, 0.93, and 0.95 for the volumetric bone mineral density in the trabecular, cortical, and integral compartments respectively, and 0.91 for the mean cortical thickness. 3D-DXA provides a detailed analysis of the proximal femur, including a separate assessment of the cortical layer and trabecular macrostructure, which could potentially improve osteoporosis management while maintaining DXA as the standard routine modality.

  6. Wide area 2D/3D imaging development, analysis and applications

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  7. A novel modeling method for manufacturing hearing aid using 3D medical images

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data within the 3D-printer-based ear shell manufacturing workflow. In the experiment, a 3D external auditory meatus was extracted by using critical values in the DICOM volume images, and the modeling surface structures were compared in standard STL (STereoLithography) files that a 3D printer can recognize. In this 3D modeling method, a conventional ear model was prepared, and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures was prepared using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers recognize, ultimately enabling the hearing aid ear shell shape to be printed.

  8. A 2D and 3D electrical impedance tomography imaging using experimental data

    Shulga, Dmitry

    2012-01-01

    In this paper, the model, method and results of 2D and 3D conductivity distribution imaging using experimental data are described. A 16-electrode prototype of a computed tomography system, together with special Matlab and Java software, was used to perform the imaging procedure. The developed system can be used for experimental conductivity distribution imaging and further research work.

  9. The application of camera calibration in range-gated 3D imaging technology

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. It was not until the beginning of this century, as the hardware technology matured, that the technology developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal structure parameters and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
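
    The timing relation at the core of range gating is simple: with a delay t between the laser pulse and the gate, the imaged slice starts at range c·t/2 (the factor of two accounts for the round trip). A minimal sketch with illustrative values:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_to_range(delay_s):
    # Light travels to the target and back, so range = c * t / 2.
    return C * delay_s / 2.0

def gate_to_slice(delay_s, gate_width_s):
    # A gate of finite width images a depth slice [near, far].
    return gate_delay_to_range(delay_s), gate_delay_to_range(delay_s + gate_width_s)

# A 1 us delay with a 20 ns gate images a roughly 3 m thick slice near 150 m.
near, far = gate_to_slice(1e-6, 20e-9)
```

    Sweeping the delay steps the slice through depth, which is how the stack of slice images yields per-pixel distance information.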

  10. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  11. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama: the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.

  12. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, a 3D cursor, and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  13. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, an entity (physical) tooth model manipulated to determine the optimum occlusal position, with simultaneously displayed real-time 3D-CT skeletal images (the 3D image display portion), the mandibular position and posture that improve skeletal morphology and occlusal condition can be determined. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  14. 3D optical vortices generated by micro-optical elements and its novel applications

    BU J.; LIN J.; K. J. Moh; B. P. S. Ahluwalia; CHEN H. L.; PENG X.; NIU H. B.; YUAN X.C.

    2007-01-01

    In this paper we report on recent developments in optical vortices generated by micro-optical elements and on applications of optical vortices, including optical manipulation, radial polarization, and secure free-space optical communication.

  15. A new technique of recognition for coded targets in optical 3D measurement

    Guo, Changye; Cheng, Xiaosheng; Cui, Haihua; Dai, Ning; Weng, Jinping

    2014-11-01

    A new technique for recognizing coded targets in optical 3D-measurement applications is proposed in this paper. Traditionally, point cloud registration is based on homologous features such as curvature, which is time-consuming and unreliable. To address this, we paste coded targets onto the surface of the object to be measured to improve target location and to establish accurate correspondences among multi-source images. Circular coded targets are used, and an algorithm for automatically detecting them is proposed. This algorithm extracts targets with strongly bimodal histogram features from a complex background and filters noise according to size, shape and intensity. The coded targets are then identified by their ring codes: after inversely affine-transforming the samples around the circle, foreground and background are set to 1 and 0 respectively to constitute a binary number, which is cyclically shifted one bit at a time; the minimum decimal value over all shifts is taken as the target's code. In this 3D-measurement application, we build a mutual relationship between different viewpoints containing three or more coded targets with different codes. Experiments show that the method efficiently obtains global surface data of the object to be measured and is robust to projection angles and noise.
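
    The rotation-invariant ring-code computation described above can be sketched directly (the bit pattern is an illustrative example, not one of the paper's actual codes):

```python
def ring_code(bits):
    # bits: binary samples around the target's code ring
    # (1 = foreground, 0 = background). The code must not depend on where
    # sampling started, so take the minimum decimal value over all cyclic shifts.
    n = len(bits)
    values = []
    for s in range(n):
        rotated = bits[s:] + bits[:s]
        values.append(int("".join(map(str, rotated)), 2))
    return min(values)

# Any rotation of the same ring yields the same code,
# so the target is recognized regardless of its orientation in the image.
code = ring_code([1, 0, 1, 1, 0, 0, 0, 0])   # -> 11, i.e. "00001011"
```

    This is what makes the codes usable across viewpoints: the camera's roll angle only rotates the ring, which the minimum-over-shifts normalization cancels.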

  16. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  17. 3D soft tissue imaging with a mobile C-arm.

    Ritter, Dieter; Orman, Jasmina; Schmidgunst, Christian; Graumann, Rainer

    2007-03-01

    We introduce a clinical prototype for 3D soft tissue imaging to support surgical or interventional procedures based on a mobile C-arm. An overview of required methods and materials is followed by first clinical images of animals and human patients including dosimetry. The mobility and flexibility of 3D C-arms gives free access to the patient and therefore avoids relocation of the patient between imaging and surgical intervention. Image fusion with diagnostic data (MRI, CT, PET) is demonstrated and promising applications for brachytherapy, RFTT and others are discussed.

  18. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

    3D functional imaging of neuronal activity in entire organisms at single-cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single-neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.

  19. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, systems for 3D capture of archives in museums and galleries have been increasingly required. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and for virtual museums on the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording a 3D object. Five-band images of the object are taken under seven different illumination angles. The five-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and gonio-photometric properties.
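
    The Phong part of the reflection analysis can be sketched as follows (a generic Phong model with illustrative coefficients, not the paper's fitted parameters):

```python
import numpy as np

def phong_intensity(normal, light, view, kd, ks, shininess):
    # Dichromatic-style split: body (diffuse) term plus interface (specular) term.
    n, l, v = (np.asarray(x, float) / np.linalg.norm(x) for x in (normal, light, view))
    diffuse = kd * max(float(np.dot(n, l)), 0.0)
    r = 2.0 * np.dot(n, l) * n - l        # mirror reflection of the light direction
    specular = ks * max(float(np.dot(r, v)), 0.0) ** shininess
    return diffuse + specular
```

    Fitting kd, ks and the shininess exponent per surface point from the seven illumination angles is, in essence, what the gonio-photometric analysis recovers.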

  20. Intelligent multisensor concept for image-guided 3D object measurement with scanning laser radar

    Weber, Juergen

    1995-08-01

    This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure and recognize 3D objects at distances up to 10 m with high flexibility. Such applications might be the surveillance of security areas or container storages, as well as navigation and collision avoidance for autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar ranger mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray-scale intensity data as well as absolute 3D information. To improve system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. Camera guidance of the laser scanner is useful because the acquisition of spatial information is relatively slow compared to the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting the edges of objects using various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks and the multisensor integration. The paper describes the details of the multisensor concept and the process of sensor guidance and 3D measurement, and presents some practical results of our research.

  1. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to the clinical diagnosis of breast cancer. However, many studies segment only the mass of interest rather than all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was applied to a database of 21 cases of whole-breast ultrasound. Experimental results show that the method not only distinguishes fat and non-fat tissues correctly but also performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency, with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, using the overlap ratio, gives an average similarity of 74.54%, consistent with values reported for MRI brain segmentations. Thus, the proposed method exhibits great potential as an automated approach to segmenting 3D whole-breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed-of-sound aberrations and assist in density-based prognosis of breast cancer.
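
    The record above does not specify the classifier behind its three-tissue labeling. As a minimal, hedged illustration of the general idea of intensity-based three-class segmentation, the sketch below clusters voxel intensities with a 1-D k-means; the function name, initialization, and parameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def kmeans_3class(volume, n_iter=20):
    """Cluster voxel intensities into three classes by 1-D k-means.

    Illustrative stand-in for the paper's (unspecified) classifier:
    ultrasound voxels grouped into three intensity classes, e.g.
    cyst/mass (dark), fatty, and fibro-glandular tissue.
    """
    v = np.asarray(volume, float).ravel()
    centers = np.quantile(v, [0.1, 0.5, 0.9])    # spread the initial centers
    for _ in range(n_iter):
        # assign each voxel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centers[k] = v[labels == k].mean()
    return labels.reshape(np.shape(volume)), centers
```

    A practical breast-ultrasound pipeline would add speckle filtering and spatial regularization; this shows only the clustering step.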

  2. Performance of an improved first generation optical CT scanner for 3D dosimetry.

    Qian, Xin; Adamovics, John; Wuu, Cheng-Shie

    2013-12-21

    Performance analysis of a modified 3D dosimetry optical scanner based on the first-generation optical CT scanner OCTOPUS is presented. The system consists of PRESAGE dosimeters, the modified 3D scanner, and a newly developed in-house control panel written in LabVIEW, which provides more flexibility in optimizing the mechanical control and data acquisition technique. The total scanning time has been reduced significantly, from the initial 8 h to ∼2 h, by using the modified scanner. The functional performance of the modified scanner has been evaluated in terms of the mechanical uncertainty of the data acquisition process. Optical density distributions from the modified scanner, the OCTOPUS, and the treatment planning system have been compared. It has been demonstrated that the agreement between the modified scanner and treatment plans is comparable with that between the OCTOPUS and treatment plans.

  3. Generation of nearly 3D-unpolarized evanescent optical near fields using total internal reflection.

    Hassinen, Timo; Popov, Sergei; Friberg, Ari T; Setälä, Tero

    2016-07-01

    We analyze the time-domain partial polarization of optical fields composed of two evanescent waves created in total internal reflection by random electromagnetic beams with orthogonal planes of incidence. We show that such a two-beam configuration makes it possible to generate nearly unpolarized, genuine three-component (3D) near fields. This result complements earlier studies on spectral polarization, which state that at least three symmetrically propagating beams are required to produce a 3D-unpolarized near field. The degree of polarization of the near field can be controlled by adjusting the polarization states and mutual correlation of the incident beams.

  4. 3D-MSCT imaging of bullet trajectory in 3D crime scene reconstruction: two case reports.

    Colard, T; Delannoy, Y; Bresson, F; Marechal, C; Raul, J S; Hedouin, V

    2013-11-01

    Postmortem investigations are increasingly assisted by three-dimensional multi-slice computed tomography (3D-MSCT), which has become more available to forensic pathologists over the past 20 years. In cases of ballistic wounds, 3D-MSCT can provide an accurate description of the bullet location and bone fractures and, more interestingly, a clear visual of the intracorporeal trajectory (bullet track). These forensic medical examinations can be combined with tridimensional bullet trajectory reconstructions created by forensic ballistic experts. These case reports present the implementation of tridimensional methods and the results of 3D crime scene reconstruction in two cases. The authors highlight the value of collaboration between police forensic experts and forensic medicine institutes through the incorporation of 3D-MSCT data into a crime scene reconstruction, which is of great interest in forensic science as a clear visual communication tool between experts and the court.

  5. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye

    2012-07-01

    Full Text Available An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is used to generate chaotic orbits that permute the pixel positions, and pseudo-random gray-value sequences that change the pixel gray values. The encryption scheme is then applied to a secret image, which is embedded in a host image. The encrypted secret image and the host image are transformed by the wavelet transform and merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image-processing attacks, demonstrating strong robustness against a variety of attacks.
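
    The abstract does not give the 3D sawtooth map's equations, so the sketch below substitutes a 1-D sawtooth (Bernoulli-type) map to illustrate the two roles a chaotic orbit plays in such schemes: permuting pixel positions and masking gray values. The map, the key values `x0` and `a`, and the function names are illustrative assumptions:

```python
import numpy as np

def sawtooth_orbit(x0, a, n):
    """Iterate a 1-D sawtooth map x -> a*x mod 1.

    Stand-in for the paper's 3-D sawtooth map, whose exact form is
    not given in the abstract.
    """
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = (a * x) % 1.0
        xs[i] = x
    return xs

def encrypt(img, x0=0.37, a=3.0):
    flat = img.ravel()
    orbit = sawtooth_orbit(x0, a, flat.size)
    perm = np.argsort(orbit)                 # chaotic orbit -> pixel permutation
    keystream = (orbit * 256).astype(np.uint8)
    cipher = flat[perm] ^ keystream          # permute positions, then mask gray values
    return cipher.reshape(img.shape), perm, keystream

def decrypt(cipher, perm, keystream):
    flat = cipher.ravel() ^ keystream        # unmask gray values
    plain = np.empty_like(flat)
    plain[perm] = flat                       # undo the permutation
    return plain.reshape(cipher.shape)
```

    In the paper the encrypted result is additionally hidden in a host image via the discrete wavelet transform; that embedding step is omitted here.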

  6. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) is a modern method that creates CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose, and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for the creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). These models can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or fused with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they are now routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in both the creation and use of these models.

  7. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion-microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be clearly discerned, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  8. 3D city models completion by fusing lidar and image data

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on fusing data from both methods in order to enhance or complete them. In particular, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new, higher-resolution aerial image acquisition (including both vertical and oblique imagery) carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, then backprojected onto the aerial images and matched with 2D image features located in the vicinity of the backprojected 3D points. These points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.

  9. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been growing demand for systems that capture 3D records of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet-based virtual museum accessible via the World Wide Web. To achieve our goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model acquired by a 3D digitizer are also presented.
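
    The specular analysis in records like the one above follows the Phong model. As a minimal sketch, a scalar Phong reflectance (Lambertian diffuse term plus specular lobe) can be evaluated as below; the weights `kd`, `ks`, and `shininess` are illustrative, not values from the paper:

```python
import numpy as np

def phong_reflectance(n, l, v, kd, ks, shininess):
    """Scalar Phong reflectance: diffuse term plus specular lobe.

    n, l, v are unit surface-normal, light, and view vectors; kd and ks
    are illustrative diffuse/specular weights (assumptions, not values
    from the record above).
    """
    n, l, v = (np.asarray(a, float) for a in (n, l, v))
    diffuse = kd * max(n @ l, 0.0)
    r = 2.0 * (n @ l) * n - l                # mirror reflection of the light vector
    specular = ks * max(r @ v, 0.0) ** shininess
    return diffuse + specular
```

    Fitting kd, ks, and the shininess exponent per pixel from images taken under several illumination angles is what yields the gonio-photometric property maps described in the abstract.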

  10. Image Sequence Fusion and Denoising Based on 3D Shearlet Transform

    Liang Xu

    2014-01-01

    We propose a novel algorithm for simultaneous image sequence fusion and denoising in the 3D shearlet transform domain. In general, existing image fusion methods only combine the important information from the source images and do not deal with artifacts; if the source images contain noise, that noise may be transferred into the fused image together with useful pixels. In the 3D shearlet transform domain, we first apply a recursive filter to the high-pass subbands to obtain denoised high-pass coefficients. The high-pass subbands are then combined using a maximum-selection fusion rule based on a 3D pulse-coupled neural network (PCNN), while the low-pass subband is fused using a weighted-sum rule. Experimental results demonstrate that the proposed algorithm yields encouraging results.
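
    The two fusion rules named above can be sketched independently of the shearlet transform itself. In the toy version below, the PCNN-based selection is replaced by a plain max-abs selection (an assumption for illustration), operating on already-transformed subband arrays:

```python
import numpy as np

def fuse_subbands(low_a, low_b, highs_a, highs_b, w=0.5):
    """Fuse transform-domain subbands of two source images.

    Simplified stand-in for the paper's rules: the low-pass band is a
    weighted sum; each high-pass band keeps the coefficient with the
    larger magnitude (plain max-abs selection replaces the PCNN-based
    rule for illustration).
    """
    low_f = w * low_a + (1.0 - w) * low_b
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
               for ha, hb in zip(highs_a, highs_b)]
    return low_f, highs_f
```

    In the actual algorithm the high-pass coefficients would first be denoised by the recursive filter before this selection step.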

  11. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three-dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer-generated geometries. It is ideal for quickly producing customized, low-cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom and a customized high-resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two-level high-resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between the commercial and 3D printed Micro Deluxe phantoms, with less than 3% difference in CT-measured rod diameter, less than 5% difference in PET-measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) PET scan on a Siemens Inveon (Siemens Healthcare, Germany) scanner. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on both PET and MRI modalities. X-ray projection images of a 3D printed high-resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  12. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  13. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images, generated from different contour intervals, using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used are divided into three intervals: 1 m, 3 m, and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlay process. The Unity 3D game engine was used as the main development platform because it can deploy to different platforms. The clipped contour and UAV image data were processed and exported into web format using Unity 3D, then published to a web server to compare the effectiveness of the different 3D terrain datasets (contour data) draped with UAV images. Effectiveness is compared based on data size, loading time (during office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, and therefore help decision makers and planners in this field decide which contour interval is applicable for their task.

  14. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications have mainly focused on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound, and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map the intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  15. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computational resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and a humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that pixel-driven 3D weighting will become the dominant implementation in state-of-the-art volumetric CT scanners across clinical applications.

  16. Analysis, Modeling and Dynamic Optimization of 3D Time-of-Flight Imaging Systems

    Schmidt, Mirko

    2011-01-01

    The present thesis is concerned with the optimization of 3D Time-of-Flight (ToF) imaging systems. These novel cameras determine range images by actively illuminating a scene and measuring the time until the backscattered light is detected. Depth maps are constructed from multiple raw images. Usually two of such raw images are acquired simultaneously using special correlating sensors. This thesis covers four main contributions: A physical sensor model is presented which enables the analysis a...

  17. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step toward extending the model to 4D (3D + time) has also been taken. ICA is an effective tool for connective tissue disease detection in the presence of sparse data, using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification...

  18. Midsagittal plane extraction from brain images based on 3D SIFT.

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-21

    Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration, or parameterized surface matching to determine the fissure plane, our method is based on distinctive 3D SIFT features, with the fissure plane determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations, and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT feature pairs. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on the fly. The proposed method is evaluated on synthetic and in vivo datasets, of both normal and pathological cases, and validated by comparison with state-of-the-art methods. Experimental results demonstrate that our method achieves real-time performance with better accuracy, yielding an average yaw angle error below 0.91° and an average roll angle error no greater than 0.89°.
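
    The robust plane-regression step mentioned above can be illustrated with a small sampling-based least-median-of-squares fit. The trial count is an illustrative assumption, and the input here is a generic 3-D point set rather than matched SIFT feature pairs:

```python
import numpy as np

def lmeds_plane(points, n_trials=200, seed=0):
    """Fit a plane to 3-D points by least-median-of-squares.

    Sketch of robust plane regression: sample three-point candidate
    planes and keep the one minimizing the median squared point-plane
    distance, so up to ~50% outliers are tolerated.
    """
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    best_med, best_plane = np.inf, None
    for _ in range(n_trials):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                          # degenerate (collinear) sample
        n = n / norm
        d2 = ((pts - p0) @ n) ** 2            # squared point-plane distances
        med = np.median(d2)
        if med < best_med:
            best_med, best_plane = med, (n, -n @ p0)   # plane: n·x + offset = 0
    return best_plane
```

    In the paper's setting, the points would be midpoints of mirrored 3D SIFT feature pairs, so the fitted plane approximates the fissure plane.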

  19. High-speed 3D digital image correlation vibration measurement: Recent advancements and noted limitations

    Beberniss, Timothy J.; Ehrhardt, David A.

    2017-03-01

    A review of extensive studies on the feasibility and practicality of utilizing high-speed three-dimensional digital image correlation (3D-DIC) for various random vibration measurement applications is presented. Demonstrated capabilities include finite element model updating using full-field 3D-DIC static displacements; modal-survey natural frequencies, damping, and mode shapes from 3D-DIC baselined against laser Doppler vibrometry (LDV); a comparison between foil strain gauge and 3D-DIC strain; and, finally, a unique application to a high-speed wind-tunnel fluid-structure interaction study. Results show good agreement between 3D-DIC and more traditional vibration measurement techniques. 3D-DIC vibration measurement is not without its limitations, however, which are also identified and explored in this study. The out-of-plane sensitivity of 3D-DIC is orders of magnitude lower than that of LDV, making higher-frequency displacements difficult to sense. Furthermore, the digital cameras used to capture the DIC images have no filter to eliminate temporal aliasing of the digitized signal. Ultimately, DIC is demonstrated as a valid alternative means of measuring structural vibrations, with one unique application achieving success where more traditional methods would fail.

  20. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike conventional imaging approaches that use a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, acquires information through an alternative sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128 × 128 pixel 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
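
    In the noise-free 2D case, Hadamard-pattern reconstruction for a single-pixel camera reduces to an orthogonal inverse. The sketch below simulates the bucket detector with a callback; the depth-resolving and differential-measurement details of the actual system are omitted, and the pattern set (±1 Hadamard rows) is an illustrative assumption:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def single_pixel_reconstruct(measure, n):
    """Recover an n-pixel scene from single-pixel measurements.

    `measure(pattern)` returns the bucket-detector signal for one
    projected +/-1 Hadamard pattern; one measurement per pattern.
    """
    h = hadamard(n)
    y = np.array([measure(row) for row in h])
    return (h.T @ y) / n          # H is orthogonal up to scale: H^T H = n I
```

    Replacing the ideal detector signal with time-resolved photodiode traces, as in the paper, turns each reconstructed pixel into a depth histogram rather than a single intensity.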

  1. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents morphological and biochemical findings on human downy arm hairs obtained using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified, including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  2. QWIP focal plane array theoretical model of 3-D imaging LADAR system

    El Mashade, Mohamed Bakry; AbouElez, Ahmed Elsayed

    2016-01-01

    The aim of this research is to develop a model for a direct-detection three-dimensional (3-D) imaging LADAR system using a Quantum Well Infrared Photodetector (QWIP) Focal Plane Array (FPA). This model is employed to study how to add 3-D imaging capability to existing conventional thermal imaging systems of the same basic form, which are sensitive to the 3–5 μm (mid-wavelength infrared, MWIR) or 8–12 μm (long-wavelength infrared, LWIR) spectral bands. The integrated signal photoelectrons in cas...

  3. BER Analysis Using Beat Probability Method of 3D Optical CDMA Networks with Double Balanced Detection

    Chih-Ta Yen

    2015-01-01

    This study proposes novel three-dimensional (3D) wavelength/time/spatial code matrices for optical code-division multiple-access (OCDMA) networks with a double balanced detection mechanism. We construct 3D carrier-hopping prime/modified prime (CHP/MP) codes by extending a two-dimensional (2D) CHP code integrated with a one-dimensional (1D) MP code. The corresponding coder/decoder pairs are based on fiber Bragg gratings (FBGs) and tunable optical delay lines integrated with splitters/combiners. System performance is enhanced by the low cross-correlation properties of the 3D code, which is designed to avoid the beat-noise phenomenon. The CHP/MP code cardinality increases significantly compared with the CHP code at the same bit error rate (BER). The results indicate that the 3D code method can enhance system performance because both the beating terms and multiple-access interference (MAI) are reduced by the double balanced detection mechanism. Additionally, requirements on the optical components can be relaxed in high-transmission scenarios.

  4. Sample Preparation Strategies for Mass Spectrometry Imaging of 3D Cell Culture Models

    Ahlf Wheatcraft, Dorothy R.; Liu, Xin; Hummon, Amanda B.

    2014-01-01

    Three-dimensional cell cultures are attractive models for biological research. They combine the flexibility and cost-effectiveness of cell culture with some of the spatial and molecular complexity of tissue. For example, many cell lines form 3D structures given appropriate in vitro conditions. Colon cancer cell lines form 3D cell culture spheroids, in vitro mimics of avascular tumor nodules. While immunohistochemistry and other classical imaging methods are popular for monitoring the distribu...

  5. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were developed later; and, recently, compressed-sensing-based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging systems using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested in a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating mean structural similarity (MSSIM) values.
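
    Of the reconstruction algorithms listed, ART has the simplest core: cyclic Kaczmarz projections onto the hyperplane defined by each measured ray. A dense toy version is sketched below; real DBT system matrices are sparse and far larger, and the relaxation parameter here is illustrative:

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps) for A x = b.

    Rows of A are projection rays, b the measured projections. Each
    update projects the current estimate onto the hyperplane of one
    ray equation; repeated sweeps converge for consistent systems.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

    The ART+TV variant mentioned above alternates such sweeps with a total-variation smoothing step on the current volume estimate.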

  6. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  7. 3-D MRI/CT fusion imaging of the lumbar spine

    Yamanaka, Yuki; Kamogawa, Junji; Misaki, Hiroshi; Kamada, Kazuo; Okuda, Shunsuke; Morino, Tadao; Ogata, Tadanori; Yamamoto, Haruyasu [Ehime University, Department of Bone and Joint Surgery, Toon-shi, Ehime (Japan); Katagi, Ryosuke; Kodama, Kazuaki [Katagi Neurological Surgery, Imabari-shi, Ehime (Japan)

    2010-03-15

    The objective was to demonstrate the feasibility of MRI/CT fusion imaging in depicting lumbar nerve root compromise. We combined 3-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging for the lumbar vertebral region successfully revealed the relationship between bone construction (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis (DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  8. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects, detected from monocular, rotating and zooming video images, in a 3D reference frame. To realize such a system, the recovery of 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesize-and-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.
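
    The hypothesize-and-verify framework described above follows the generic RANSAC pattern. The sketch below shows plain RANSAC on a toy 2D line-fitting problem (the paper's LR-RANSAC efficiency improvements are not reproduced); the thresholds, iteration count and synthetic data are all assumptions for illustration.

```python
import numpy as np

def ransac_line(points, n_iters=500, tol=0.05, seed=0):
    """Generic hypothesize-and-verify loop: sample a minimal set (two
    points), fit a 2D line, count inliers by perpendicular distance,
    and keep the hypothesis with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / norm        # unit normal of the line
        inliers = np.abs((points - p1) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(42)
xs = np.linspace(0.0, 1.0, 50)
line_pts = np.c_[xs, 2.0 * xs + 1e-3 * rng.standard_normal(50)]  # y = 2x + noise
outliers = rng.uniform(-2.0, 4.0, (20, 2))                       # gross outliers
mask = ransac_line(np.vstack([line_pts, outliers]))
```

    The consensus mask recovers essentially all of the 50 true line points despite the 20 gross outliers; LR-RANSAC-style methods keep this structure but reorder and prune the hypothesis tests.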

  9. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, where current devices are advanced but very expensive. This study presents a simple prototype of a 3D scanner with a very low investment cost. The prototype consists of a webcam, a rotating desk driven by a stepper motor and controlled by an Arduino UNO, and a line laser. The research is limited to objects whose radius is constant about the central (pivot) axis. Scanning is performed by imaging the object profile illuminated by the line laser; the profile is captured by the camera and processed on a computer using Octave software. At each image acquisition, the object on the rotating desk is rotated by a fixed angle, so that after one full turn multiple images covering all sides are obtained. The profiles of all the images are then extracted to obtain the digital object dimensions, which are calibrated against a length standard (gauge block). The dimensions are finally reconstructed digitally into a three-dimensional object. Validation against the original object dimensions is expressed as a percentage error: the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
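
    The turntable reconstruction described above amounts to rotating each extracted laser profile into a common 3D frame. A minimal sketch (in Python rather than the Octave used by the authors), with the profile data, angular step and height spacing assumed for illustration:

```python
import numpy as np

def profiles_to_cloud(profiles, angles_deg, z_step=1.0):
    """Each laser profile gives a radius r at height index k for one
    turntable angle; rotating (r, z) by that angle yields 3D points."""
    pts = []
    for radii, ang in zip(profiles, angles_deg):
        theta = np.deg2rad(ang)
        for k, r in enumerate(radii):
            pts.append((r * np.cos(theta), r * np.sin(theta), k * z_step))
    return np.array(pts)

# A cylinder of radius 10 mm sampled at 8 turntable steps of 45 degrees,
# with 5 height samples per profile.
angles = np.arange(0, 360, 45)
profiles = [[10.0] * 5 for _ in angles]
cloud = profiles_to_cloud(profiles, angles)
```

    In practice the radii come from triangulating the laser stripe in each camera image, and the gauge-block calibration fixes the millimetres-per-pixel scale before this rotation step.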

  10. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach to three-dimensional face recognition based on curvature maps extracted from range images. Four types of curvature maps are considered: Gaussian, Mean, Maximum and Minimum. These curvature maps are used as features for 3D face recognition. The dimension of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique: of the three computed SVD components, the non-negative singular values (the 'S' matrix) are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed, one for the Mean and Maximum curvature pair and another for the Gaussian and Mean curvature pair, and their recognition rates are compared. The automated 3D face recognition system is evaluated in several scenarios: frontal pose with expression and illumination variation, frontal faces together with registered faces, registered faces only, and faces registered from different pose orientations about the X, Y and Z axes. The 3D face images used in this work are taken from the FRAV3D database. Posed 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, after which curvature mapping is applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
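
    The four curvature maps named above follow from the standard Monge-patch formulas for a range image z(x, y). A minimal numpy sketch, with finite-difference derivatives, unit pixel spacing and a spherical test surface all assumed for illustration:

```python
import numpy as np

def curvature_maps(z):
    """Gaussian (K), mean (H) and principal (k_max, k_min) curvature maps
    of a range image z(x, y), via finite-difference derivatives and the
    Monge-patch curvature formulas (unit pixel spacing assumed)."""
    zy, zx = np.gradient(z)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
         + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    return K, H, H + disc, H - disc

# Spherical cap of radius R: K should be close to 1/R^2 at the apex.
R = 100.0
y, x = np.mgrid[-20:21, -20:21].astype(float)
z = np.sqrt(R ** 2 - x ** 2 - y ** 2)
K, H, kmax, kmin = curvature_maps(z)
```

    On a real range face the four maps would then be vectorized and passed to SVD to obtain the ranked singular-value features described in the abstract.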

  11. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Scott Mark

    2005-03-01

    Background: Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results: We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. Conclusion: We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  12. 3D terahertz synthetic aperture imaging of objects with arbitrary boundaries

    Kniffin, G. P.; Zurk, L. M.; Schecklman, S.; Henry, S. C.

    2013-09-01

    Terahertz (THz) imaging has shown promise for nondestructive evaluation (NDE) of a wide variety of manufactured products including integrated circuits and pharmaceutical tablets. Its ability to penetrate many non-polar dielectrics allows tomographic imaging of an object's 3D structure. In NDE applications, the material properties of the target(s) and background media are often well known a priori and the objective is to identify the presence and/or 3D location of structures or defects within. The authors' earlier work demonstrated the ability to produce accurate 3D images of conductive targets embedded within a high-density polyethylene (HDPE) background. That work assumed a priori knowledge of the refractive index of the HDPE as well as the physical location of the planar air-HDPE boundary. However, many objects of interest exhibit non-planar interfaces, such as varying degrees of curvature over the extent of the surface. Such irregular boundaries introduce refraction effects and other artifacts that distort 3D tomographic images. In this work, two reconstruction techniques are applied to THz synthetic aperture tomography: a holographic reconstruction method that accurately detects the 3D location of an object's irregular boundaries, and a split-step Fourier algorithm that corrects the artifacts introduced by the surface irregularities. The methods are demonstrated with measurements from a THz time-domain imaging system.

  13. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time-dependent changes during growth and in response to treatment. This work demonstrates the utility of DHM for quantifying changes in 3D structure over time and motivates the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
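
    The phase-to-thickness step described above follows the standard DHM relation t = lambda * dphi / (2 * pi * dn). The wavelength, refractive-index contrast and pixel size below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def phase_to_thickness(dphi, wavelength_nm, dn):
    """Unwrapped DHM phase (radians) -> physical thickness (nm),
    t = lambda * dphi / (2 * pi * dn), for index contrast dn
    between the nodule and the surrounding medium."""
    return wavelength_nm * dphi / (2 * np.pi * dn)

def nodule_volume_um3(thickness_nm, pixel_area_um2):
    """Integrate the thickness map over the field of view."""
    return thickness_nm.sum() * 1e-3 * pixel_area_um2

phi = np.full((10, 10), 2 * np.pi)              # one full fringe everywhere
t = phase_to_thickness(phi, wavelength_nm=633.0, dn=0.03)
vol_um3 = nodule_volume_um3(t, pixel_area_um2=0.25)
```

    Summing the thickness map over the nodule footprint is what allows the rapid volume estimates reported in the abstract, without any z-sectioning.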

  14. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  15. 3D texture analysis in renal cell carcinoma tissue image grading.

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray-level co-occurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
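
    One level of the 3D Haar decomposition behind the wavelet texture features above can be sketched as successive average/difference splits along each axis, giving eight sub-bands whose energies serve as features. The sub-band energy choice and the random test volume below are illustrative assumptions:

```python
import numpy as np

def haar3d_energies(vol):
    """One decomposition level of a 3D Haar wavelet transform:
    an average/difference split along each axis yields 8 sub-bands
    (LLL ... HHH); their energies serve as simple texture features."""
    def split(a, axis):
        even = a.take(np.arange(0, a.shape[axis], 2), axis)
        odd = a.take(np.arange(1, a.shape[axis], 2), axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
    bands = [vol]
    for axis in range(3):
        bands = [b for band in bands for b in split(band, axis)]
    return np.array([(b ** 2).sum() for b in bands])

rng = np.random.default_rng(1)
volume = rng.random((8, 8, 8))
feats = haar3d_energies(volume)   # 8 sub-band energy features
```

    Because the split is orthonormal, the eight sub-band energies sum exactly to the energy of the input volume, which makes the features easy to normalize before classification.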

  16. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Tae-Yun Kim

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray-level co-occurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  17. Non-contrast enhanced MR venography using 3D fresh blood imaging (FBI). Initial experience

    Yokoyama, Kenichi; Nitatori, Toshiaki; Inaoka, Sayuki; Takahara, Taro; Hachiya, Junichi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine

    2001-10-01

    This study examined the efficacy of 3D fresh blood imaging (FBI) in patients with venous disease from the iliac region to the lower extremity. Fourteen patients with venous disease were examined [8 deep venous thrombosis (DVT) and 6 varix] by 3D-FBI and 2D-TOF MRA. All FBI and 2D-TOF images were evaluated in terms of visualization of the disease and compared with conventional X-ray venography (CV). The total scan time of 3D-FBI ranged from 3 min 24 sec to 4 min 52 sec. 3D-FBI was positive in all 23 anatomical levels in which DVT was diagnosed by CV (100% sensitivity), as was 2D-TOF. The delineation of collateral veins was superior or equal to that of 2D-TOF. 3D-FBI allowed depiction of varices in five of six cases; however, in one case the evaluation was limited because the separation of arteries from veins was difficult. The 3D-FBI technique, which allows iliac-to-peripheral MR venography without contrast medium within a short acquisition time, is considered clinically useful. (author)

  18. Optical properties of 3d-ions in crystals spectroscopy and crystal field analysis

    Brik, Mikhail

    2013-01-01

    "Optical Properties of 3d-Ions in Crystals: Spectroscopy and Crystal Field Analysis" discusses spectral, vibronic and magnetic properties of 3d-ions in a wide range of crystals, used as active media for solid state lasers and potential candidates for this role. Crystal field calculations (including first-principles calculations of energy levels and absorption spectra) and their comparison with experimental spectra, the Jahn-Teller effect, analysis of vibronic spectra, materials science applications are systematically presented. The book is intended for researchers and graduate students in crystal spectroscopy, materials science and optical applications. Dr. N.M. Avram is an Emeritus Professor at the Physics Department, West University of Timisoara, Romania; Dr. M.G. Brik is a Professor at the Institute of Physics, University of Tartu, Estonia.

  19. Design and verification of diffractive optical elements for speckle generation of 3-D range sensors

    Du, Pei-Qin; Shih, Hsi-Fu; Chen, Jenq-Shyong; Wang, Yi-Shiang

    2016-09-01

    The optical projection using speckles is one of the structured light methods that have been applied to three-dimensional (3-D) range sensors. This paper investigates the design and fabrication of diffractive optical elements (DOEs) for generating the light field with uniformly distributed speckles. Based on the principles of computer generated holograms, the iterative Fourier transform algorithm was adopted for the DOE design. It was used to calculate the phase map for diffracting the incident laser beam into a goal pattern with distributed speckles. Four patterns were designed in the study. Their phase maps were first examined by a spatial light modulator and then fabricated on glass substrates by microfabrication processes. Finally, the diffraction characteristics of the fabricated devices were verified. The experimental results show that the proposed methods are applicable to the DOE design of 3-D range sensors. Furthermore, any expected diffraction area and speckle density could be possibly achieved according to the relations presented in the paper.
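
    The iterative Fourier transform algorithm adopted above for the DOE design alternates between the DOE plane and the far field, in the Gerchberg-Saxton manner: the DOE plane keeps unit amplitude (phase-only element) while the far field is forced to the target speckle pattern. The target pattern, grid size and iteration count below are assumptions for illustration:

```python
import numpy as np

def ifta_phase(target_amp, n_iters=50, seed=0):
    """Iterative Fourier transform algorithm (Gerchberg-Saxton style):
    alternate between the DOE plane, where the amplitude is forced to 1
    (phase-only element), and the far field, where the amplitude is
    forced to the target pattern; only the phases are carried over."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iters):
        far = np.fft.fft2(np.exp(1j * phase))           # propagate
        far = target_amp * np.exp(1j * np.angle(far))   # impose target
        phase = np.angle(np.fft.ifft2(far))             # keep phase only
    return phase

# Target far field: three bright "speckles" on a dark background.
target = np.zeros((64, 64))
target[16, 16] = target[32, 48] = target[50, 20] = 1.0
phase = ifta_phase(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2    # far-field intensity
```

    For sparse spot targets like this the diffraction efficiency converges quickly, with most of the far-field energy landing on the designed speckle positions; a fabricated DOE would quantize this continuous phase map into discrete levels.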

  20. 3D visualization of the initial Yersinia ruckeri infection route in rainbow trout (Oncorhynchus mykiss) by optical projection tomography

    Otani, Maki; Villumsen, Kasper Rømer; Kragelund Strøm, Helene;

    2014-01-01

    …optical projection tomography (OPT), a novel three-dimensional (3D) bio-imaging technique, was applied. OPT not only enables the visualization of Y. ruckeri on mucosal surfaces but also its 3D spatial distribution in whole organs, without sectioning. Rainbow trout were infected by bath challenge exposure… as 1 minute post infection. Both OPT and IHC analysis confirmed that the secondary gill lamellae were the only tissues infected at this early time point, indicating that Y. ruckeri initially infects gill epithelial cells. The experimentally induced infection caused septicemia, and Y. ruckeri was found… trout. Using OPT scanning it was possible to visualize the initial route of entry, as well as secondary infection routes, along with the proliferation and spread of Y. ruckeri, ultimately causing significant mortality in the exposed rainbow trout. These results demonstrate that OPT is a state…

  1. 3D laser inspection of fuel assembly grid spacers for nuclear reactors based on diffractive optical elements

    Finogenov, L. V.; Lemeshko, Yu A.; Zav'yalov, P. S.; Chugui, Yu V.

    2007-06-01

    Ensuring the safety and high operational reliability of nuclear reactors requires 100% inspection of the geometrical parameters of fuel assemblies, whose grid spacers are cellular structures holding the fuel elements. The required grid spacer geometry of the assembly in the transverse and longitudinal cross sections is extremely important for maintaining the necessary heat regime. A universal method for 3D grid spacer inspection using a diffractive optical element (DOE), which generates as structured illumination a multiple-ring pattern on the inner surface of a grid spacer cell, is investigated. With a small set of DOEs, the whole range of produced grids can be inspected. A special objective has been developed for imaging the inner surface of a cell. The problems of diffractive element synthesis, projection optics calculation and adjustment methods, as well as calibration of the experimental measuring system, are considered. The image-processing algorithms for different constructive elements of the grids (cell, channel hole, outer grid spacer rim) and the experimental results are presented.

  2. Optical parametric oscillators in isotropic photonic crystals and cavities: 3D time domain analysis

    Conti, Claudio; Di Falco, Andrea; Assanto, Gaetano

    2004-01-01

    We investigate optical parametric oscillations through four-wave mixing in resonant cavities and photonic crystals. The theoretical analysis underlines the relevant features of the phenomenon and the role of the density of states. Using fully vectorial 3D time-domain simulations, including both dispersion and nonlinear polarization, for the first time we address this process in a face centered cubic lattice and in a photonic crystal slab. The results lead the way to the development of novel p...

  3. 3D printed sensing patches with embedded polymer optical fibre Bragg gratings

    Zubel, Michal G.; Sugden, Kate; Saez-Rodriguez, D.; Nielsen, K.; Bang, O.

    2016-05-01

    The first demonstration of a polymer optical fibre Bragg grating (POFBG) embedded in a 3-D printed structure is reported. Its cyclic strain performance and temperature characteristics are examined and discussed. The sensing patch has a repeatable strain sensitivity of 0.38 pm/με. Its temperature behaviour is unstable, with temperature sensitivity values varying between 30 and 40 pm/°C.
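
    Given the reported sensitivities, converting a measured POFBG wavelength shift to strain is a linear rescaling. In the helper below (a hypothetical name), k_strain is the reported 0.38 pm/microstrain, while k_temp = 35 pm/°C is an assumed midpoint of the reported 30-40 pm/°C range, used only to illustrate temperature compensation:

```python
def pofbg_strain_ueps(d_lambda_pm, d_temp_c=0.0,
                      k_strain=0.38, k_temp=35.0):
    """Convert a POFBG Bragg-wavelength shift (pm) to strain
    (microstrain), first subtracting an assumed temperature
    cross-sensitivity term k_temp * d_temp_c."""
    return (d_lambda_pm - k_temp * d_temp_c) / k_strain

eps = pofbg_strain_ueps(380.0)   # pure-strain case, no temperature change
```

    The unstable temperature behaviour noted in the abstract is exactly why the cross-term must be measured (or the temperature held constant) before the strain reading can be trusted.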

  4. Rapid fabrication of complex 3D extracellular microenvironments by dynamic optical projection stereolithography.

    Zhang, A Ping; Qu, Xin; Soman, Pranav; Hribar, Kolin C; Lee, Jin W; Chen, Shaochen; He, Sailing

    2012-08-16

    The topographic features of the extracellular matrix (ECM) lay the foundation for cellular behavior. A novel biofabrication method using a digital micromirror device (DMD), called dynamic optical projection stereolithography (DOPsL), is demonstrated. This robust and versatile platform can generate complex biomimetic scaffolds within seconds. Such 3D scaffolds have promising potential for studying cell interactions with microenvironments in vitro and in vivo.

  5. Study of a non-diffusing radiochromic gel dosimeter for 3D radiation dose imaging

    Marsden, Craig Michael

    2000-12-01

    This thesis investigates the potential of a new radiation gel dosimeter, based on nitro-blue tetrazolium (NBTZ) suspended in a gelatin mold. Unlike Fricke-based gel dosimeters, this dosimeter does not suffer from diffusive loss of image stability. Images are obtained by an optical tomography method. Nitro-blue tetrazolium is a common biological indicator that, when irradiated in an aqueous medium, undergoes reduction to a highly colored formazan, which has an absorbance maximum at 525 nm. Tetrazolium is water soluble while the formazan product is insoluble; the formazan sticks to the gelatin matrix and the dose image is maintained for three months. Methods to maximize the sensitivity of the system were evaluated. It was found that a chemical detergent, Triton X-100, in combination with sodium formate, increased the dosimeter sensitivity significantly. An initial G-value of formazan production for a dosimeter composed of 1 mM NBTZ, gelatin, and water was on the order of 0.2; the addition of Triton and formate produced a G-value in excess of 5.0. The effects of NBTZ, Triton, formate, and gel concentration were all investigated. All the gels provided linear dose-versus-absorbance plots for doses from 0 to >100 Gy. It was determined that gel concentration had minimal if any effect on sensitivity. Sensitivity increased slightly with increasing NBTZ concentration. Triton and formate, individually and together, provided moderate to large increases in dosimeter sensitivity. The dosimeter described in this work can provide stable 3D radiation dose images for all modalities of radiation therapy equipment. Methods to increase sensitivity are developed and discussed.

  6. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-12-01

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
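
    The superquadric model described above, initialized as an elliptical cylinder, can be written as the standard inside-outside function. The exponents, semi-axes and test points below are illustrative assumptions (the paper's 25 clinically meaningful shape parameters are not reproduced):

```python
import numpy as np

def superquadric(pts, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Inside-outside function F of a superquadric with semi-axes a
    and shape exponents e1, e2: F < 1 inside the surface, F > 1
    outside.  e1 = e2 = 1 gives an ellipsoid; letting e1 -> 0
    flattens the shape toward an elliptical cylinder."""
    x, y, z = (np.abs(pts[:, i]) / a[i] for i in range(3))
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

pts = np.array([[0.0, 0.0, 0.0],   # centre: inside
                [0.5, 0.0, 0.0],   # halfway along one axis: inside
                [1.0, 1.0, 1.0]])  # corner of the bounding box: outside
F = superquadric(pts)              # ellipsoid case
```

    Fitting then amounts to minimizing a cost built from F over the observed vertebral body surface while the pose and shape parameters deform the model.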

  7. A 3D image filter for parameter-free segmentation of macromolecular structures from electron tomograms.

    Rubbiya A Ali

    3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤ 5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters: the pseudo-3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms.

  8. A 3D image filter for parameter-free segmentation of macromolecular structures from electron tomograms.

    Ali, Rubbiya A; Landsberg, Michael J; Knauth, Emily; Morgan, Garry P; Marsh, Brad J; Hankamer, Ben

    2012-01-01

    3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤ 5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters: the pseudo-3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms.

  9. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Stankiewicz Agnieszka

    2016-06-01

Full Text Available This paper presents signal processing aspects of the automatic segmentation of retinal layers of the human eye. The paper draws attention to problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD OCT). Segmentation accuracy of the retinal layers is evaluated for a set of typical 3D scans of rather low quality, and some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using so-called B-scans obtained with the OCT Copernicus HR device.
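
Graph-theory approaches to OCT layer segmentation typically trace each layer boundary as a minimum-cost path across the B-scan. A minimal sketch of that idea (a simple dynamic-programming shortest path over a hypothetical cost image, not the authors' exact graph construction) is:

```python
import numpy as np

def min_cost_boundary(cost):
    """Trace a left-to-right boundary of minimal cumulative cost.

    `cost` is a 2D array (rows = depth, columns = A-scans); low cost
    marks likely layer-boundary pixels (e.g. inverted intensity or
    gradient). Dynamic programming over 3-connected moves approximates
    the shortest-path graph search used for retinal layer segmentation.
    """
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return np.array(path[::-1])

# Synthetic B-scan: one bright horizontal layer at row 12 plus noise.
rng = np.random.default_rng(1)
img = rng.normal(0, 0.05, size=(24, 40))
img[12, :] += 1.0
boundary = min_cost_boundary(-img)  # low cost where intensity is high
```

For low-quality scans, the cost design and regularization of allowed moves are the parts that need the most care, which is the problem the paper addresses.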

  10. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3D Structure Lines

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has undergone a revolution in recent years. For such applications, a large number of images, acquired by high-resolution industrial cameras mounted on a mobile platform close to the bottom surface, must be stitched into a wide-view single composite image. The conventional idea of stitching a panorama with the affine model or the homographic model suffers from serious problems due to poor texture and out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of bridge bottom surfaces, which are extracted from 3D camera data. First, each image is initially aligned geometrically based on its rough position and orientation, acquired with a laser range finder (LRF) and a high-precision incremental encoder, and the images are divided into several groups using these rough position and orientation data. Secondly, the 3D structure lines of the bridge bottom surfaces are extracted from the 3D point clouds acquired with 3D cameras; these impose additional strong constraints on the geometrical alignment of structure lines in adjacent images, enabling a position and orientation optimization within each group that increases local consistency. Thirdly, a homographic refinement between groups is applied to increase global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of the proposed approach.
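
The homographic model mentioned above maps image coordinates through a 3x3 projective matrix. A minimal sketch of applying such a homography to points (with a hypothetical, translation-only matrix for illustration) looks like:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (projective transform)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# A pure translation expressed as a homography: shift by (5, -3).
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
warped = apply_homography(H, corners)
```

The paper's point is that estimating such an H from poorly textured, defocused images is unreliable, hence the extra constraints from 3D structure lines.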

  11. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance in MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV. In summary, AV biofeedback improved image quality and reduced scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.
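
The reproducibility metric quoted above, root-mean-square variation of a motion trace, is straightforward to compute. A minimal sketch (with made-up displacement values, not the study's data) is:

```python
import numpy as np

def rms_variation(signal):
    """Root-mean-square deviation of a motion trace from its mean,
    one simple measure of breathing reproducibility."""
    signal = np.asarray(signal, dtype=float)
    return float(np.sqrt(np.mean((signal - signal.mean()) ** 2)))

# Hypothetical abdominal displacement samples (cm), per breathing cycle.
free_breathing = [0.8, 1.4, 0.6, 1.6, 0.7, 1.5]   # irregular
guided         = [1.0, 1.1, 0.9, 1.0, 1.1, 0.9]   # regularized by AV cues
```

A lower RMS variation under guidance corresponds to the improved reproducibility reported in the abstract.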

  12. A 3-D tomographic trajectory retrieval for the air-borne limb-imager GLORIA

    J. Ungermann

    2011-06-01

    Full Text Available Infrared limb sounding from aircraft can provide 2-D curtains of multiple trace gas species. However, conventional limb sounders view perpendicular to the aircraft axis and are unable to resolve the observed airmass along their line-of-sight. GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument able to adjust its horizontal view angle with respect to the aircraft flight direction from 45° to 135°. This will allow for tomographic measurements of mesoscale structures for a wide variety of atmospheric constituents.

    Many flights of the GLORIA instrument will not follow closed curves that allow an airmass to be measured from all directions. Consequently, simulations are used to examine what results can be expected from tomographic evaluation of measurements made during a straight flight. It is demonstrated that the achievable resolution and stability are enhanced compared to conventional retrievals. In a second step, it is shown that incorporating channels exhibiting different optical depths can greatly enhance 3-D retrieval quality, enabling the exploitation of previously unused spectral samples.

    A second problem for tomographic retrievals is that advection, which can be neglected for conventional retrievals, plays an important role for the time-scales involved in a tomographic measurement flight. This paper presents a method to diagnose the effect of a time-varying atmosphere on a 3-D retrieval and demonstrates an effective way to compensate for effects of advection by incorporating wind-fields from meteorological datasets as a priori information.

  13. Preliminary clinical application of contrast-enhanced MR angiography using 3D time-resolved imaging of contrast kinetics (3D-TRICKS)

    YANG Chun-shan; LIU Shi-yuan; XIAO Xiang-sheng; FENG Yun; LI Hui-min; XIAO Shan; GONG Wan-qing

    2007-01-01

    Objective: To introduce a new contrast-enhanced MR angiographic method, 3D time-resolved imaging of contrast kinetics (3D-TRICKS). Methods: TRICKS is a high-temporal-resolution (2-6 s) MR angiographic technique using a short TR (4 ms) and TE (1.5 ms) with partial echo sampling, in which the central part of k-space is updated more frequently than the peripheral part. Pre-contrast mask 3D images are scanned first; then, after bolus injection of Gd-DTPA, 15-20 sequential 3D images are acquired. The reconstructed 3D images, obtained by subtracting the mask images from the contrast-enhanced 3D images, are conceptually similar to a catheter-based intra-arterial digital subtraction angiographic (DSA) series. Thirty patients underwent contrast-enhanced MR angiography using 3D-TRICKS. Results: In total, 12 vertebral arteries were well displayed on TRICKS: 7 were normal, 1 demonstrated bilateral vertebral artery stenosis, 4 had unilateral vertebral artery stenosis, and 1 of these was accompanied by stenosis of the ipsilateral carotid artery bifurcation. Four cases of bilateral renal arteries were normal; of two transplanted kidney arteries, 1 appeared normal and 1 showed stenosis. Two cerebral arteries were normal, 1 had sagittal sinus thrombosis, and 1 displayed an intracranial arteriovenous malformation. Three pulmonary arteries were normal, 1 showed pulmonary artery thrombosis, and 1 revealed the abnormal feeding artery and draining vein of a pulmonary sequestration. One left lower-limb fibrolipoma showed its feeding artery. One case displayed stenosis of an artificial radial-ulnar artery fistula, and one revealed a left antebrachial hemangioma. Conclusion: TRICKS can clearly delineate most of the body's vascular system and reveal most vascular abnormalities. Its convenience and high success rate make it a first choice for displaying most vascular abnormalities.
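
The subtraction step described above, removing a pre-contrast mask so that only enhancing vessels remain, can be illustrated with a toy example (synthetic arrays, not real TRICKS data):

```python
import numpy as np

# TRICKS-style subtraction: a pre-contrast mask image is subtracted
# from each post-contrast frame so static anatomy cancels and only
# contrast-enhanced vessels remain, analogous to DSA.
rng = np.random.default_rng(2)
background = rng.uniform(0.4, 0.6, size=(16, 16))  # static anatomy
mask = background.copy()                           # pre-contrast scan
post = background.copy()                           # post-contrast frame
post[8, 2:14] += 1.0                               # a "vessel" filling

angiogram = post - mask                            # anatomy cancels out
```

In the real sequence this subtraction is applied to each of the 15-20 time frames, yielding a time-resolved angiographic series.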

  14. Traceability of Height Measurements on Green Sand Molds using Optical 3D Scanning

    Mohaghegh, K.; Yazdanbakhsh, S.A.; Tiedje, N. S.;

    2016-01-01

    Establishing a reliable measurement procedure for dimensional measurements on green sand molds is a prerequisite for analysis of geometric deviations in mass production of quality castings. The surface of the green sand mold is not suitable for measurements using a tactile coordinate measuring machine.... This paper presents a metrological approach for height measurement on green sand molds using an optical 3D scanner with fringe projection. A new sand sample was developed with a hard binder to withstand the contact force of a touch probe, while keeping optical cooperativeness similar to green sand...

  15. Adaptive Optics Concept For Multi-Objects 3D Spectroscopy on ELTs

    Neichel, B; Puech, M; Conan, J M; Lelouarn, M; Gendron, E; Hammer, F; Rousset, G; Jagourel, P; Bouchet, P

    2005-01-01

    In this paper, we present a first comparison of different Adaptive Optics (AO) concepts for reaching a given scientific specification for 3D spectroscopy on Extremely Large Telescopes (ELTs). We consider a range of 30%-50% of Ensquared Energy (EE) in the H band (1.65 µm), in aperture sizes from 25 to 100 mas, to be representative of the scientific requirements. From these preliminary choices, different kinds of AO concepts are investigated: Ground Layer Adaptive Optics (GLAO), Multi-Object AO (MOAO) and Laser Guide Star AO (LGS). Using Fourier-based simulations, we study the performance of these AO systems as a function of telescope diameter.

  16. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, an alternate semi-automatic fail-safe method is also provided. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  17. A web-based 3D medical image collaborative processing system with videoconference

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional (3D) medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet remains one of the biggest challenges in supporting these activities. Consequently, we present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. A web-based videoconference function is also provided to enhance the performance of the whole system. All functions of the system are conveniently available through common Web browsers, without any extra client installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  18. A virtually imaged defocused array (VIDA) for high-speed 3D microscopy.

    Schonbrun, Ethan; Di Caprio, Giuseppe

    2016-10-01

    We report a method to capture a multifocus image stack based on recording multiple reflections generated by imaging through a custom etalon. The focus stack is collected in a single camera exposure and consequently the information needed for 3D reconstruction is recorded in the camera integration time, which is only 100 µs. We have used the VIDA microscope to temporally resolve the multi-lobed 3D morphology of neutrophil nuclei as they rotate and deform through a microfluidic constriction. In addition, we have constructed a 3D imaging flow cytometer and quantified the nuclear morphology of nearly a thousand white blood cells flowing at a velocity of 3 mm per second. The VIDA microscope is compact and simple to construct, intrinsically achromatic, and the field-of-view and stack number can be easily reconfigured without redesigning diffraction gratings and prisms.

  19. 3-D imaging of particle tracks in solid state nuclear track detectors

    D. Wertheim

    2010-05-01

    Full Text Available It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  20. 3-D imaging of particle tracks in solid state nuclear track detectors

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2010-05-01

    It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  1. Informatics in radiology: Intuitive user interface for 3D image manipulation using augmented reality and a smartphone as a remote control.

    Nakata, Norio; Suzuki, Naoki; Hattori, Asaki; Hirai, Naoya; Miyamoto, Yukio; Fukuda, Kunihiko

    2012-01-01

    Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1.

  2. Active optical system for advanced 3D surface structuring by laser remelting

    Pütsch, O.; Temmler, A.; Stollenwerk, J.; Willenborg, E.; Loosen, P.

    2015-03-01

    Structuring by laser remelting enables completely new possibilities for designing surfaces, since material is redistributed rather than wasted. In addition to technological advantages, cost and time benefits result from shortened process times, the avoidance of harmful chemicals, and the elimination of subsequent finishing steps such as cleaning and polishing. The functional principle requires a completely new optical machine technology that maintains the spatial and temporal superposition and manipulation of three different laser beams emitted from two laser sources of different wavelengths. The optical system has already been developed and demonstrated for the processing of flat samples of hot- and cold-working steel. However, since the structuring of 3D injection molds in particular represents an application of high innovation potential, the optical system has to take into account the elliptical beam geometry that occurs when the laser beams irradiate a curved surface. To take full advantage of structuring by remelting for the processing of 3D surfaces, additional optical functionality, called EPS (elliptical pre-shaping), has to be integrated into the existing set-up. The development of the beam shaping devices requires not only the analysis of the mechanisms of the beam projection but also a suitable optical design. Both aspects are discussed in this paper.

  3. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    2001-01-01

    A mutual information (MI) based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR abdominal images. The Parzen Window Density Estimation (PWDE) method is adopted to calculate the mutual information between the CT and MR abdominal images. By maximizing the MI between the CT and MR volume images, their overlap is maximized, meaning that the two body images are best matched to each other. Visible Human Project (VHP) Male abdomen CT and MRI data are used as the experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically, without any prior processing procedures such as segmentation and feature extraction, but with the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
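
The MI similarity metric at the heart of this approach can be sketched compactly. The example below uses a joint-histogram estimate of the intensity distributions (a common simplification standing in for the Parzen-window estimate the abstract describes):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information between two images from their
    joint intensity histogram.

    A histogram estimate stands in here for the Parzen-window density
    estimate; both approximate the joint intensity distribution needed
    for the MI similarity metric.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=0)          # a misaligned copy
aligned_mi = mutual_information(img, img)  # perfect alignment
shifted_mi = mutual_information(img, shifted)
```

MI peaks at correct alignment, which is why maximizing it drives the registration; the registration itself additionally requires a deformation model and optimizer.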

  4. 3D Image Reconstruction from X-Ray Measurements with Overlap

    Klodt, Maria

    2016-01-01

    3D image reconstruction from a set of X-ray projections is an important image reconstruction problem, with applications in medical imaging, industrial inspection and airport security. The innovation of X-ray emitter arrays allows for a novel type of X-ray scanners with multiple simultaneously emitting sources. However, two or more sources emitting at the same time can yield measurements from overlapping rays, imposing a new type of image reconstruction problem based on nonlinear constraints. Using traditional linear reconstruction methods, respective scanner geometries have to be implemented such that no rays overlap, which severely restricts the scanner design. We derive a new type of 3D image reconstruction model with nonlinear constraints, based on measurements with overlapping X-rays. Further, we show that the arising optimization problem is partially convex, and present an algorithm to solve it. Experiments show highly improved image reconstruction results from both simulated and real-world measurements.
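
For the classical non-overlapping case, reconstruction from ray sums reduces to a linear system, commonly solved by algebraic reconstruction (ART/Kaczmarz). The sketch below shows that baseline on a tiny synthetic "image"; it does not cover the paper's contribution, the nonlinear constraints arising from overlapping rays:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Kaczmarz / ART iteration for x with A x ≈ b.

    Each step projects the current estimate onto the hyperplane of one
    ray equation; cycling over all rays converges for consistent systems.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += (bi - ai @ x) / (ai @ ai) * ai
    return x

# Tiny 2x2 "image" probed by four rays (two row sums, two column sums).
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],   # ray along row 0
              [0, 0, 1, 1],   # ray along row 1
              [1, 0, 1, 0],   # ray along column 0
              [0, 1, 0, 1]],  # ray along column 1
             dtype=float)
b = A @ x_true                # simulated ray measurements
x_rec = kaczmarz(A, b)
```

With overlapping emitters, a measurement becomes a function of sums over two ray paths at once, and this linear projection step no longer applies directly, motivating the paper's partially convex formulation.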

  5. Accurate positioning for head and neck cancer patients using 2D and 3D image guidance

    Kang, Hyejoo; Lovelock, Dale M.; Yorke, Ellen D.; Kriminiski, Sergey; Lee, Nancy; Amols, Howard I.

    2011-01-01

    Our goal is to determine an optimized image-guided setup by comparing setup errors determined by two-dimensional (2D) and three-dimensional (3D) image guidance for head and neck cancer (HNC) patients immobilized by customized thermoplastic masks. Nine patients received weekly imaging sessions, for a total of 54, throughout treatment. Patients were first set up by matching lasers to surface marks (initial) and then translationally corrected using manual registration of orthogonal kilovoltage (kV) radiographs with DRRs (2D-2D) on bony anatomy. A kV cone beam CT (kVCBCT) was acquired and manually registered to the simulation CT using only translations (3D-3D) on the same bony anatomy to determine further translational corrections. After treatment, a second set of kVCBCT was acquired to assess intrafractional motion. Averaged over all sessions, 2D-2D registration led to translational corrections from initial setup of 3.5 ± 2.2 (range 0–8) mm. The addition of 3D-3D registration resulted in only small incremental adjustment (0.8 ± 1.5 mm). We retrospectively calculated patient setup rotation errors using an automatic rigid-body algorithm with 6 degrees of freedom (DoF) on regions of interest (ROI) of in-field bony anatomy (mainly the C2 vertebral body). Small rotations were determined for most of the imaging sessions; however, occasionally rotations > 3° were observed. The calculated intrafractional motion with automatic registration was < 3.5 mm for eight patients, and < 2° for all patients. We conclude that daily manual 2D-2D registration on radiographs reduces positioning errors for mask-immobilized HNC patients in most cases, and is easily implemented. 3D-3D registration adds little improvement over 2D-2D registration without correcting rotational errors. We also conclude that thermoplastic masks are effective for patient immobilization. PMID:21330971

  6. FEMUR SHAPE RECOVERY FROM VOLUMETRIC IMAGES USING 3-D DEFORMABLE MODELS

    2000-01-01

    A new scheme for femur shape recovery from volumetric images using deformable models is proposed. First, prior 3-D deformable femur models are created as templates using point distribution model techniques. Second, active contour models are employed to segment magnetic resonance imaging (MRI) volumetric images of the tibial and femoral joints, and the deformable models are initialized based on the segmentation results. Finally, an objective function is minimized, subject to constraints on the surface shape, to obtain the optimal result.

  7. Measurement of facial soft tissues thickness using 3D computed tomographic images

    Jeong, Ho Gul; Kim, Kee Deog; Shin, Dong Won; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Park, Chang Seo [Yonsei Univ. Hospital, Seoul (Korea, Republic of); Han, Seung Ho [Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2006-03-15

    To evaluate the accuracy and reliability of a program that measures facial soft tissue thickness on 3D computed tomographic images, by comparison with direct measurement. One cadaver was scanned with a helical CT at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed with a 1.5 mm reconstruction interval and the images were transferred to a personal computer. Facial soft tissue thicknesses were measured on the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed above the cut surface. The procedure was followed by taking pictures of the facial soft tissues with a high-resolution digital camera. The measurements were then made on the photographic images and repeated ten times. A repeated-measures analysis of variance was adopted to compare and analyze the measurements resulting from the two different methods. Comparison according to area was analyzed by the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those using the 3D images (p>0.05). Statistically significant differences were found in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The developed software program for measuring facial soft tissue thickness on 3D images was accurate enough to allow facial soft tissue thickness to be measured easily in forensic science and anthropology.

  8. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. Structure from motion is a popular technique for this task. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is relatively time-consuming, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. With this approach, the matching time is greatly reduced. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for reconstructing a nasal cavity imaged by a rigid nasal endoscope.
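
The speed-up comes from shrinking each feature's candidate set before descriptor comparison. A minimal sketch of that idea, using a crisp distance gate as a simplified stand-in for the paper's fuzzy zoning (all arrays synthetic, not SIFT output), is:

```python
import numpy as np

def zoned_match(desc_a, pts_a, desc_b, pts_b, radius):
    """Match descriptors, but only consider candidates whose image
    position lies within `radius` of the query feature's position.

    A crisp spatial gate standing in for fuzzy zoning; both shrink the
    candidate set so matching no longer compares every feature against
    every feature in the other image.
    """
    matches = []
    for i, (d, p) in enumerate(zip(desc_a, pts_a)):
        near = np.linalg.norm(pts_b - p, axis=1) <= radius
        if not near.any():
            continue
        idx = np.flatnonzero(near)
        dists = np.linalg.norm(desc_b[idx] - d, axis=1)
        matches.append((i, int(idx[np.argmin(dists)])))
    return matches

rng = np.random.default_rng(4)
desc_a = rng.random((20, 8))                          # toy descriptors
pts_a = rng.uniform(0, 100, (20, 2))                  # feature positions
desc_b = desc_a + rng.normal(0, 0.01, desc_a.shape)   # same features,
pts_b = pts_a + rng.normal(0, 1.0, pts_a.shape)       # slightly moved
matches = zoned_match(desc_a, pts_a, desc_b, pts_b, radius=5.0)
```

The fuzzy variant replaces the hard radius with graded membership, which is more robust when the inter-frame motion is uncertain.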

  9. Evaluating 3D registration of CT-scan images using crest lines

    Ayache, Nicholas; Gueziec, Andre P.; Thirion, Jean-Philippe; Gourdon, A.; Knoplioch, Jerome

    1993-06-01

    We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on the object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-Scan images, and then use differential invariants on these lines to register together the 3D images with a high precision. The extraction of the crest lines is done by computing up to third order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by splines curves, to compute at each point a number of differential invariants. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic, and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes, already presented to evaluate the accuracy of the approach on a couple of CT-scan images of a skull containing external markers.

  10. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    Karim Hammoudi

    2010-12-01

    Full Text Available This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image rawbrightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach.
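
The core mechanism, fitting model parameters directly to raw brightness with Differential Evolution rather than via extracted features, can be sketched in one dimension. The toy objective below (a hypothetical Gaussian "rendering" with an unknown offset, not the paper's polyhedral model) uses SciPy's stock DE optimizer:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Featureless fitting: recover a model parameter (here, an offset)
# directly from raw brightness by minimizing an image-based
# dissimilarity, with no feature extraction or matching.
x = np.linspace(0, 10, 500)
observed = np.exp(-(x - 5.0) ** 2)   # "image" generated at offset 5.0

def render(offset):
    """Hypothetical forward model: render brightness for an offset."""
    return np.exp(-(x - offset) ** 2)

def dissimilarity(params):
    """Sum-of-squared-differences between rendered and observed data."""
    return float(np.sum((render(params[0]) - observed) ** 2))

result = differential_evolution(dissimilarity,
                                bounds=[(0.0, 10.0)],
                                seed=5, tol=1e-8)
```

The paper's objective additionally combines the dissimilarity with a gradient score over several aerial views, but the optimization pattern is the same.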

  11. 3D nonrigid medical image registration using a new information theoretic measure

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to minimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two images are perfectly aligned; the limited-memory BFGS method is used for the optimization, yielding the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed with the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, with ten 3D CT images per 4D CT data set covering an entire respiration cycle. These results were compared with the normalized cross correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.

  12. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Tsap Leonid V

    2006-01-01

    Full Text Available The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  13. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  14. Structured light 3D tracking system for measuring motions in PET brain imaging

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light...... with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure...
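Phase-shifting profilometry of the kind described recovers a wrapped phase map from a few projected fringe patterns; for the standard four-step scheme with shifts of π/2, the per-pixel computation reduces to the following (a textbook sketch, not the authors' exact PSI pipeline):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2.

    With I_k = A + B*cos(phi + k*pi/2):
        I4 - I2 = 2*B*sin(phi)   and   I1 - I3 = 2*B*cos(phi),
    so the background A and modulation B cancel out.
    """
    return np.arctan2(i4 - i2, i1 - i3)
```

The wrapped phase must still be unwrapped and triangulated against the projector-camera calibration before it yields the 3D point cloud of the head surface.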

  15. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingl...

  16. Acute Bochdalek hernia in an adult:A case report of a 3D image

    Rejeb Imen; Chakroun-Walha Olfa; Ksibi Hichem; Nasri Abdennour; Chtara Kamilia; Chaari Adel; Rekik Noureddine

    2016-01-01

    A 61-year-old male was found to have a bilateral Bochdalek hernia on routine CT during admission for acute respiratory failure. The chest X-ray showed a left paracardiac mass with a diameter of 6 cm. This mass was initially considered a mediastinal tumor. However, CT showed a large bilateral defect of the posteromedial portion of the diaphragm and mesenteric fat. 3D imaging was also useful for the stereographic perception of the Bochdalek hernia. Although Bochdalek hernia is not rare, to our knowledge, this is the first case of a Bochdalek hernia containing transverse colon observed by spiral CT 3D imaging.

  17. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    Giovanna Sansoni

    2009-01-01

    Full Text Available 3D imaging sensors for the acquisition of three-dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their processing, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  18. A comprehensive evaluation of the PRESAGE/optical-CT 3D dosimetry system

    Sakhalkar, H. S.; Adamovics, J.; Ibbott, G.; Oldham, M. [Department of Radiation Oncology Physics, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Chemistry and Biology, Rider University, Lawrenceville, New Jersey 08648 (United States); Department of Radiation Physics, M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology Physics, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2009-01-15

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5x scanner from MGS Research, Inc.). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H and N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1×3 cm²) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane for independent verification. It also enabled rigorous evaluation of the robustness, reproducibility and accuracy of response at the three dose levels. The OCTOPUS 5x commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition techniques yielded substantially better noise characteristics (noise reduced to ~2%) than achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% dose-difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels.
Postirradiation, the dosimeters were all found to exhibit a slight increase in
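The gamma comparison used for the uniformity test can be sketched in simplified 1D form; real implementations work on 3D grids with interpolation, but the per-point criterion (here a global 2% dose difference and 1 mm distance-to-agreement, by way of illustration) has the same shape:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_tol=0.02, dist_mm=1.0):
    """Per-point 1D gamma index; gamma <= 1 means the point passes.

    dose_tol is a fraction of the reference maximum (global normalisation).
    """
    x = np.arange(len(ref_dose)) * spacing_mm
    dd = dose_tol * ref_dose.max()
    gammas = np.empty(len(eval_dose))
    for i, (xi, de) in enumerate(zip(x, eval_dose)):
        dose_term = (ref_dose - de) / dd        # dose difference, normalised
        dist_term = (x - xi) / dist_mm          # distance, normalised
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas
```

A pass rate is then simply the fraction of points with gamma ≤ 1.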

  19. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations.

  20. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    Wouterlood, Floris G

    2014-04-10

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible.

  1. Application of Medical Imaging Software to 3D Visualization of Astronomical Data

    Borkin, M; Halle, M; Alan, D; Borkin, Michelle; Goodman, Alyssa; Halle, Michael; Alan, Douglas

    2006-01-01

    The AstroMed project at Harvard University's Initiative in Innovative Computing (IIC) is working on improved visualization and data sharing solutions applicable to the fields of both astronomy and medicine. The current focus is on the application of medical imaging visualization and analysis techniques to three-dimensional astronomical data. The 3D Slicer and OsiriX medical imaging tools have been used to make isosurface and volumetric models in RA-DEC-velocity space of the Perseus star forming region from the COMPLETE Survey of Star Forming Region's spectral line maps. 3D Slicer, a brain imaging and visualization computer application developed at Brigham and Women's Hospital's Surgical Planning Lab, is capable of displaying volumes (i.e. data cubes), displaying slices in any direction through the volume, generating 3D isosurface models from the volume which can be viewed and rotated in 3D space, and making 3D models of label maps (for example CLUMPFIND output). OsiriX is able to generate volumetric models fr...

  2. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image-processing capabilities have made three-dimensional visualization of biomedical computed tomography (CT) images a great aid to biomedical engineering research. To keep pace with current Internet technology, where 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be usable for medical visualization in general. Furthermore, this project is intended as a future component of the PACS system our lab is working on. The system therefore uses the VRML 2.0 3D file format, which allows 3D models to be manipulated through a Web interface. The program generates and modifies triangular isosurface meshes with the marching cubes algorithm, and then uses OpenGL and MFC techniques to render the isosurface and manipulate the voxel data. This software provides adequate visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and that the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of the platform to the tasks needed in elementary laboratory experiments or data preprocessing.
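Marching cubes builds triangles whose vertices lie on voxel-cell edges crossed by the isosurface; the per-edge vertex computation is a simple linear interpolation. A minimal sketch of that core step (not the authors' OpenGL/VRML code, and omitting the 256-case triangulation table):

```python
def interp_vertex(p1, p2, v1, v2, iso):
    """Linearly interpolate the isosurface crossing on the edge p1-p2.

    p1, p2: 3D endpoint coordinates of a cell edge.
    v1, v2: scalar field values at the endpoints; the edge is assumed to
            straddle the isovalue (v1 and v2 lie on opposite sides of iso).
    """
    t = (iso - v1) / (v2 - v1)                 # fractional position of crossing
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

The full algorithm classifies each cube by the sign pattern of its eight corner values, looks up which edges are crossed, and emits triangles whose vertices come from this interpolation.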

  3. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  4. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  5. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasound image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which extracts a connected skeleton to express the figure's features. Feature points of the connected skeleton are extracted automatically by repeatedly locating local extrema of the curvature. Initial registration is performed according to the barycenter of the skeleton. Thereafter, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. Example results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features preserves the natural differences in shape between different parts of an organ, while at the same time eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.
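The radial-basis-function stage can be sketched as follows. This uses a Gaussian kernel and exact interpolation of the feature-point displacements, which is one plausible reading of the approach rather than the paper's exact formulation (thin-plate splines are another common choice):

```python
import numpy as np

def fit_rbf_warp(src, dst, sigma=1.0):
    """Fit a Gaussian-RBF displacement field mapping src points onto dst.

    src, dst: (n, 2) arrays of corresponding skeleton feature points.
    Returns warp(points), a function applying the fitted deformation.
    """
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / sigma ** 2)            # (n, n) kernel matrix
    W = np.linalg.solve(K, dst - src)       # weights reproducing displacements

    def warp(pts):
        d2p = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return pts + np.exp(-d2p / sigma ** 2) @ W

    return warp
```

Points far from every control point receive almost no displacement, so the deformation stays local, which matches the goal of removing only slight inter-frame elastic distortion.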

  6. Hollow Cone Electron Imaging for Single Particle 3D Reconstruction of Proteins

    Tsai, Chun-Ying; Chang, Yuan-Chih; Lobato, Ivan; van Dyck, Dirk; Chen, Fu-Rong

    2016-06-01

    The main bottlenecks for high-resolution biological imaging in electron microscopy are radiation sensitivity and low contrast. The phase contrast at low spatial frequencies can be enhanced by using a large defocus, but this strongly reduces the resolution. Recently, phase plates have been developed to enhance the contrast at small defocus, but electrical charging remains a problem. Single-particle cryo-electron microscopy is mostly used to minimize the radiation damage and to enhance the resolution of the 3D reconstructions, but it requires averaging images of a massive number of individual particles. Here we present a new route to achieve the same goals by hollow cone dark-field imaging using thermal diffuse scattered electrons, giving about a fourfold contrast increase compared with bright-field imaging. We demonstrate that the 3D reconstruction of a stained GroEL particle can yield about 13.5 Å resolution using a strongly reduced number of images.

  7. Lithographic VCSEL array multimode and single mode sources for sensing and 3D imaging

    Leshin, J.; Li, M.; Beadsworth, J.; Yang, X.; Zhang, Y.; Tucker, F.; Eifert, L.; Deppe, D. G.

    2016-05-01

    Sensing applications, along with free-space data links, can benefit from advanced laser sources that produce novel radiation patterns and tight spectral control for optical filtering. Vertical-cavity surface-emitting lasers (VCSELs) are being developed for these applications. While oxide VCSELs are produced by most companies, a new type of oxide-free VCSEL is demonstrating many advantages in beam pattern, spectral control, and reliability. These lithographic VCSELs offer increased power density from a given aperture size, and enable dense integration of high-efficiency, single-mode elements that improve the beam pattern. In this paper we present results for lithographic VCSELs and describe their integration into military systems for very low-cost pulsed applications, as well as continuous-wave operation in novel sensing applications. The VCSELs are being developed for the U.S. Army for soldier weapon-engagement simulation training, to improve beam pattern and spectral control. Wavelengths in the 904 nm to 990 nm range are being developed, with the spectral control designed to eliminate unwanted water absorption bands from the data links. Multiple beams and radiation patterns based on highly compact packages are being investigated for improved target sensing and transmission fidelity in free-space data links. These novel features based on the new VCSEL sources are also expected to find applications in 3D imaging, proximity sensing and motion control, as well as in single-mode sensors such as atomic clocks and in high-speed data transmission.

  8. Fast, high-resolution 3D dosimetry utilizing a novel optical-CT scanner incorporating tertiary telecentric collimation.

    Sakhalkar, H S; Oldham, M

    2008-01-01

    This study introduces a charge coupled device (CCD) area detector based optical-computed tomography (optical-CT) scanner for comprehensive verification of radiation dose distributions recorded in nonscattering radiochromic dosimeters. Defining characteristics include: (i) a very fast scanning time of approximately 5 min to acquire a complete three-dimensional (3D) dataset, (ii) improved image formation through the use of custom telecentric optics, which ensures accurate projection images and minimizes artifacts from scattered and stray-light sources, and (iii) high resolution (potentially 50 μm) isotropic 3D dose readout. The performance of the CCD scanner for 3D dose readout was evaluated by comparison with independent 3D readout from the single laser beam OCTOPUS-scanner for the same PRESAGE dosimeters. The OCTOPUS scanner was considered the "gold standard" technique in light of prior studies demonstrating its accuracy. Additional comparisons were made against calculated dose distributions from the ECLIPSE treatment-planning system. Dose readout for the following treatments was investigated: (i) a single rectangular beam irradiation to investigate small field and very steep dose gradient dosimetry away from edge effects, (ii) a 2-field open beam parallel-opposed irradiation to investigate dosimetry along steep dose gradients, and (iii) a 7-field intensity modulated radiation therapy (IMRT) irradiation to investigate dosimetry for complex treatment delivery involving modulation of fluence and for dosimetry along moderate dose gradients. Dose profiles, dose-difference plots, and gamma maps were employed to evaluate quantitative estimates of agreement between independently measured and calculated dose distributions. Results indicated that dose readout from the CCD scanner was in agreement with independent gold-standard readout from the OCTOPUS-scanner as well as the calculated ECLIPSE dose distribution for all treatments, except in regions within a few

  9. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem in 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, is used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
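A scalar CPU reference version of a plain (Gaussian-weighted, Euclidean-distance) NLM filter shows the per-pixel loop nest that the GPU version parallelises across thousands of threads; the Gamma/Bayesian weighting of the paper is not reproduced here:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Plain (CPU) non-local means on a 2D image.

    Each output pixel is a weighted average of pixels in a search window,
    weighted by the similarity of the patches around them. On a GPU the
    outer two loops become one thread per pixel.
    """
    pad = search // 2 + patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    pr, sr = patch // 2, search // 2
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + pad, x + pad
            ref = padded[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            num = den = 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    cand = padded[cy + dy - pr:cy + dy + pr + 1,
                                  cx + dx - pr:cx + dx + pr + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    num += w * padded[cy + dy, cx + dx]
                    den += w
            out[y, x] = num / den
    return out
```

For a 3D volume the same structure gains a third loop per axis, which is exactly why the cost explodes and a data-parallel implementation pays off.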

  10. A new way to characterize autostereoscopic 3D displays using Fourier optics instrument

    Boher, P.; Leroux, T.; Bignon, T.; Collomb-Patton, V.

    2009-02-01

    Auto-stereoscopic 3D displays currently offer the most attractive solution for entertainment and media consumption. Despite many studies devoted to this type of technology, efficient characterization methods are still missing. We present here an innovative optical method based on high-angular-resolution viewing-angle measurements with a Fourier optics instrument. This type of instrument allows measuring the full viewing-angle aperture of the display very rapidly and accurately. The system used in the study provides a very high angular resolution, below 0.04 degree, which is mandatory for this type of characterization. From the luminance or color viewing-angle measurements of the different views of the 3D display, we can predict what will be seen by an observer at any position in front of the display. Quality criteria are derived for both 3D and standard properties at any observer position, and the Qualified Stereo Viewing Space (QSVS) is determined. The use of viewing-angle measurements at different locations on the display surface during the observer computation gives a more realistic estimation of QSVS and ensures its validity for the entire display surface. Optimum viewing position, viewing freedom, color shifts and standard parameters are also quantified. Simulations of the moiré issues can be performed, leading to a better understanding of their origin.

  11. 3D optical printing of piezoelectric nanoparticle-polymer composite materials.

    Kim, Kanguk; Zhu, Wei; Qu, Xin; Aaronson, Chase; McCall, William R; Chen, Shaochen; Sirbuly, Donald J

    2014-10-28

    Here we demonstrate that efficient piezoelectric nanoparticle-polymer composite materials can be optically printed into three-dimensional (3D) microstructures using digital projection printing. Piezoelectric polymers were fabricated by incorporating barium titanate (BaTiO3, BTO) nanoparticles into photocurable polymer solutions such as polyethylene glycol diacrylate and exposing them to digital optical masks that could be dynamically altered to generate user-defined 3D microstructures. To enhance the mechanical-to-electrical conversion efficiency of the composites, the BTO nanoparticles were chemically modified with acrylate surface groups, which formed direct covalent linkages with the polymer matrix under light exposure. The composites with a 10% mass loading of the chemically modified BTO nanoparticles showed piezoelectric coefficients (d33) of ∼40 pC/N, which were over 10 times larger than those of composites synthesized with unmodified BTO nanoparticles and over 2 times larger than those of composites containing unmodified BTO nanoparticles and carbon nanotubes to boost mechanical stress transfer efficiencies. These results not only provide a tool for fabricating 3D piezoelectric polymers but also lay the groundwork for creating highly efficient piezoelectric polymer materials via nanointerfacial tuning.

  12. Automated 3D-Objectdocumentation on the Base of an Image Set

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point-search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. Using proper filter strategies, incorrect points are removed, and the relative orientation of the stereo model can be computed automatically. With the help of 3D reference points or distances on the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images.
The
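The ICP fitting of partial point clouds mentioned above alternates nearest-neighbour matching with a closed-form rigid alignment. The alignment step (the Kabsch/SVD solution) can be sketched as follows, assuming known correspondences for brevity:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ p + t ~ q
    for corresponding rows p of src and q of dst ((n, 3) arrays)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)       # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```

A full ICP loop would re-estimate correspondences (nearest neighbours), call `rigid_align`, apply the transform, and iterate until the mean residual stops decreasing.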

  13. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  14. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  15. Pre-Peak and Post-Peak Rock Strain Characteristics During Uniaxial Compression by 3D Digital Image Correlation

    Munoz, H.; Taheri, A.; Chanda, E. K.

    2016-07-01

    A non-contact optical method for strain measurement applying three-dimensional digital image correlation (3D DIC) in uniaxial compression is presented. A series of monotonic uniaxial compression tests under quasi-static loading conditions on Hawkesbury sandstone specimens was conducted. A prescribed constant lateral-strain rate controlling the applied axial load in a closed-loop system allowed capturing the complete stress-strain behaviour of the rock, i.e. the pre-peak and post-peak stress-strain regimes. 3D DIC uses two digital cameras to acquire images of the undeformed and deformed shape of an object, performs image analysis, and provides deformation and motion measurements. Observations showed that 3D DIC provides strains free from bedding error, in contrast to strains from LVDTs. Erroneous measurements due to the compliance of the compression machine are also eliminated. Furthermore, with the 3D DIC technique the relatively large strains that develop in the post-peak regime, in particular within localised zones, which are difficult to capture with bonded strain gauges, can be measured in a straightforward manner. The fields of strain and eventual strain localisation on the rock surface were analysed by the 3D DIC method, coupled with the respective stress levels in the rock. Field strain development in the rock samples, in both the axial and shear strain domains, suggested that strain localisation takes place progressively and develops at a lower rate in the pre-peak regime. By contrast, it accelerates in the post-peak regime, in association with the increasing rate of strength degradation. The results show that a major failure plane, due to strain localisation, becomes noticeable only long after the peak stress has taken place. In addition, post-peak stress-strain behaviour was observed to take the form of either localised strain in a shearing zone or inelastic unloading outside the shearing zone.
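The subset matching at the heart of DIC can be sketched with an integer-pixel zero-normalised cross-correlation (ZNCC) search; production DIC codes add subpixel interpolation and subset shape functions, which this generic illustration omits:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation of two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def match_subset(ref_img, def_img, y, x, half=5, search=8):
    """Integer-pixel displacement of the subset centred at (y, x),
    found by maximising ZNCC over a +/-search window."""
    ref = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            cand = def_img[y + dv - half:y + dv + half + 1,
                           x + du - half:x + du + half + 1]
            c = zncc(ref, cand)
            if c > best:
                best, best_uv = c, (dv, du)
    return best_uv
```

Repeating this over a grid of subsets on the speckle-painted specimen yields the displacement field from which strains (and strain localisation) are computed.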

  16. High-resolution 3D X-ray imaging of intracranial nitinol stents

    Snoeren, Rudolph M.; With, Peter H.N. de [Eindhoven University of Technology (TU/e), Faculty Electrical Engineering, Signal Processing Systems group (SPS), Eindhoven (Netherlands); Soederman, Michael [Karolinska University Hospital, Department of Neuroradiology, Stockholm (Sweden); Kroon, Johannes N.; Roijers, Ruben B.; Babic, Drazenko [Philips Healthcare, Best (Netherlands)

    2012-02-15

    To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat-detector imaging, an image-quality simulation and an in vitro study were carried out. Nitinol stents of various brands were placed inside an anthropomorphic head phantom using iodine contrast. Experiments with the objects were preceded by image-quality and dose simulations. We varied X-ray imaging parameters on a commercial interventional X-ray system to set the 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate stent contrast while keeping the absorbed dose below recommended values. Two detector formats were used, each paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High-contrast spatial resolution was assessed with a CT phantom. We found an optimal protocol for imaging intracranial nitinol stents, with contrast resolution optimized for nickel-titanium-containing stents. A spatial resolution exceeding 2.1 lp/mm allows the struts to be visualized. We obtained images of stents of various brands, and a representative set of images is shown. Independent of the make, struts can be imaged as virtually continuous strokes. Measured absorbed doses remain below a Computed Tomography Dose Index (CTDI) of 50 mGy. By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with a high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with the imaging simulations. (orig.)
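The contrast-to-noise ratio being optimized here can be illustrated with a simple region-of-interest computation; the masks, noise model and thresholds below are generic assumptions for illustration, not the paper's protocol:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    bg = image[background_mask]
    return abs(image[signal_mask].mean() - bg.mean()) / bg.std()

# Synthetic check: a bright strut-like stripe on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:34, :] += 5.0                       # "strut" raised 5 noise-sigmas
sig = np.zeros((64, 64), dtype=bool)
sig[30:34, :] = True
print(cnr(img, sig, ~sig))                 # roughly 5
```

Raising CNR by tuning beam quality or reconstruction kernel trades off against dose and spatial resolution, which is the balance the study explores.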

  17. Near-infrared chemical imaging (NIR-CI) of 3D printed pharmaceuticals

    Khorasani, Milad; Edinger, Magnus; Raijada, Dhara

    2016-01-01

    Hot-melt extrusion and 3D printing are enabling manufacturing approaches for patient-centred medicinal products. Hot-melt extrusion is a flexible, continuously operating technique and a crucial part of a typical processing cycle for printed medicines. In this work we use hot-melt extrusion for the manufacturing of medicinal films containing indomethacin (IND) and polycaprolactone (PCL), extruded strands with nitrofurantoin monohydrate (NFMH) and poly(ethylene oxide) (PEO), and feedstocks for 3D printed dosage forms with nitrofurantoin anhydrate (NFAH), hydroxyapatite (HA) and poly(lactic acid) (PLA). These feedstocks were printed into a prototype solid dosage form using a desktop 3D printer. The model formulations were characterized using near-infrared chemical imaging (NIR-CI) and, more specifically, the image analytical data were analysed using multivariate curve resolution-alternating least squares (MCR-ALS).
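MCR-ALS itself alternates two constrained least-squares steps, factoring the unfolded image (pixels × wavelengths) into non-negative concentration maps and component spectra. A bare-bones sketch, with non-negativity enforced by clipping (real implementations add normalization, initial spectral estimates and convergence checks):

```python
import numpy as np

def mcr_als(X, n_components, n_iter=200, seed=0):
    """Minimal MCR-ALS: X (pixels x wavelengths) ~ C @ S.T with C, S >= 0."""
    rng = np.random.default_rng(seed)
    S = rng.random((X.shape[1], n_components))                     # spectra guess
    for _ in range(n_iter):
        C = np.clip(X @ S @ np.linalg.pinv(S.T @ S), 0.0, None)    # concentrations
        S = np.clip(X.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)  # spectra
    return C, S

# Synthetic two-component mixture.
rng = np.random.default_rng(1)
C_true = rng.random((100, 2))
S_true = rng.random((30, 2))
X = C_true @ S_true.T
C, S = mcr_als(X, 2)
```

Reshaping a column of `C` back to the image grid yields the spatial distribution map of that component, which is how NIR-CI visualizes drug and excipient domains.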

  18. A 3-D visualization method for image-guided brain surgery.

    Bourbakis, N G; Awad, M

    2003-01-01

    This paper presents a 3D methodology for image-guided brain tumor surgery. The methodology is based on a visualization process that mimics the human surgeon's behaviour and decision-making. In particular, it first constructs a 3D representation of the tumor from segmented 2D MRI images. It then develops an optimal path for tumor extraction by minimizing the surgical effort and penetration area. A cost function incorporated in this process minimizes the damage to surrounding healthy tissue, taking into consideration the constraints of a new snake-like surgical tool proposed here. The tumor extraction method presented in this paper is compared with the ordinary approach used in brain surgery, which is based on a straight-line surgical tool. Illustrative examples based on realistic simulations demonstrate the advantages of the proposed 3D methodology.
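A toy version of such a cost function can rank candidate entry points by the damage accumulated along a straight path to the target; the 2D damage map, sampling density and candidate set below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def path_cost(damage, entry, target, n_samples=50):
    """Sum of damage-map values sampled along the straight line entry -> target."""
    t = np.linspace(0.0, 1.0, n_samples)
    rows = np.round(entry[0] + t * (target[0] - entry[0])).astype(int)
    cols = np.round(entry[1] + t * (target[1] - entry[1])).astype(int)
    return damage[rows, cols].sum()

def best_entry(damage, candidates, target):
    """Candidate entry point with the lowest accumulated damage."""
    return min(candidates, key=lambda e: path_cost(damage, e, target))

# A high-damage region lies between the left edge and the target,
# so entering from the top is cheaper than entering from the left.
damage = np.zeros((20, 20))
damage[8:13, 4:7] = 10.0
print(best_entry(damage, [(10, 0), (0, 10)], (10, 10)))  # (0, 10)
```

The paper's method extends this idea to 3D and to a flexible snake-like tool, so the path need not be a straight line at all.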

  19. Fast isotropic banding-free bSSFP imaging using 3D dynamically phase-cycled radial bSSFP (3D DYPR-SSFP)

    Benkert, Thomas; Blaimer, Martin; Breuer, Felix A. [Research Center Magnetic Resonance Bavaria (MRB), Wuerzburg (Germany); Ehses, Philipp [Tuebingen Univ. (Germany). Dept. of Neuroimaging; Max Planck Institute for Biological Cybernetics, Tuebingen (Germany). High-Field MR Center; Jakob, Peter M. [Research Center Magnetic Resonance Bavaria (MRB), Wuerzburg (Germany); Wuerzburg Univ. (Germany). Dept. of Experimental Physics 5

    2016-05-01

    Aims: Dynamically phase-cycled radial balanced steady-state free precession (DYPR-SSFP) is a method for efficient banding-artifact removal in bSSFP imaging. Based on a varying radiofrequency (RF) phase increment in combination with a radial trajectory, DYPR-SSFP yields a banding-free image from a single acquired k-space. The purpose of this work is to present an extension of this technique that enables fast three-dimensional isotropic banding-free bSSFP imaging. Methods: While banding-artifact removal with DYPR-SSFP relies on the applied dynamic phase cycle, this can itself lead to artifacts when the number of acquired projections falls below a certain limit. Using a 3D radial trajectory with quasi-random view ordering for image acquisition intrinsically solves this problem, enabling 3D DYPR-SSFP imaging at or even below the Nyquist criterion. The approach is validated for brain and knee imaging at 3 Tesla. Results: Volumetric, banding-free images were obtained in clinically acceptable scan times with an isotropic resolution of up to 0.56 mm. Conclusion: The combination of DYPR-SSFP with a 3D radial trajectory allows banding-free isotropic volumetric bSSFP imaging at no cost in scan time, making it a promising candidate for clinical applications such as imaging of the cranial nerves or articular cartilage.
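The quasi-random view ordering can be illustrated with a double-golden-means scheme, in which successive 3D radial spokes cover the sphere quasi-uniformly so that any contiguous subset of projections is usable for reconstruction. The 2D golden-means constants below are the commonly used values (an assumption; not necessarily this paper's exact ordering):

```python
import numpy as np

def golden_means_spokes(n):
    """Unit spoke directions for a 3D radial trajectory with quasi-random
    ordering based on the two 2D golden means (~0.4656 and ~0.6823)."""
    phi1, phi2 = 0.4656, 0.6823
    i = np.arange(n)
    z = 2.0 * ((i * phi1) % 1.0) - 1.0        # kz component, uniform in [-1, 1)
    az = 2.0 * np.pi * ((i * phi2) % 1.0)     # azimuthal angle
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(az), r * np.sin(az), z], axis=1)
```

Because neighbouring spokes in acquisition order point in very different directions, the dynamic phase cycle of DYPR-SSFP is spread incoherently over k-space rather than concentrated along adjacent views.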

  20. Three-Axis Distributed Fiber Optic Strain Measurement in 3D Woven Composite Structures

    Castellucci, Matt; Klute, Sandra; Lally, Evan M.; Froggatt, Mark E.; Lowry, David

    2013-01-01

    Recent advancements in composite materials technologies have broken further from traditional designs and require advanced instrumentation and analysis capabilities. Success or failure is highly dependent on design analysis and manufacturing processes. By monitoring smart structures throughout manufacturing and service life, residual and operational stresses can be assessed and structural integrity maintained. Composite smart structures can be manufactured by integrating fiber optic sensors into existing composite materials processes such as ply layup, filament winding and three-dimensional weaving. In this work, optical fiber was integrated into 3D woven composite parts at a commercial woven-products manufacturing facility. The fiber was then used to monitor the structures during a vacuum-assisted resin transfer moulding (VARTM) manufacturing process and subsequent static and dynamic testing. Low-cost telecommunications-grade optical fiber acts as the sensor, interrogated by a high-resolution commercial Optical Frequency Domain Reflectometer (OFDR) system that provides distributed strain measurement at spatial resolutions as low as 2 mm. Strain measurements from the optical fiber sensors are correlated with resistive strain gage measurements during static structural loading. Keywords: fiber optic, distributed strain sensing, Rayleigh scatter, optical frequency domain reflectometry
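In Rayleigh-scatter OFDR sensing, strain in each fiber segment is recovered from the spectral shift between a reference and a measurement backscatter spectrum. A sketch of that per-segment step follows; the shift-to-strain coefficient of about -0.15 GHz per microstrain is a typical value for standard single-mode fiber at 1550 nm and is an assumed constant here, not a value from the paper:

```python
import numpy as np

def strain_from_spectral_shift(ref_spec, meas_spec, df_ghz, k_ghz_per_ue=-0.15):
    """Microstrain of one fiber segment from the cross-correlation shift
    between its reference and measured Rayleigh backscatter spectra.
    df_ghz is the spectral bin width; k_ghz_per_ue is the (assumed)
    shift-to-strain coefficient."""
    n = len(ref_spec)
    xcorr = np.correlate(meas_spec - meas_spec.mean(),
                         ref_spec - ref_spec.mean(), mode="full")
    lag = int(np.argmax(xcorr)) - (n - 1)   # shift in spectral bins
    return lag * df_ghz / k_ghz_per_ue

# Synthetic check: a 3-bin shift with 0.1 GHz bins -> about -2 microstrain.
rng = np.random.default_rng(0)
ref = rng.random(256)
meas = np.roll(ref, 3)
```

Repeating this over a sliding window along the fiber yields the distributed strain profile; sub-bin resolution is obtained by interpolating the correlation peak.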

  1. Measuring Femoral Torsion In Vivo Using Freehand 3-D Ultrasound Imaging.

    Passmore, Elyse; Pandy, Marcus G; Graham, H Kerr; Sangeux, Morgan

    2016-02-01

    Despite variation in bone geometry, muscle and joint function is often investigated using generic musculoskeletal models. Patient-specific bone geometry can be obtained from computerised tomography, which involves ionising radiation, or magnetic resonance imaging (MRI), which is costly and time consuming. Freehand 3-D ultrasound provides an alternative means of obtaining bony geometry. The purpose of this study was to determine the accuracy and repeatability of 3-D ultrasound in measuring femoral torsion. Measurements of femoral torsion were performed on 10 healthy adults using MRI and 3-D ultrasound. Measurements of femoral torsion from 3-D ultrasound were, on average, smaller than those from MRI (mean difference = 1.8°; 95% confidence interval: -3.9°, 7.5°). MRI and 3-D ultrasound had Bland and Altman repeatability coefficients of 3.1° and 3.7°, respectively. Accurate measurements of femoral torsion were obtained with 3-D ultrasound, offering the potential to acquire patient-specific bone geometry for musculoskeletal modelling. Three-dimensional ultrasound is non-invasive, relatively inexpensive and can be integrated into gait analysis.
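The agreement statistics quoted (mean difference with its interval, and the repeatability coefficient) follow standard Bland-Altman calculations, sketched here on dummy data; the exact repeatability-coefficient convention is an assumption, as conventions differ between papers:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def repeatability_coefficient(x1, x2):
    """1.96 * sqrt(2) * within-subject SD, from paired repeat measurements
    (within-subject variance estimated as mean(d^2) / 2, assuming no bias)."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return 1.96 * np.sqrt(2.0) * np.sqrt((d**2).mean() / 2.0)

# Dummy femoral torsion angles (degrees) from two methods on the same subjects.
us = [20.0, 25.0, 18.0, 30.0]
mri = [22.0, 26.0, 21.0, 31.0]
bias, loa = bland_altman(us, mri)
```

A smaller repeatability coefficient means repeat measurements by the same method agree more closely, which is how the 3.1° (MRI) and 3.7° (ultrasound) figures should be read.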

  2. 3D space perception as embodied cognition in the history of art images

    Tyler, Christopher W.

    2014-02-01

    Embodied cognition is a concept that provides a deeper understanding of the aesthetics of art images. This study considers the role of embodied cognition in the appreciation of 3D pictorial space, 4D action space, its extension through mirror reflection to embodied self-cognition, and its relation to the neuroanatomical organization of the aesthetic response.

  3. Estimating 3D Object Parameters from 2D Grey-Level Images

    Houkes, Zweitze

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach combines 3D modelling, animation and estimation tools to determine the parameters of objects in a scene from 2D grey-level images. The animation tool predicts images.
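The predict-and-compare idea behind such a framework can be sketched with a toy renderer standing in for the animation tool; the renderer, the single size parameter and the exhaustive search below are illustrative assumptions, not the thesis's actual estimator:

```python
import numpy as np

def render(size, canvas=32):
    """Toy 'animation tool': render a centred white square of the given size."""
    img = np.zeros((canvas, canvas))
    lo, hi = canvas // 2 - size, canvas // 2 + size
    img[lo:hi, lo:hi] = 1.0
    return img

def estimate_size(observed, candidates):
    """Model-based estimation: pick the parameter whose predicted image
    best matches the observed image (least-squares residual)."""
    return min(candidates, key=lambda s: ((render(s) - observed) ** 2).sum())

obs = render(5)
print(estimate_size(obs, range(1, 10)))  # 5
```

Real systems replace the exhaustive search with gradient-based or Kalman-style updates, but the loop of predicting an image from the model and comparing it with the observation is the same.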