WorldWideScience

Sample records for 3d optical imaging

  1. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  2. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
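The cumulant computation underlying SOFI can be sketched in a few lines. This is a minimal illustration, not the authors' code: the second-order auto-cumulant of a blinking-fluorophore image sequence is simply the per-pixel temporal variance, and cross-cumulants between neighbouring pixels yield "virtual pixels" between the real ones, which is the mechanism the paper extends to increase depth sampling with 3D cross-cumulants.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI auto-cumulant: the temporal variance of each
    pixel's blinking trace. stack has shape (T, H, W)."""
    d = stack - stack.mean(axis=0)     # zero-mean fluctuations dF(r, t)
    return (d * d).mean(axis=0)        # C2(r) = <dF(r, t)^2>_t

def sofi2_cross(stack, dy=0, dx=1):
    """Second-order cross-cumulant between pixel pairs separated by
    (dy, dx) with dy, dx >= 0; the result behaves like a 'virtual pixel'
    midway between the two real pixels, which is how cross-cumulants
    increase the sampling density."""
    d = stack - stack.mean(axis=0)
    T, H, W = d.shape
    a = d[:, :H - dy, :W - dx]
    b = d[:, dy:, dx:]
    return (a * b).mean(axis=0)        # <dF(r, t) dF(r + s, t)>_t
```

Because uncorrelated background fluctuations have near-zero cumulants, only genuinely blinking emitters survive, which is what sharpens the effective point-spread function.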

  3. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  4. Progresses in 3D integral imaging with optical processing

    Energy Technology Data Exchange (ETDEWEB)

Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics, University of Valencia, Calle Doctor Moliner 50, E-46100 Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects of integral imaging systems and their interaction with the observer.

  5. Optical-CT imaging of complex 3D dose distributions

    Science.gov (United States)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with the potential to address this imbalance by providing high-resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a first-generation optical-CT scanner capable of high-resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions, including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable-attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattered laser light. Here we present the results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high-resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy 3% at high dose levels), indicating that further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.

  6. Confocal Image 3D Surface Measurement with Optical Fiber Plate

    Institute of Scientific and Technical Information of China (English)

    WANG Zhao; ZHU Sheng-cheng; LI Bing; TAN Yu-shan

    2004-01-01

A whole-field 3D surface measurement system for semiconductor wafer inspection is described. The system consists of an optical fiber plate, which splits the light beam into N² sub-beams to realize whole-field inspection. A special prism is used to separate the illumination light from the signal light. The setup is characterized by high precision, high speed and a simple structure.

  7. 3D tomographic breast imaging in-vivo using a handheld optical imager

    Science.gov (United States)

    Erickson, Sarah J.; Martinez, Sergio; Gonzalez, Jean; Roman, Manuela; Nunez, Annie; Godavarty, Anuradha

    2011-02-01

Hand-held optical imagers are currently being developed toward clinical imaging of breast tissue. However, the hand-held optical devices developed to date are not able to coregister the image to the tissue geometry for 3D tomography. We have developed a hand-held optical imager that has demonstrated automated coregistered imaging and 3D tomography in phantoms, and validated coregistered imaging in normal human subjects. Herein, automated coregistered imaging is performed on a normal human subject with a 0.45 cm3 spherical target filled with 1 μM indocyanine green (a fluorescent contrast agent) placed superficially underneath the flap of the breast tissue. The coregistered image data are used in an approximate extended Kalman filter (AEKF) based reconstruction algorithm to recover the 3D location of the target within the breast tissue geometry. The results demonstrate, for the first time, the feasibility of performing 3D tomographic imaging and recovering a fluorescent target in the breast tissue of a human subject using a hand-held optical imager. The significance of this work is toward clinical imaging of breast tissue for cancer diagnostics and therapy monitoring.

  8. Cytology 3D structure formation based on optical microscopy images

    Science.gov (United States)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

The article is devoted to optimizing the imaging parameters for biological preparations in optical microscopy, using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations is proposed. Based on the experimental results, the optimum number of layers for scanning the object in depth while preserving a holistic perception of it was determined.

  9. Joint Applied Optics and Chinese Optics Letters Feature Introduction: Digital Holography and 3D Imaging

    Institute of Scientific and Technical Information of China (English)

    Ting-Chung Poon; Changhe Zhou; Toyohiko Yatagai; Byoungho Lee; Hongchen Zhai

    2011-01-01

This feature issue is the fifth installment on digital holography since its inception four years ago. The last four issues have been published after the conclusion of each Topical Meeting "Digital Holography and 3D Imaging (DH)". However, this feature issue includes a new key feature: it is a joint Applied Optics and Chinese Optics Letters feature issue. The DH Topical Meeting is the world's premier forum for disseminating the science and technology geared towards digital holography and 3D information processing. Since the meeting's inception in 2007, it has steadily and healthily grown, reaching 130 presentations this year in Tokyo, Japan, in May 2011.

  10. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
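The two analysis steps in the abstract (SVD of the imaging operator, and sparsity-preferring reconstruction) can be illustrated on a toy problem. The operator below is a random stand-in, not the paper's actual system matrix, and ISTA is used as a generic l1-norm solver; dimensions and penalty weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_vox = 15, 30                 # few detectors, more voxels than measurements
A = rng.normal(size=(n_meas, n_vox))   # stand-in for the real imaging operator

# (i) SVD of the imaging operator: the number of significant singular
# values bounds the complexity of objects the system can reconstruct.
s = np.linalg.svd(A, compute_uv=False)
rank = int((s > 1e-10 * s[0]).sum())

# (ii) A sparse object: three "point targets" in the field of view.
x_true = np.zeros(n_vox)
x_true[[4, 15, 22]] = [1.0, 0.8, 1.2]
y = A @ x_true

# The minimum-norm (pseudo-inverse) solution spreads energy over all voxels...
x_pinv = np.linalg.pinv(A) @ y

# ...while an l1-promoting solver (ISTA) prefers sparse reconstructions.
L = s[0] ** 2                          # Lipschitz constant of the gradient
lam = 0.05                             # l1 penalty weight
x = np.zeros(n_vox)
for _ in range(3000):
    x = x + A.T @ (y - A @ x) / L                          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
```

On such an underdetermined system the l1 solution recovers the point targets far more faithfully than the minimum-norm solution, mirroring the comparison reported above.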

  11. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

Reconstruction of the three-dimensional (3D) surface of an object is widely used for structural analysis in science, and many biological questions require information about true 3D structure. For scanning electron microscopy (SEM), there has been no efficient, non-destructive solution for reconstructing the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of SEM. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and is suitable for various applications in research and teaching.

  12. Building 3D aerial image in photoresist with reconstructed mask image acquired with optical microscope

    Science.gov (United States)

    Chou, C. S.; Tang, Y. P.; Chu, F. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

Calibration of mask images on the wafer becomes more important as features shrink. Two major types of metrology have been commonly adopted. One is to measure the mask image with a scanning electron microscope (SEM) to obtain the contours on the mask and then simulate the wafer image with an optical simulator. The other is to use an optical imaging tool, the Aerial Image Measurement System (AIMS™), to emulate the image on the wafer. However, the SEM method is indirect: it gathers only planar contours on a mask, with no consideration of optical characteristics such as the 3D topography structure. Hence, the image on the wafer is not predicted precisely. The AIMS™ method can directly measure the intensity at the near field of a mask, but the image measured this way is not quite the same as that on the wafer, owing to reflections and refractions in the films on the wafer. Here, a new approach is proposed to emulate the image on the wafer more precisely. The behavior of plane waves at different oblique angles is well known inside and between planar film stacks. In an optical microscope imaging system, plane waves can be extracted from the pupil plane with a coherent point source of illumination. Once the plane waves for a specific coherent illumination are analyzed, the partially coherent component of the waves can be reconstructed with a proper transfer function, which includes lens aberration, polarization, and reflection and refraction in the films. With this new method we can transfer the near light field of a mask into an image on the wafer without the disadvantages of indirect SEM measurement, such as neglecting the effects of mask topography and of reflections and refractions in the wafer film stacks. Furthermore, with this precise latent image, a separated resist model also becomes more achievable.
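The statement that "the behavior of plane waves at different oblique angles is well known inside and between planar film stacks" refers to standard thin-film theory. As a hedged illustration (not the authors' transfer function, which additionally folds in lens aberration and polarization), the characteristic-matrix method below computes the reflectance of an arbitrary planar film stack for s-polarized plane waves at oblique incidence:

```python
import numpy as np

def stack_reflectance_s(n0, layers, ns, wavelength, theta0=0.0):
    """Reflectance of a planar film stack for s-polarized plane waves,
    via the Abeles characteristic-matrix method.

    n0, ns:  refractive indices of the incident medium and substrate.
    layers:  list of (refractive_index, thickness) pairs, top to bottom.
    theta0:  angle of incidence in radians.
    """
    k0 = 2 * np.pi / wavelength
    sin0 = n0 * np.sin(theta0)                    # Snell invariant n*sin(theta)
    def eta(n):                                   # s-pol tilted admittance n*cos(theta)
        return n * np.sqrt(1 - (sin0 / n) ** 2 + 0j)
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = k0 * d * eta(n)                   # phase thickness of the layer
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / eta(n)],
                       [1j * eta(n) * np.sin(delta), np.cos(delta)]])
        M = M @ Mj
    B, C = M @ np.array([1.0, eta(ns)])           # stack + substrate admittance
    r = (eta(n0) * B - C) / (eta(n0) * B + C)     # amplitude reflection coefficient
    return abs(r) ** 2
```

For example, a bare air-glass interface gives the familiar 4% reflectance, and a quarter-wave MgF2 coating (n = 1.38) reduces it to about 1.4%, the kind of film-stack effect the proposed method accounts for on the wafer side.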

  13. Deformation analysis of 3D tagged cardiac images using an optical flow method

    Directory of Open Access Journals (Sweden)

    Gorman Robert C

    2010-03-01

Full Text Available. Abstract. Background: This study proposes and validates a method of measuring 3D strain in myocardium using a 3D Cardiovascular Magnetic Resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods: Initially, a 3D tag MR sequence was developed and the parameters of the sequence and 3D OFM were optimized using phantom images with simulated deformation. The method was then validated in-vivo and utilized to quantify normal sheep left ventricular functions. Results: Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square (RMS) error between the estimated and known displacements in the x (RMSx = 0.62 pixels (0.43 mm)), y (RMSy = 0.64 pixels (0.45 mm)) and z (RMSz = 0.68 pixels (1 mm)) directions, respectively. In-vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motions resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall, and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion: A novel method was developed to measure 3D LV wall deformation rapidly with high in-plane and through-plane resolution from one 3D cine acquisition.
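The per-axis RMS figures quoted above (RMSx, RMSy, RMSz) follow directly from comparing an estimated displacement field with the known phantom deformation; a minimal helper for this comparison might look like:

```python
import numpy as np

def displacement_rms(est, ref):
    """Per-axis RMS error between estimated and reference displacement
    fields stored as arrays of shape (..., 3), with the last axis holding
    the (x, y, z) displacement components. Returns (RMSx, RMSy, RMSz)."""
    err = est - ref
    return tuple(float(np.sqrt(np.mean(err[..., i] ** 2))) for i in range(3))
```

Applied voxel-wise over a phantom volume, this reproduces the kind of sub-pixel validation metric reported in the Results.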

  14. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

A new vision coordinate measuring system, a single-camera 3-D coordinate measuring system based on optical probe imaging, is presented, along with a new idea in vision coordinate measurement. A linear model is deduced that can distinguish the six degrees of freedom of the optical probe so as to realize coordinate measurement of the object surface. The effects of several factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is workable.

  15. A 3D integral imaging optical see-through head-mounted display.

    Science.gov (United States)

    Hua, Hong; Javidi, Bahram

    2014-06-02

An optical see-through head-mounted display (OST-HMD), which enables optical superposition of digital information onto the direct view of the physical world and maintains see-through vision to the real world, is a vital component in an augmented reality (AR) system. A key limitation of state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem, caused by the fact that the image source in most existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD design by combining recent advances in freeform optical technology with the microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for the HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of the emerging freeform optical technology, our approach results in a compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and visual fatigue. A proof-of-concept prototype system is demonstrated, which offers a goggle-like compact form factor, a non-obstructive see-through field of view, and a true 3D virtual display.

  16. Quantification of smoothing requirement for 3D optic flow calculation of volumetric images

    DEFF Research Database (Denmark)

    Bab-Hadiashar, Alireza; Tennakoon, Ruwan B.; de Bruijne, Marleen

    2013-01-01

Complexities of dynamic volumetric imaging challenge the available computer vision techniques on a number of different fronts. This paper examines the relationship between the estimation accuracy and the required amount of smoothness for a general solution from a robust statistics perspective. We show that a (surprisingly) small amount of local smoothing is required to satisfy both the necessary and sufficient conditions for accurate optic flow estimation. This notion is called 'just enough' smoothing, and its proper implementation has a profound effect on the preservation of local information in processing 3D dynamic scans. To demonstrate the effect of 'just enough' smoothing, a robust 3D optic flow method with quantized local smoothing is presented, and the effect of local smoothing on the accuracy of motion estimation in dynamic lung CT images is examined using both synthetic and real image sequences.

  17. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Energy Technology Data Exchange (ETDEWEB)

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
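The multivariate curve resolution step described above factors the hyperspectral data into per-species concentration maps and emission spectra. As a simplified, hedged stand-in for MCR-ALS (the production analysis adds constraints and initialization strategies not shown here), plain nonnegative matrix factorization with multiplicative updates captures the idea:

```python
import numpy as np

def nmf_unmix(D, k, iters=1000, seed=0):
    """Factor hyperspectral data D (pixels x wavelengths) into k emitting
    species: D ~= C @ S, with nonnegative concentration maps C and
    spectra S. Lee-Seung multiplicative updates; a simplified stand-in
    for the MCR analysis described in the abstract."""
    rng = np.random.default_rng(seed)
    n, m = D.shape
    C = rng.random((n, k)) + 1e-3      # nonnegative concentrations
    S = rng.random((k, m)) + 1e-3      # nonnegative spectra
    for _ in range(iters):
        S *= (C.T @ D) / (C.T @ C @ S + 1e-12)   # update spectra
        C *= (D @ S.T) / (C @ S @ S.T + 1e-12)   # update concentrations
    return C, S
```

Because every pixel's spectrum is modeled as a nonnegative mixture, highly overlapped fluorophores can be separated without bandpass filters, which is the key advantage claimed for the hyperspectral approach.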

  18. Development of scanning laser sensor for underwater 3D imaging with the coaxial optics

    Science.gov (United States)

    Ochimizu, Hideaki; Imaki, Masaharu; Kameyama, Shumpei; Saito, Takashi; Ishibashi, Shoujirou; Yoshida, Hiroshi

    2014-06-01

We have developed a scanning laser sensor for underwater 3-D imaging with a wide scanning angle of 120° (horizontal) × 30° (vertical) and a compact size of 25 cm diameter and 60 cm length. Our system has a dome lens and coaxial optics to realize both the wide scanning angle and the compactness. The system also features a sensitivity time control (STC) circuit, in which the receiving gain is increased according to the time of flight. The STC circuit helps detect small signals by suppressing the unwanted signals backscattered by marine snow. We demonstrated the system performance in a pool, and confirmed 3-D imaging at a distance of 20 m. Furthermore, the system was mounted on an autonomous underwater vehicle (AUV) and demonstrated seafloor mapping at a depth of 100 m in the ocean.
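The STC idea can be sketched with a simple range-dependent gain curve. This is an illustrative model, not the paper's circuit: late echoes (long time of flight) are boosted to offset two-way geometric spreading and exponential attenuation in water; the attenuation coefficient and sound-independent constants below are assumed values.

```python
import numpy as np

def stc_gain(t, alpha=0.05, c_water=2.25e8, g_min=1.0):
    """Illustrative sensitivity-time-control gain vs. time of flight t (s).

    Compensates two-way spreading (r^2) and water attenuation
    exp(2*alpha*r); alpha in 1/m, c_water is the speed of light in
    water (~c/1.33) since this is a laser sensor."""
    r = np.maximum(c_water * t / 2, 1e-3)   # one-way range in metres
    return g_min * r ** 2 * np.exp(2 * alpha * r)
```

Because the gain ramps up only for distant returns, near-range backscatter from marine snow is relatively suppressed, which is the effect described above.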

  19. Pico-projector-based optical sectioning microscopy for 3D chlorophyll fluorescence imaging of mesophyll cells

    Science.gov (United States)

    Chen, Szu-Yu; Hsu, Yu John; Yeh, Chia-Hua; Chen, S.-Wei; Chung, Chien-Han

    2015-03-01

A pico-projector-based optical sectioning microscope (POSM) was constructed, using a pico-projector to generate the structured illumination patterns. A net rate of 5.8 × 106 pixel/s and sub-micron spatial resolution in three dimensions (3D) were achieved. Exploiting the pico-projector's flexibility in pattern generation, the characteristics of POSM with different modulation periods and at different imaging depths were measured and discussed. With the application of different modulation periods, 3D chlorophyll fluorescence imaging of mesophyll cells was carried out in freshly plucked leaves of four species without sectioning or staining. For each leaf, an average penetration depth of 120 μm was achieved. By increasing the modulation period along with the imaging depth, optical sectioning images can be obtained with a compromise between axial resolution and signal-to-noise ratio. After ∼30 min of imaging of the same area, photodamage was hardly observed. Taking advantage of the high speed and low photodamage of POSM, the dynamic fluorescence responses to temperature changes were investigated under three different treatment temperatures. The three embedded blue, green and red light-emitting diode light sources were used to observe the responses of the leaves under different excitation wavelengths.
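Structured-illumination optical sectioning of the kind used here is conventionally computed by square-law demodulation of three images taken with the projected grid shifted in phase (Neil et al.); assuming that standard scheme, the core reduction is:

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Optical sectioning by square-law demodulation of three images
    acquired with the illumination grid shifted by 0, 1/3 and 2/3 of
    its period. In-focus (modulated) structure survives; out-of-focus
    background, which carries no modulation, cancels exactly."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) / np.sqrt(2)
```

For pixels whose intensity is b + m·cos(φ + k·2π/3), the result is 1.5·m regardless of the background b or local phase φ, which is why only the in-focus modulated signal appears in the sectioned image.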

  20. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    Science.gov (United States)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm, with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
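The optical-flow principle behind 3D OFM, solving the brightness-constancy equation ∇I·v = -∂I/∂t over a local neighbourhood, can be sketched with a single-window 3D Lucas-Kanade estimate. This is a minimal illustration of the principle, not the paper's modified 3D OFM algorithm:

```python
import numpy as np

def lucas_kanade_3d(vol1, vol2, center, radius=2):
    """Estimate the local 3D displacement of the patch around `center`
    between two volumes by least-squares on the brightness-constancy
    constraint: for each voxel in the window, grad(I) . v = -(I2 - I1).
    Returns v = (dz, dy, dx)."""
    gz, gy, gx = np.gradient(vol1.astype(float))       # spatial gradients
    gt = vol2.astype(float) - vol1.astype(float)       # temporal difference
    sl = tuple(slice(c - radius, c + radius + 1) for c in center)
    A = np.stack([g[sl].ravel() for g in (gz, gy, gx)], axis=1)
    b = -gt[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

Applied voxel-by-voxel over expiration/inspiration CT pairs, this kind of solve yields the dense displacement field whose histogram the study analyses.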

  1. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images.

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N; Zangwill, Linda M

    2014-03-18

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  2. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.
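The role of the Markov Random Field prior, making each voxel's change label depend on its neighbours so that isolated false positives are suppressed, can be illustrated with a simple iterated-conditional-modes (ICM) sketch on a 2D slice. This is a generic Ising-prior illustration, not the authors' model or inference scheme:

```python
import numpy as np

def icm_regularize(evidence, beta=1.0, iters=10):
    """Regularize a binary change map with an Ising (MRF) prior via
    iterated conditional modes. `evidence` holds the per-pixel
    log-likelihood ratio of change (>0 favours 'change'); each pixel's
    label balances that evidence against agreement with its
    4-neighbourhood, weighted by `beta`."""
    labels = (evidence > 0).astype(float)
    for _ in range(iters):
        # number of 'change' labels among the 4 neighbours (toroidal edges)
        nb = sum(np.roll(labels, s, axis=a) for s in (-1, 1) for a in (0, 1))
        # net benefit of label 1 over label 0: data term + prior term
        gain = evidence + beta * (2 * nb - 4)
        labels = (gain > 0).astype(float)
    return labels.astype(int)
```

An isolated "change" pixel surrounded by "no change" is flipped off by the prior, while a weakly supported pixel inside a changed region is flipped on, which is exactly the spatial-dependency effect the framework exploits before the fuzzy classification step.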

  3. A 3D approach to reconstruct continuous optical images using lidar and MODIS

    Institute of Scientific and Technical Information of China (English)

Huang, HuaGuo; Lian, Jun

    2015-01-01

Background: Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs. Methods: To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations by the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, Da Xing'An Ling Mountain in Inner Mongolia, China. The canopy height model (CHM) from lidar data was used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter at breast height (DBH), lowest branch height and leaf area index (LAI). Series of Landsat images were used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated with available satellite images. Results: Evaluations of spatial texture, spectral values and directional reflectance showed comparable results. Conclusions: The study provides a proof-of-concept approach to link lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolutions of image reconstruction in forest-dominated areas.

  4. A 3D approach to reconstruct continuous optical images using lidar and MODIS

    Directory of Open Access Journals (Sweden)

    HuaGuo Huang

    2015-06-01

    Full Text Available Background: Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs. Methods: To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations with the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, Da Xing'an Ling Mountain, Inner Mongolia, China. The canopy height model (CHM) from lidar data was used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter at breast height (DBH), lowest branch height and leaf area index (LAI). A series of Landsat images was used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated against available satellite images. Results: Evaluations of spatial texture, spectral values and directional reflectance showed comparable results. Conclusions: The study provides a proof-of-concept approach to link lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolution image reconstruction in forest-dominated areas.

  5. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Directory of Open Access Journals (Sweden)

    Paoli Alessandro

    2011-02-01

    Full Text Available Abstract Background: Precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure is the loss of accuracy in transferring CT planning information to the surgical field through custom-made stereolithographic surgical guides. Methods: In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information to the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results: A clinical case, relating to a fully edentulous jaw patient, was used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide was designed to place implants in the bone structure of the patient. The analysis of the results allowed the clinician to monitor all the errors occurring at each step of manufacturing the physical templates. Conclusions: The use of an optical scanner, which has higher resolution and accuracy than CT scanning, has proved to be a valid support to control the precision of the various physical models adopted and to point out possible error sources. A case study of a fully edentulous patient has confirmed the feasibility of the proposed methodology.

  6. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier optics viewing angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, viewing angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and crosstalk homogeneity across the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Simulation of the display's appearance using the viewing angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections such as scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
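
    One common definition of stereoscopic crosstalk, consistent with the luminance measurements described here, is the leakage from the unintended view relative to the intended view's signal after subtracting the display's black level. The luminance values below are hypothetical, purely to illustrate the arithmetic.

```python
def crosstalk(lum_leak, lum_signal, lum_black):
    """Crosstalk ratio: luminance leaking from the unintended view,
    relative to the intended view's signal, black-level corrected."""
    return (lum_leak - lum_black) / (lum_signal - lum_black)

# Hypothetical measurements in cd/m^2: leakage, full signal, black level.
print(round(100 * crosstalk(12.0, 180.0, 2.0), 2))  # → 5.62 (percent)
```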

  7. Analytical models of icosahedral shells for 3D optical imaging of viruses

    CERN Document Server

    Jafarpour, Aliakbar

    2014-01-01

    A modulated icosahedral shell with an inclusion is a concise description of many viruses, including recently-discovered large double-stranded DNA ones. Many X-ray scattering patterns of such viruses show major polygonal fringes, which can be reproduced in image reconstruction with a homogeneous icosahedral shell. A key question regarding a low-resolution reconstruction is how to introduce further changes to the 3D profile in an efficient way with only a few parameters. Here, we derive and compile different analytical models of such an object with consideration of practical optical setups and typical structures of such viruses. The benefits of such models include 1) inherent filtering and suppressing different numerical errors of a discrete grid, 2) providing a concise and meaningful set of descriptors for feature extraction in high-throughput classification/sorting and higher-resolution cumulative reconstructions, 3) disentangling (physical) resolution from (numerical) discretization step and having a vector ...

  8. Editorial: 3DIM-DS 2015: Optical image processing in the context of 3D imaging, metrology, and data security

    Science.gov (United States)

    Alfalou, Ayman

    2017-02-01

    Following the first International Symposium on 3D Imaging, Metrology, and Data Security (3DIM-DS) held in Shenzhen in September 2015, this special issue gathers a series of articles dealing with the main topics discussed during this symposium. These topics highlighted the importance of studying complex data treatment systems and intensive calculations designed for high dimensional imaging and metrology, for which high image quality and high transmission speed become critical issues in a number of technological applications. A second purpose was to celebrate the International Year of Light by emphasizing the important role of optics in current information processing systems.

  9. 3D Curvelet-Based Segmentation and Quantification of Drusen in Optical Coherence Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Esmaeili

    2017-01-01

    Full Text Available Spectral-Domain Optical Coherence Tomography (SD-OCT) is a widely used interferometric diagnostic technique in ophthalmology that provides novel in vivo information on depth-resolved inner and outer retinal structures. This imaging modality can assist clinicians in monitoring the progression of Age-related Macular Degeneration (AMD) by providing high-resolution visualization of drusen. Quantitative tools for assessing drusen volume that are indicative of AMD progression may lead to appropriate metrics for selecting treatment protocols. To address this need, a fully automated algorithm was developed to segment drusen area and volume from SD-OCT images. The proposed algorithm consists of three parts: (1) preprocessing, which includes creating a binary mask and removing the possibly highly reflective posterior hyaloid, used for accurate detection of the inner segment/outer segment (IS/OS) junction and Bruch's membrane (BM) retinal layers; (2) coarse segmentation, in which the 3D curvelet transform and graph theory are employed to obtain candidate drusenoid regions; (3) fine segmentation, in which morphological operators are used to remove falsely extracted elongated structures and refine the segmentation results. The proposed method was evaluated on 20 publicly available volumetric scans acquired with a Bioptigen spectral-domain ophthalmic imaging system. The average true positive and false positive volume fractions (TPVF and FPVF) for the segmentation of drusenoid regions were found to be 89.15% ± 3.76% and 0.17% ± 0.18%, respectively.
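
    The fine-segmentation step, removing falsely extracted elongated structures, can be sketched as follows. This is not the paper's morphological pipeline: a simple bounding-box aspect-ratio test on connected components (via `scipy.ndimage`) stands in for it, and the mask, threshold and `max_aspect` value are illustrative.

```python
import numpy as np
from scipy import ndimage

def remove_elongated(mask, max_aspect=4.0):
    """Drop candidate regions whose bounding-box aspect ratio marks
    them as elongated (vessel-like) rather than blob-like (drusen-like)."""
    labels, n = ndimage.label(mask)
    out = np.zeros_like(mask)
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if max(h, w) / min(h, w) <= max_aspect:
            out[sl] |= mask[sl]
    return out

mask = np.zeros((20, 20), dtype=bool)
mask[5:9, 5:9] = True     # compact, drusen-like blob (kept)
mask[15, 2:18] = True     # thin elongated streak (removed)
clean = remove_elongated(mask)
print(bool(clean[6, 6]), bool(clean[15, 10]))  # → True False
```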

  10. Multimodal photoacoustic and optical coherence tomography scanner using an all optical detection scheme for 3D morphological skin imaging.

    Science.gov (United States)

    Zhang, Edward Z; Povazay, Boris; Laufer, Jan; Alex, Aneesh; Hofer, Bernd; Pedley, Barbara; Glittenberg, Carl; Treeby, Bradley; Cox, Ben; Beard, Paul; Drexler, Wolfgang

    2011-08-01

    A noninvasive, multimodal photoacoustic and optical coherence tomography (PAT/OCT) scanner for three-dimensional (3D) in vivo skin imaging is described. The system employs an integrated, all-optical detection scheme for both modalities in backward mode, utilizing a shared 2D optical scanner with a field of view of ~13 × 13 mm². The photoacoustic waves were detected using a Fabry-Perot polymer film ultrasound sensor placed on the surface of the skin. The sensor is transparent in the spectral range 590-1200 nm. This permits the photoacoustic excitation beam (670-680 nm) and the OCT probe beam (1050 nm) to be transmitted through the sensor head into the underlying tissue, thus providing a backward-mode imaging configuration. The respective OCT and PAT axial resolutions were 8 and 20 µm, and the lateral resolutions were 18 and 50-100 µm. The system provides greater penetration depth than previous combined PA/OCT devices due to the longer wavelength of the OCT beam (1050 nm rather than 829-870 nm) and by operating in the tomographic rather than the optical-resolution mode of photoacoustic imaging. Three-dimensional in vivo images of the vasculature and the surrounding tissue micro-morphology in murine and human skin were acquired. These studies demonstrated the complementary contrast and tissue information provided by each modality for high-resolution 3D imaging of vascular structures to depths of up to 5 mm. Potential applications include characterizing skin conditions such as tumors, vascular lesions, soft tissue damage such as burns and wounds, inflammatory conditions such as dermatitis, and other superficial tissue abnormalities.

  11. 3D automatic quantification applied to optically sectioned images to improve microscopy analysis

    Directory of Open Access Journals (Sweden)

    JE Diaz-Zamboni

    2009-08-01

    Full Text Available New fluorescence microscopy techniques, such as confocal or digital deconvolution microscopy, make it easy to obtain three-dimensional (3D) information from specimens. However, there are few 3D quantification tools for extracting information from these volumes, so the amount of information acquired by these techniques is difficult to manipulate and analyze manually. The present study describes a model-based method which, for the first time, shows 3D visualization and quantification of fluorescent apoptotic body signals from optical serial sections of porcine hepatocyte spheroids, correlating them to their morphological structures. The method consists of an algorithm that counts apoptotic bodies in a spheroid structure and extracts information from them, such as their centroids in Cartesian and radial coordinates, relative to the spheroid centre, and their integrated intensity. 3D visualization of the extracted information allowed us to quantify the distribution of apoptotic bodies in three different zones of the spheroid.
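
    The per-body measures described (count, centroid, radial distance to the spheroid centre, integrated intensity) can be sketched with `scipy.ndimage`. The threshold and the toy volume are invented; the paper's actual segmentation of apoptotic bodies is not specified here.

```python
import numpy as np
from scipy import ndimage

def quantify_bodies(volume, threshold):
    """Count bright bodies in a 3D stack and report, per body, the
    centroid (z, y, x), radial distance to the volume centre, and
    integrated intensity."""
    labels, n = ndimage.label(volume > threshold)
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(volume, labels, idx)
    sums = ndimage.sum(volume, labels, idx)      # integrated intensity
    centre = (np.array(volume.shape) - 1) / 2.0
    radii = [np.linalg.norm(np.array(c) - centre) for c in centroids]
    return n, centroids, radii, sums

# Toy volume with two point-like "apoptotic bodies".
vol = np.zeros((9, 9, 9))
vol[2, 2, 2] = 5.0
vol[6, 6, 6] = 3.0
n, cents, radii, sums = quantify_bodies(vol, 1.0)
print(n)  # → 2
```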

  12. Combining 3D optical imaging and dual energy absorptiometry to measure three compositional components

    Science.gov (United States)

    Malkov, Serghei; Shepherd, John

    2014-02-01

    We report on the design of a technique combining 3D optical imaging and dual-energy absorptiometry body scanning to estimate local body-area composition in three compartments. Dual-energy attenuation and body shape measures are used together to solve for the three compositional tissue thicknesses: water, lipid, and protein. We designed phantoms with tissue-like properties as reference standards for calibration purposes. The calibration was created by fitting phantom values using non-linear regression with quadratic and truncated polynomials. Dual-energy measurements were performed on tissue-mimicking phantoms using a bone densitometer unit. The phantoms were made of materials shown to have x-ray attenuation properties similar to the biological compositional compartments. The components of the solid phantom were tested, and their high-energy/low-energy attenuation ratios correspond well to those of water, lipid, and protein in the densitometer x-ray region. The three-dimensional body shape was reconstructed from depth maps generated by a Microsoft Kinect for Windows. We used the open-source Point Cloud Library and freeware software to produce dense point clouds. Accuracy and precision of the compositional and thickness measures were calculated, and the error contributions of the two modalities were estimated. The preliminary phantom composition and shape measurements demonstrate the feasibility of the proposed method.
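
    The core inversion, two attenuation measurements plus an optically measured total thickness yielding three tissue thicknesses, can be sketched as a linear 3 × 3 system. The attenuation coefficients below are invented placeholders, not the phantom-derived calibration of the paper (which used non-linear regression); the linear model just keeps the sketch short.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of water, lipid
# and protein at the densitometer's low and high energies.
MU_LO = np.array([0.30, 0.25, 0.40])
MU_HI = np.array([0.20, 0.18, 0.25])

def solve_thicknesses(atten_lo, atten_hi, total_thickness):
    """Solve for (water, lipid, protein) thicknesses from two
    log-attenuation measurements and the optical total thickness."""
    A = np.vstack([MU_LO, MU_HI, np.ones(3)])
    b = np.array([atten_lo, atten_hi, total_thickness])
    return np.linalg.solve(A, b)

# Forward-simulate 2 cm water, 1 cm lipid, 0.5 cm protein, then invert.
t_true = np.array([2.0, 1.0, 0.5])
lo, hi = MU_LO @ t_true, MU_HI @ t_true
t = solve_thicknesses(lo, hi, t_true.sum())
print(np.round(t, 6))
```

    The third row of the system is where the 3D optical shape measurement enters: without the total-thickness constraint, two x-ray energies alone cannot separate three compartments.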

  13. Multicolor 3D super-resolution imaging by quantum dot stochastic optical reconstruction microscopy.

    Science.gov (United States)

    Xu, Jianquan; Tehrani, Kayvan F; Kner, Peter

    2015-03-24

    We demonstrate multicolor three-dimensional super-resolution imaging with quantum dots (QSTORM). By combining quantum dot asynchronous spectral blueing with stochastic optical reconstruction microscopy and adaptive optics, we achieve three-dimensional imaging with 24 nm lateral and 37 nm axial resolution. By pairing two short-pass filters with two appropriate quantum dots, we are able to image single blueing quantum dots on two channels simultaneously, enabling multicolor imaging with high photon counts.

  14. 3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy

    KAUST Repository

    Zhang, Yibo

    2017-08-12

    High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition, and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm². The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings.

  15. High resolution 3D imaging of living cells with sub-optical wavelength phonons

    Science.gov (United States)

    Pérez-Cota, Fernando; Smith, Richard J.; Moradi, Emilia; Marques, Leonel; Webb, Kevin F.; Clark, Matt

    2016-12-01

    Label-free imaging of living cells below the optical diffraction limit poses great challenges for optical microscopy. Biologically relevant structural information remains below the Rayleigh limit and beyond the reach of conventional microscopes. Super-resolution techniques are typically based on the non-linear and stochastic response of fluorescent labels, which can be toxic and interfere with cell function. In this paper we present, for the first time, imaging of live cells using sub-optical-wavelength phonons. The axial imaging resolution of our system is determined by the acoustic wavelength (λa = λprobe/2n) and not by the NA of the optics, allowing sub-optical-wavelength acoustic sectioning of samples using the time of flight. The transverse resolution is currently limited to the optical spot size. The contrast mechanism is significantly determined by the mechanical properties of the cells and requires no additional contrast agent, stain or label to image the cell structure. The ability to breach the optical diffraction limit to image living cells acoustically promises to bring a new suite of imaging technologies to bear in answering exigent questions in cell biology and biomedicine.
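
    The stated axial-resolution relation λa = λprobe/2n can be evaluated directly. The 780 nm probe wavelength and refractive index n = 1.38 below are illustrative values, not taken from the paper.

```python
def acoustic_wavelength(probe_wavelength_nm, refractive_index):
    """Axial sampling scale of the phonon probe: lambda_a = lambda_probe / (2 n)."""
    return probe_wavelength_nm / (2.0 * refractive_index)

# e.g. a 780 nm probe in a cell-like medium (n ~ 1.38) senses
# ~283 nm acoustic wavelengths, below the optical diffraction limit.
print(round(acoustic_wavelength(780.0, 1.38), 1))  # → 282.6
```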

  16. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available to medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any desired slice after the scan has been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in clinical use as 2-D imaging. A limiting factor has traditionally been the low image quality achievable … to produce high quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners is one of the factors behind the widespread use of ultrasound imaging. The high price tag on high quality 3-D …

  17. Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI).

    Science.gov (United States)

    Dertinger, T; Colyer, R; Iyer, G; Weiss, S; Enderlein, J

    2009-12-29

    Super-resolution optical microscopy is a rapidly evolving area of fluorescence microscopy with a tremendous potential for impacting many fields of science. Several super-resolution methods have been developed over the last decade, all capable of overcoming the fundamental diffraction limit of light. We present here an approach for obtaining subdiffraction limit optical resolution in all three dimensions. This method relies on higher-order statistical analysis of temporal fluctuations (caused by fluorescence blinking/intermittency) recorded in a sequence of images (movie). We demonstrate a 5-fold improvement in spatial resolution by using a conventional wide-field microscope. This resolution enhancement is achieved in iterative discrete steps, which in turn allows the evaluation of images at different resolution levels. Even at the lowest level of resolution enhancement, our method features significant background reduction and thus contrast enhancement and is demonstrated on quantum dot-labeled microtubules of fibroblast cells.
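
    The cumulant computation at the heart of SOFI can be sketched at second order, where the cumulant is simply the per-pixel temporal variance of the movie, which cancels any non-fluctuating background. The emitter brightness, blinking probability, and image size below are hypothetical simulation parameters, not values from the paper.

```python
import numpy as np

def sofi2(movie):
    """Second-order SOFI image: the per-pixel variance (second cumulant)
    of temporal fluctuations. Constant background contributes zero."""
    mean = movie.mean(axis=0)
    return ((movie - mean) ** 2).mean(axis=0)

# Simulate one blinking emitter on a constant, non-blinking background.
rng = np.random.default_rng(1)
T, N = 500, 32
movie = np.full((T, N, N), 10.0)        # steady background
on = rng.random(T) < 0.5                # stochastic on/off switching
movie[:, 16, 16] += 20.0 * on           # blinking point emitter
img = sofi2(movie)
print(img[0, 0] == 0.0, img[16, 16] > 50.0)  # → True True
```

    Higher-order SOFI replaces the variance with higher-order cumulants, which is what yields the iterative resolution-enhancement steps mentioned in the abstract.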

  18. Optically clearing tissue as an initial step for 3D imaging of core biopsies to diagnose pancreatic cancer

    Science.gov (United States)

    Das, Ronnie; Agrawal, Aishwarya; Upton, Melissa P.; Seibel, Eric J.

    2014-02-01

    The pancreas is a deeply seated organ requiring endoscopically or radiologically guided biopsies for tissue diagnosis. Current approaches include either fine needle aspiration biopsy (FNA) for cytologic evaluation, or core needle biopsies (CBs), which comprise tissue cores (L = 1-2 cm, D = 0.4-2.0 mm) for examination by brightfield microscopy. Between procurement and visualization, biospecimens must be processed, sectioned and mounted on glass slides for 2D visualization. Optical information about the native tissue state can be lost with each procedural step, and a pathologist cannot appreciate 3D organization from 2D observations of tissue sections 1-8 μm in thickness. Therefore, how might histological disease assessment improve if entire, intact CBs could be imaged in both brightfield and 3D? CBs are mechanically delicate; therefore, a simple device was made to cut intact, simulated CBs (L = 1-2 cm, D = 0.2-0.8 mm) from porcine pancreas. After CBs were laid flat in a chamber, z-stack images at 20x and 40x were acquired through the sample with and without the application of an optical clearing agent (FocusClear®). The intensity of transmitted light increased by 5-15x, and islet structures unique to the pancreas were clearly visualized 250-300 μm beneath the tissue surface. CBs were then placed in index-matching square capillary tubes filled with FocusClear®, a standard optical clearing agent. Brightfield z-stack images were then acquired to present 3D visualization of the CB to the pathologist.

  19. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane, using the transverse oscillation method combined with a 1024-channel 2-D matrix array, is presented. The proposed method is validated both through phantom … whether this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges …

  20. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D … studies and in vivo. Phantom measurements are compared with their corresponding reference value, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes that a high precision … whether this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges …

  1. Towards a Noninvasive Intracranial Tumor Irradiation Using 3D Optical Imaging and Multimodal Data Registration

    Science.gov (United States)

    Posada, R.; Daul, Ch.; Wolf, D.; Aletti, P.

    2007-01-01

    Conformal radiotherapy (CRT) results in high-precision tumor volume irradiation. In fractioned radiotherapy (FRT), lesions are irradiated in several sessions so that healthy neighbouring tissues are better preserved than when treatment is carried out in one fraction. In the case of intracranial tumors, classical methods of patient positioning in the irradiation machine coordinate system are invasive and only allow for CRT in one irradiation session. This contribution presents a noninvasive positioning method representing a first step towards the combination of CRT and FRT. The 3D data used for the positioning are point clouds spread over the patient's head (CT data usually acquired during treatment) and points distributed over the patient's face, which are acquired with a structured light sensor fixed in the therapy room. The geometrical transformation linking the coordinate systems of the diagnosis device (CT modality) and the 3D sensor of the therapy room (visible light modality) is obtained by registering the surfaces represented by the two 3D point sets. The geometrical relationship between the coordinate systems of the 3D sensor and the irradiation machine is given by a calibration of the sensor position in the therapy room. The global transformation, computed from the two previous transformations, is sufficient to predict the tumor position in the irradiation machine coordinate system from the corresponding position in the CT coordinate system alone. Results obtained for a phantom show that the mean positioning error of tumors at the treatment machine isocentre is 0.4 mm. Tests performed with human data proved that the registration algorithm is accurate (0.1 mm mean distance between homologous points) and robust, even under facial expression changes. PMID:18364992
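
    The transformation chain described (CT frame → structured-light sensor frame → irradiation machine frame) can be sketched with homogeneous 4 × 4 matrices. The rotations and translations below are invented for illustration; in the paper the first transform comes from surface registration and the second from sensor calibration.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical transforms: CT -> sensor (registration), sensor -> machine
# (calibration). Units are arbitrary (e.g. mm).
T_reg = to_homogeneous(rot_z(np.pi / 6), [5.0, -2.0, 10.0])
T_cal = to_homogeneous(rot_z(-np.pi / 6), [0.0, 0.0, 100.0])

# The global transform predicts the tumour position in machine
# coordinates from its CT position alone.
T_global = T_cal @ T_reg
tumour_ct = np.array([10.0, 20.0, 30.0, 1.0])
tumour_machine = T_global @ tumour_ct
print(tumour_machine[:3].round(3))
```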

  2. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results, nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse Oscillation (TO) method. … on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5 …

  3. Digital Holography and 3D Imaging: introduction to the joint feature issue in Applied Optics and Journal of the Optical Society of America B.

    Science.gov (United States)

    Banerjee, Partha P; Osten, Wolfgang; Picart, Pascal; Cao, Liangcai; Nehmetallah, George

    2017-05-01

    The OSA Topical Meeting on Digital Holography and 3D Imaging (DH) was held 25-28 July 2016 in Heidelberg, Germany, as part of the Imaging Congress. Feature issues based on the DH meeting series have been released by Applied Optics (AO) since 2007. This year, AO and the Journal of the Optical Society of America B (JOSA B) jointly decided to have one such feature issue in each journal. This feature issue includes 31 papers in AO and 11 in JOSA B, and covers a large range of topics, reflecting the rapidly expanding techniques and applications of digital holography and 3D imaging. The upcoming DH meeting (DH 2017) will be held from 29 May to 1 June in Jeju Island, South Korea.

  4. Intraoperative handheld probe for 3D imaging of pediatric benign vocal fold lesions using optical coherence tomography (Conference Presentation)

    Science.gov (United States)

    Benboujja, Fouzi; Garcia, Jordan; Beaudette, Kathy; Strupler, Mathias; Hartnick, Christopher J.; Boudoux, Caroline

    2016-02-01

    Excessive and repetitive force applied to vocal fold tissue can induce benign vocal fold lesions. Affected children suffer from chronic hoarseness. In this instance, the vibratory ability of the folds, which have a complex layered microanatomy, becomes impaired. Histological findings have shown that lesions produce a remodeling of sub-epithelial vocal fold layers. However, our understanding of lesion features and development is still limited. Indeed, conventional imaging techniques do not allow a non-invasive assessment of the sub-epithelial integrity of the vocal fold. Furthermore, it remains challenging to differentiate these sub-epithelial lesions (such as bilateral nodules, polyps and cysts) from a clinical perspective, as their outer surfaces are relatively similar. As the treatment strategy differs for each lesion type, it is critical to efficiently differentiate the sub-epithelial alterations involved in benign lesions. In this study, we developed an optical coherence tomography (OCT) based handheld probe suitable for pediatric laryngological imaging. The probe allows for rapid three-dimensional imaging of vocal fold lesions, and the system is adapted for high-resolution intra-operative imaging. We imaged 20 patients undergoing direct laryngoscopy, during which we examined different benign pediatric pathologies such as bilateral nodules, cysts and laryngeal papillomatosis and compared them to healthy tissue. We qualitatively and quantitatively characterized laryngeal pathologies and demonstrated the added advantage of using 3D OCT imaging for lesion discrimination and margin assessment. OCT evaluation of the integrity of the vocal cord could lead to better pediatric management of laryngeal diseases.

  5. High-resolution 3-D imaging of surface damage sites in fused silica with Optical Coherence Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guss, G; Bass, I; Hackel, R; Mailhiot, C; Demos, S G

    2007-10-30

    In this work, we present the first successful demonstration of a non-contact technique to precisely measure the 3D spatial characteristics of laser-induced surface damage sites in fused silica for large-aperture laser systems by employing Optical Coherence Tomography (OCT). What makes OCT particularly interesting for the characterization of optical materials for large-aperture laser systems is that its axial resolution can be maintained at working distances greater than 5 cm, whether viewing through air or through the bulk of thick optics. Specifically, when mitigating surface damage sites against further growth by CO2 laser evaporation of the damage, it is important to know the depth of subsurface cracks below the damage site. These cracks are typically obscured by the damage rubble when imaged from above the surface. The results to date clearly demonstrate that OCT is a unique and valuable tool for characterizing damage sites before and after the mitigation process. We also demonstrated its utility as an in-situ diagnostic to guide and optimize the process of mitigating surface damage sites on large, high-value optics.

  6. Miniaturized 3D microscope imaging system

    Science.gov (United States)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the biological specimen's image can be captured in a single shot. With the light field raw data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely distinguish depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel-use efficiency and reduce the crosstalk between microlenses, in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different colored fluorescent particles separated by a cover glass over a 600 µm range, show its focal stacks, and give their 3-D positions.
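
    Digitally changing the focal plane from light-field data can be sketched with shift-and-add refocusing: each sub-aperture view (one viewpoint per microlens position) is shifted in proportion to its angular offset and a focal parameter, then averaged. The geometry below, a point source with 1 px of disparity per unit angular offset, is entirely synthetic and not the paper's optical design.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view by
    alpha times its angular offset, then average."""
    acc = np.zeros_like(views[0], dtype=float)
    for v, (du, dv) in zip(views, offsets):
        acc += np.roll(v, (round(alpha * du), round(alpha * dv)), axis=(0, 1))
    return acc / len(views)

# Synthesize 3x3 views of a point source whose image shifts by 1 px
# per unit of angular offset between viewpoints.
offsets = [(du, dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)]
views = []
for du, dv in offsets:
    im = np.zeros((21, 21))
    im[10 + du, 10 + dv] = 1.0
    views.append(im)

in_focus = refocus(views, offsets, alpha=-1.0)   # shifts cancel disparity
out_focus = refocus(views, offsets, alpha=0.0)   # wrong focal plane
print(in_focus[10, 10], out_focus[10, 10])
```

    Sweeping `alpha` produces the focal stack: at the matching value all views align and the point is sharp (intensity 1.0 at its location), while at other values its energy is spread across the nine shifted copies.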

  7. Short term reproducibility of a high contrast 3-D isotropic optic nerve imaging sequence in healthy controls

    Science.gov (United States)

    Harrigan, Robert L.; Smith, Alex K.; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2016-03-01

    The optic nerve (ON) plays a crucial role in human vision, transporting all visual information from the retina to the brain for higher-order processing. Many diseases affect the ON structure, such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher-level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) sheath surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high-contrast, high-resolution, 3-D acquired isotropic imaging sequence optimized for ON imaging. We have acquired scan-rescan data using the optimized sequence and a current standard-of-care protocol for 10 subjects. We show that this sequence has a superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have short-term variability lower than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.
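
    The contrast-to-noise ratio used for the comparison above is commonly computed as the difference of ROI means over the pooled noise; a minimal sketch under that common definition (the paper's exact ROI protocol is not given here, so the function and regions are illustrative):

```python
import numpy as np

def contrast_to_noise(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest.

    CNR = |mean_a - mean_b| / sqrt(var_a + var_b); here ROI a might
    be sampled from the optic nerve and ROI b from the CSF sheath.
    """
    roi_a = np.asarray(roi_a, dtype=float)
    roi_b = np.asarray(roi_b, dtype=float)
    return (abs(roi_a.mean() - roi_b.mean())
            / np.sqrt(roi_a.var() + roi_b.var()))
```

    A higher CNR means the two tissues are easier to separate at a given resolution, which is the sense in which the proposed sequence is compared to the standard of care.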

  8. Comparison of 3D double inversion recovery and 2D STIR FLAIR MR sequences for the imaging of optic neuritis: pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Hodel, Jerome; Bocher, Anne-Laure; Pruvo, Jean-Pierre; Leclerc, Xavier [Hopital Roger Salengro, Department of Neuroradiology, Lille (France); Outteryck, Olivier; Zephir, Helene; Vermersch, Patrick [Hopital Roger Salengro, Department of Neurology, Lille (France); Lambert, Oriane [Fondation Ophtalmologique Rothschild, Department of Neuroradiology, Paris (France); Benadjaoud, Mohamed Amine [Radiation Epidemiology Team, Inserm, CESP Centre for Research in Epidemiology and Population Health, U1018, Villejuif (France); Chechin, David [Philips Medical Systems, Suresnes (France)

    2014-12-15

    We compared the three-dimensional (3D) double inversion recovery (DIR) magnetic resonance imaging (MRI) sequence with coronal two-dimensional (2D) short tau inversion recovery (STIR) fluid-attenuated inversion recovery (FLAIR) for the detection of optic nerve signal abnormality in patients with optic neuritis (ON). The study group consisted of 31 patients with ON (44 pathological nerves) confirmed by visual evoked potentials, used as the reference. MRI examinations included 2D coronal STIR FLAIR and 3D DIR with 3-mm coronal reformats to match the STIR FLAIR. Image artefacts were graded for each portion of the optic nerves. Each set of MR images (2D STIR FLAIR, DIR reformats and multiplanar 3D DIR) was examined independently and separately for the detection of signal abnormality. The cisternal portion of the optic nerves was better delineated with DIR (p < 0.001), while artefacts impaired analysis in four patients with STIR FLAIR. Inter-observer agreement was significantly improved (p < 0.001) on 3D DIR (κ = 0.96) compared with STIR FLAIR images (κ = 0.60). Multiplanar DIR images achieved the best performance for the diagnosis of ON (95% sensitivity and 94% specificity). Our study showed a high sensitivity and specificity of 3D DIR compared with STIR FLAIR for the detection of ON. These findings suggest that the 3D DIR sequence may be more useful in patients suspected of ON. (orig.)
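
    For reference, the inter-observer agreement statistic κ reported above is Cohen's kappa, which corrects raw agreement for chance; a small self-contained sketch (the rating lists are illustrative, not the study's data):

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from the
    raters' marginal label frequencies.
    """
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_e = sum((ratings_a.count(label) / n) * (ratings_b.count(label) / n)
              for label in labels)
    return (p_o - p_e) / (1 - p_e)
```

    Perfect agreement gives κ = 1, while agreement no better than chance gives κ = 0, which is why κ = 0.96 versus 0.60 is a meaningful improvement.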

  9. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    Directory of Open Access Journals (Sweden)

    Ján VACHÁLEK

    2015-11-01

    Full Text Available The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis was placed on the segmentation of the depth image and noise filtration. MS Kinect was used to evaluate the potential of object localization taking advantage of the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object localization within its field of vision. In our case, balls of fixed diameter were used as the objects for 3D localization.

  10. Localization of Objects Using the Ms Windows Kinect 3D Optical Device with Utilization of the Depth Image Technology

    OpenAIRE

    Ján VACHÁLEK; Marian GÉCI; Oliver ROVNÝ; Tomáš VOLENSKÝ

    2015-01-01

    The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis was placed on the segmentation of the depth image and noise filtration. MS Kinect was used to evaluate the potential of object localization taking advantage of the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on object localization within its field of vision. In our ca...

  11. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes to develop a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and...

  12. Three-Dimensional Optical Coherence Tomography (3D OCT) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Applied Science Innovations, Inc. proposes a new tool of 3D optical coherence tomography (OCT) for cellular level imaging at video frame rates and dramatically...

  13. High resolution 3-D wavelength diversity imaging

    Science.gov (United States)

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost-effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data are fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. The equipment configuration for implementing the AMTDR technique is also given, together with a measurement configuration for implementing wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. An extension of the theory to 3-D tomographic imaging of passive noise-emitting objects by spectrally selective far-field cross-correlation measurements is also given. Finally, several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvements in performance and retrieved image quality.

  14. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
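
    As an illustrative aside (not code from the paper), the "refocusing" operation described above is often implemented as shift-and-add over sub-aperture views; a minimal numpy sketch, assuming a hypothetical (u, v, y, x) light field layout:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing of a 4D light field by shift-and-add.

    light_field: 4D array (u, v, y, x) of sub-aperture images.
    alpha: refocus parameter; 1.0 keeps the original focal plane,
    other values shift each view before averaging, moving the plane
    of focus (integer shifts for simplicity).
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its angular offset
            # from the central sub-aperture, then accumulate.
            dy = int(round((u - U // 2) * (1.0 - 1.0 / alpha)))
            dx = int(round((v - V // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

    Sweeping `alpha` over a range of values produces the focal stack that "sectioning" refers to; each output is one 2D slice of the encoded 3D data.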

  15. Multi-resolution optical 3D sensor

    Science.gov (United States)

    Kühmstedt, Peter; Heinze, Matthias; Schmidt, Ingo; Breitbarth, Martin; Notni, Gunther

    2007-06-01

    A new multi-resolution, self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri FLEX multi", is presented. It can be utilised to acquire the all-around shape of small to medium objects. The basic measurement principle is the phasogrammetric approach /1,2,3/ in combination with the method of virtual landmarks for the merging of the 3D single views. The system consists of a minimum of two fringe projection sensors. The sensors are mounted on a rotation stage, illuminating the object from different directions. The measurement fields of the sensors can be chosen differently; here, as an example, 40 mm and 180 mm in diameter. In the measurement, the object can be scanned with these two resolutions simultaneously. Using the method of virtual landmarks, both point clouds are calculated within the same world coordinate system, resulting in a common 3D point cloud. The final point cloud includes an overview of the object with low point density (wide field) and a region with high point density (focussed view) at the same time. The advantage of the new method is the possibility to measure with different resolutions at the same object region without any mechanical changes in the system or data post-processing. Typical parameters of the system are: a measurement time of 2 min for 12 images and a measurement accuracy of 3 μm up to 10 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  16. Augmented reality 3D display based on integral imaging

    Science.gov (United States)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize the AR 3D display. The second AR system uses a micro-lens-array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light, and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens-array HOE, which provides a double-sided 3D display feature.

  17. Research of 3D display using anamorphic optics

    Science.gov (United States)

    Matsumoto, Kenji; Honda, Toshio

    1997-05-01

    This paper describes an auto-stereoscopic display that can reconstruct a more realistic and viewer-friendly 3-D image by increasing the number of parallaxes and giving horizontal motion parallax. It is difficult to increase the number of parallaxes to give motion parallax to the 3-D image without reducing the resolution, because the resolution of the display device is insufficient. By projecting between the display device and the 3-D image with anamorphic optics, the magnification and the image-formation position can be selected independently in the horizontal and vertical directions. Anamorphic optics is an optical system with different magnifications in the horizontal and vertical directions; it consists of a combination of cylindrical lenses with different focal lengths. By using these optics, even with a dynamic display such as a liquid crystal display (LCD), it is possible to display a realistic 3-D image having motion parallax. Motion parallax is obtained by making the width of a single parallax at the viewing position about the same size as the pupil diameter of the viewer. In addition, because the depth of focus of the 3-D image is deep in this method, the conflict between accommodation and convergence is small, and a natural 3-D image can be displayed.

  18. FIT3D: Fitting optical spectra

    Science.gov (United States)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-09-01

    FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.

  19. SU-E-T-296: Dosimetric Analysis of Small Animal Image-Guided Irradiator Using High Resolution Optical CT Imaging of 3D Dosimeters

    Energy Technology Data Exchange (ETDEWEB)

    Na, Y; Qian, X; Wuu, C [Columbia University, New York, NY (United States); Adamovics, J [John Adamovics, Skillman, NJ (United States)

    2015-06-15

    Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine the dosimetric characteristics of a small animal image-guided irradiator and were compared with EBT2 films. Cylindrical PRESAGE dosimeters 7 cm in height and 6 cm in diameter were placed along the central axis of the beam. The films were positioned between 6×6 cm{sup 2} plastic water phantom cubes perpendicular to the beam direction at multiple depths. PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220 kVp and 13 mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single-laser-beam optical CT scanner. The transverse images were reconstructed with a high-resolution 0.1 mm pixel. A commercial Epson Expression 10000XL flatbed scanner was used for readout of the irradiated EBT2 films at 0.4 mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreements between the irradiated PRESAGE dosimeters PA1, PA2, PB1, PB2 and the EBT2 films were 1.7, 2.3, 1.9, and 1.9% for the multiple depths of 1, 5, 10, 15, 20, 30, 40 and 50 mm, respectively. The FWHM measurements for each PRESAGE dosimeter and film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30 mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%–80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were found to be 0.97, 0.91, 0.79, 0.88, and 0.37 mm, respectively. Conclusion: The dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with measurements of PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting small animal irradiation can be

  20. 3D micro-particle image modeling and its application in measurement resolution investigation for visual sensing based axial localization in an optical microscope

    Science.gov (United States)

    Wang, Yuliang; Li, Xiaolai; Bi, Shusheng; Zhu, Xiaofeng; Liu, Jinhua

    2017-01-01

    Visual-sensing-based three-dimensional (3D) particle localization in an optical microscope is important for both fundamental studies and practical applications. Compared with lateral (X and Y) localization, it is more challenging to achieve a high-resolution measurement of the axial particle location. In this study, we aim to investigate the effect of different factors on axial measurement resolution through an analytical approach. Analytical models were developed to simulate 3D particle imaging in an optical microscope. A radius vector projection method was applied to convert the simulated particle images into radius vectors. With the obtained radius vectors, a term called the axial changing rate was proposed to evaluate the measurement resolution of axial particle localization. Experiments were also conducted for comparison with the simulation results. Moreover, with the proposed method, the effects of particle size on measurement resolution were discussed. The results show that the method provides an efficient approach to investigate the resolution of axial particle localization.
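
    The radius vector projection mentioned above is, in essence, a radial intensity profile around the particle centre, whose shape changes as the particle defocuses; a minimal sketch under that assumption (the function name and binning scheme are hypothetical, not taken from the paper):

```python
import numpy as np

def radius_vector(image, center, n_bins=32):
    """Radial intensity profile ("radius vector") of a particle image.

    Averages pixel intensities in concentric rings around `center`;
    the resulting 1D vector changes shape as the particle moves
    through focus, which is what makes axial localization possible.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])
    # Assign every pixel to one of n_bins equal-width radial rings.
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    profile = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            profile[b] = image[mask].mean()
    return profile
```

    Comparing how fast this vector changes between adjacent axial positions gives a sketch of the "axial changing rate" idea the abstract describes.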

  1. 3D nanopillar optical antenna photodetectors.

    Science.gov (United States)

    Senanayake, Pradeep; Hung, Chung-Hong; Shapiro, Joshua; Scofield, Adam; Lin, Andrew; Williams, Benjamin S; Huffaker, Diana L

    2012-11-05

    We demonstrate 3D surface plasmon photoresponse in nanopillar arrays resulting in enhanced responsivity due to both Localized Surface Plasmon Resonances (LSPRs) and Surface Plasmon Polariton Bloch Waves (SPP-BWs). The LSPRs are excited by a partial gold shell coating the nanopillar, which acts as a 3D Nanopillar Optical Antenna (NOA) focusing light into the nanopillar. Angular photoresponse measurements show that SPP-BWs can be spectrally coincident with LSPRs, resulting in a 2× enhancement in responsivity at 1180 nm. Full-wave Finite Difference Time Domain (FDTD) simulations substantiate both the spatial and spectral coupling of the SPP-BW / LSPR for enhanced absorption and the nature of the LSPR. Geometrical control of the 3D NOA and the self-aligned metal hole lattice allows the hybridization of both localized and propagating surface plasmon modes for enhanced absorption. Hybridized plasmonic modes open up new avenues in optical antenna design for nanoscale photodetectors.

  2. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  3. Optical coherence tomography for ultrahigh-resolution 3D imaging of cell development and real-time guiding for photodynamic therapy

    Science.gov (United States)

    Wang, Tianshi; Zhen, Jinggao; Wang, Bo; Xue, Ping

    2009-11-01

    Optical coherence tomography is an emerging technique for cross-sectional imaging with high spatial resolution on the micrometer scale. It enables in vivo and non-invasive imaging with no need to contact the sample and is widely used in biological and clinical applications. In this paper, optical coherence tomography is demonstrated for both biological and clinical applications. For the biological application, a white-light interference microscope is developed for ultrahigh-resolution full-field optical coherence tomography (full-field OCT) to implement 3D imaging of biological tissue. A spatial resolution of 0.9 μm × 1.1 μm (transverse × axial) is achieved. A system sensitivity of 85 dB is obtained at an acquisition time of 5 s per image. The development of a mouse embryo is studied layer by layer with our ultrahigh-resolution full-field OCT. For the clinical application, a handheld optical coherence tomography system is designed for real-time and in situ imaging of port wine stain (PWS) patients, providing surgical guidance for photodynamic therapy (PDT) treatment. A light source with a center wavelength of 1310 nm, a -3 dB wavelength range of 90 nm and an optical power of 9 mW is utilized. A lateral resolution of 8 μm and an axial resolution of 7 μm at a rate of 2 frames per second and with 102 dB sensitivity are achieved in biological tissue. It is shown that OCT images distinguish very well between normal and PWS tissue in the clinic and can serve as a valuable diagnostic tool for PDT treatment.

  4. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.

  5. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment on the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant program, which is a part of LabVIEW, was chosen.
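
    As a hedged illustration of the marching cubes idea mentioned above (not the paper's LabVIEW implementation), the sketch below shows only the algorithm's first stage: classifying each 2×2×2 voxel cell by an 8-bit corner code. The full triangle lookup table that turns codes into mesh facets is omitted for brevity:

```python
import numpy as np

def surface_cells(volume, iso):
    """First stage of marching cubes: classify each 2x2x2 cell.

    Returns an (nz-1, ny-1, nx-1) array of 8-bit corner codes (0-255).
    Codes 0 and 255 mean the cell is entirely outside/inside the
    isosurface; any other code means the surface crosses the cell,
    and in the full algorithm that code indexes a triangle table.
    """
    inside = (volume >= iso).astype(np.uint8)
    code = np.zeros(tuple(s - 1 for s in volume.shape), dtype=np.uint8)
    bit = 0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # Set one bit per corner that lies inside the surface.
                code |= inside[dz:dz + code.shape[0],
                               dy:dy + code.shape[1],
                               dx:dx + code.shape[2]] << bit
                bit += 1
    return code
```

    Running this over an NMR volume at a chosen iso-level yields exactly the set of cells where surface triangles must be generated.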

  6. 3D imaging in forensic odontology.

    Science.gov (United States)

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring, and subsequently forensically analysing, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  7. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    Science.gov (United States)

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed.

  8. Research on the aero-thermal effects by 3D analysis model of the optical window of the infrared imaging guidance

    Science.gov (United States)

    Xu, Bo; Li, Lin; Zhu, Ying

    2014-11-01

    Research on hypersonic vehicles has been a hotspot in the aerospace field because of the pursuit of ever higher speeds. Infrared imaging guidance plays a very important role in modern warfare. When an infrared (IR) imaging guided missile is flying through the air at high speed, its optical dome suffers from serious aero-optic effects caused by the air flow. The turbulence around the dome and the thermal effects on the optical window disturb the wavefront from the target. As a result, detected images are biased, dithered and blurred, and the seeker's capabilities for detection, tracking and recognition are weakened. In this paper, methods for thermal and structural analysis based on heat transfer and elastic mechanics are introduced. By studying the aero-thermal effects and aero-thermal radiation effects of the optical window, a 3D analysis model of the optical window is established using the finite element method. Direct coupling analysis is employed as the solving strategy. The variation of the temperature field is obtained. For light with different incident angles, the influence of window deformation on ray propagation is analyzed with theoretical calculation and an integrated optical/thermal/structural analysis method, respectively.

  9. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  10. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
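
    Of the two methods named above, depth-from-focus is the easier to sketch: per pixel, pick the focal-stack slice that maximizes a sharpness measure. A minimal illustration (not the authors' implementation) using a squared-Laplacian focus measure:

```python
import numpy as np

def depth_from_focus(stack):
    """Depth map from a focal stack of shape (depth, height, width).

    Uses the squared discrete Laplacian as a per-pixel focus measure
    and picks, for every pixel, the slice where that measure peaks --
    the classic depth-from-focus recipe.
    """
    measures = []
    for img in stack:
        # 4-neighbour discrete Laplacian as a sharpness indicator
        # (np.roll wraps at the borders; fine for a sketch).
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        measures.append(lap ** 2)
    return np.argmax(np.stack(measures), axis=0)
```

    In practice the focus measure is smoothed over a neighbourhood and the peak is interpolated between slices, but the per-pixel argmax above is the core of the method.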

  11. Manufacturing: 3D printed micro-optics

    Science.gov (United States)

    Juodkazis, Saulius

    2016-08-01

    Uncompromised performance of micro-optical compound lenses has been achieved by high-fidelity shape definition during two-photon absorption microfabrication. The lenses have been made directly onto image sensors and even onto the tip of an optic fibre.

  12. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to achieve high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, from which the scene depth can be estimated from different directions. This multidirectional depth estimation enables effective high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces.
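
    The abstract's "phase-encoded depth" starts from a wrapped phase recovered from the projected fringes. The paper's ray-based calibration is not reproduced here; the sketch below only shows standard four-step phase-shifting demodulation (an assumed fringe model, I_k = A + B·cos(φ + kπ/2)), which is one common way to obtain that wrapped phase:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four pi/2-shifted fringe images I_k = A + B*cos(phi + k*pi/2)."""
    # i4 - i2 = 2B*sin(phi), i1 - i3 = 2B*cos(phi)
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: recover a known phase map within (-pi, pi)
phi = np.linspace(-3, 3, 7)                  # true phase values
a, b = 0.5, 0.4                              # background and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.allclose(four_step_phase(*frames), phi))  # True
```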

  13. Automatic respiration tracking for radiotherapy using optical 3D camera

    Science.gov (United States)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. At present, no viable solution exists for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking with O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on linear dimensionality reduction via principal component analysis (PCA). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors).
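
    The PCA decomposition described above can be sketched with synthetic data (a stand-in for a real O3D sequence, not the authors' pipeline): frames of surface points driven by one dominant breathing-like mode are mean-centred and factored by SVD, and the leading eigenvector captures the respiration pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an O3D sequence: T frames of N surface points,
# driven by one dominant (breathing-like) motion mode plus sensor noise.
T, N = 200, 500
t = np.linspace(0, 10, T)
mode = rng.standard_normal(N)                       # spatial motion pattern
frames = np.outer(np.sin(2 * np.pi * 0.3 * t), mode)
frames += 0.01 * rng.standard_normal((T, N))        # sensor noise

# PCA via SVD of the mean-centred data matrix
x = frames - frames.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first component captures nearly all motion variance;
# u[:, 0] * s[0] recovers the 1-D respiration trace (up to sign).
print(explained[0] > 0.95)  # True
```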

  14. Spatially-resolved in-situ quantification of biofouling using optical coherence tomography (OCT) and 3D image analysis in a spacer filled channel

    KAUST Repository

    Fortunato, Luca

    2016-11-21

    The use of optical coherence tomography (OCT) to investigate biomass in membrane systems has increased with time. OCT is able to characterize the biomass in situ and non-destructively. In this study, a novel approach to processing three-dimensional (3D) OCT scans is proposed. The approach allows obtaining spatially resolved, detailed structural biomass information. The 3D biomass reconstruction enables analysis of the biomass only, obtained by subtracting the time-zero scan from all subsequent images. A 3D time-series analysis of biomass development in a spacer-filled channel under conditions representative of a spiral wound membrane element (cross-flow velocity) was performed. The flow cell was operated for five days with monitoring of ultrafiltration membrane performance: feed channel pressure drop and permeate flux. The biomass development in the flow cell was detected by OCT before a performance decline was observed. Feed channel pressure drop continuously increased with increasing biomass volume, while flux decline was mainly affected in the initial phase of biomass accumulation. The novel OCT imaging approach enabled the assessment of spatial biomass distribution in the flow cell, discriminating the total biomass volume between the membrane, the feed spacer and the glass window. Biomass accumulation was stronger on the feed spacer during the early stage of biofouling, impacting the feed channel pressure drop more strongly than the permeate flux.

  15. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity between the images. The general purpose of the system is mapping o......, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and finally, different presentation forms are discussed....
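
    The matching step that produces those disparities can be illustrated with a toy 1-D block matcher (one of many possible methods, not necessarily the one used in this project): slide a patch from the left image along a row of the right image and pick the shift maximising the normalised cross-correlation:

```python
import numpy as np

def best_disparity(left_patch, right_row, max_d):
    """Horizontal shift (in samples) maximising normalised cross-correlation."""
    w = len(left_patch)
    lp = (left_patch - left_patch.mean()) / (left_patch.std() + 1e-12)
    scores = []
    for d in range(max_d + 1):
        rp = right_row[d:d + w]
        rp = (rp - rp.mean()) / (rp.std() + 1e-12)
        scores.append(np.dot(lp, rp) / w)   # NCC in [-1, 1]
    return int(np.argmax(scores))

# A patch shifted by 7 samples between the two "images"
row = np.sin(np.linspace(0, 20, 200))
print(best_disparity(row[50:70], row[43:], 15))  # 7
```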

  16. Heat Equation to 3D Image Segmentation

    Directory of Open Access Journals (Sweden)

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and reconstruction of objects' surfaces. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions each containing a single 3D object. Each region is inscribed in a convex, smooth, closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a forward-difference algorithm was developed and implemented in Mathematica. The stability (convergence) condition, truncation error and computational complexity of the algorithm are determined. The results obtained, along with the advantages and disadvantages of the method, are discussed at the end of this paper.
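
    The paper evolves surfaces under a geometric (curvature-driven) heat flow; as a much simpler stand-in, the explicit forward-difference scheme it relies on can be shown for the linear heat equation u_t = Δu on a 2-D grid, including its stability bound (dt ≤ 0.25 for unit grid spacing):

```python
import numpy as np

def heat_step(u, dt=0.2):
    """One explicit forward-difference step of u_t = laplacian(u), periodic boundaries.
    Stable for dt <= 0.25 with unit grid spacing."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u + dt * lap

# Diffusing a noisy image reduces its variance while preserving its mean
rng = np.random.default_rng(1)
u = rng.standard_normal((64, 64))
v = u.copy()
for _ in range(50):
    v = heat_step(v)
print(np.isclose(u.mean(), v.mean()), v.var() < u.var())  # True True
```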

  17. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    Science.gov (United States)

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λh1h2h3 = λ111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ111, the ν(λ) = λh1h2h3/λ111 factor of phase velocity becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between 3D grating and light, space and time. In reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patch-wide multiple correlations in the spatial frequency spectrum, etc.

  18. Feasibility of 3D harmonic contrast imaging

    NARCIS (Netherlands)

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; ten Cate, F.; de Jong, N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it

  19. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc, offering a complete view of the transport paths in the membrane.

  20. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability, in resource-rich regions, of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can yield acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.

  1. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    OpenAIRE

    Seniutinas Gediminas; Balčytis Armandas; Reklaitis Ignas; Chen Feng; Davis Jeffrey; David Christian; Juodkazis Saulius

    2017-01-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of ...

  2. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D...... of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce such transducer arrays, capacitive micromachined ultrasonic transducer (CMUT) technology is chosen for this project. Properties such as high bandwidth and high design flexibility make this an attractive transducer technology, which is under continuous development in the research community. A theoretical......

  3. Imaging mesenchymal stem cells containing single wall nanotube nanoprobes in a 3D scaffold using photo-thermal optical coherence tomography

    Science.gov (United States)

    Connolly, Emma; Subhash, Hrebesh M.; Leahy, Martin; Rooney, Niall; Barry, Frank; Murphy, Mary; Barron, Valerie

    2014-02-01

    Despite the fact that a range of clinically viable imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), ultrasound and bioluminescence imaging, are being optimised to track cells in vivo, many of these techniques are subject to limitations such as the levels of contrast agent required, toxic effects of radiotracers, photo-attenuation in tissue and backscatter. With the advent of nanotechnology, nanoprobes are leading the charge to overcome these limitations. In particular, single wall nanotubes (SWNT) have been shown to be taken up by cells and as such are effective nanoprobes for cell imaging. Consequently, the main aim of this research is to employ mesenchymal stem cells (MSC) containing SWNT nanoprobes to image cell distribution in a 3D scaffold for cartilage repair. To this end, MSC were cultured in the presence of 32 μg/ml SWNT in cell culture medium (αMEM, 10% FBS, 1% penicillin/streptomycin) for 24 hours. Upon confirmation of cell viability, the MSC containing SWNT were encapsulated in hyaluronic acid gels and loaded onto polylactic acid-polycaprolactone scaffolds. After 28 days in complete chondrogenic medium, with medium changes every 2 days, chondrogenesis was confirmed by the presence of glycosaminoglycan. Moreover, using photothermal optical coherence tomography (PT-OCT), the cells were seen to be distributed throughout the scaffold at high resolution. In summary, these data reveal that MSC containing SWNT nanoprobes, in combination with PT-OCT, offer an exciting opportunity for stem cell tracking, in vitro for assessing seeded scaffolds and in vivo for determining biodistribution.

  4. 3D Human cartilage surface characterization by optical coherence tomography

    Science.gov (United States)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical coherence tomography (OCT) is an arthroscopically available, light-based, non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations, with the exception of severe end-stage degeneration, and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D

  5. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    An airborne 3D imaging system integrating GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D imaging is that it can produce geo-referenced images and digital surface model (DSM) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with appropriate software the data can be processed into DSMs and geo-referenced images in quasi-real time; the efficiency of 3D imaging is therefore 10-100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing DSMs and mosaicking strips. The principle of 3D imaging is first introduced in this paper, and then we focus on the fast processing technique and algorithm. Flight tests and processed results show that the processing technique is feasible and can meet the requirements of quasi-real-time applications.

  6. 3D Beam Reconstruction by Fluorescence Imaging

    CERN Document Server

    Radwell, Neal; Franke-Arnold, Sonja

    2013-01-01

    We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 x 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all three dimensions, is fast, and can be completely automated. The technique has applications in areas that require complex beam shapes, such as optical tweezers, atom trapping and pattern formation.

  7. Volumetric (3D) compressive sensing spectral domain optical coherence tomography.

    Science.gov (United States)

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-11-01

    In this work, we proposed a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension by dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality.
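
    The paper's three-step, dimension-by-dimension strategy is not reproduced here, but the core CS idea — recovering a sparse signal from far fewer measurements than Nyquist requires — can be sketched in a toy 1-D analogue using iterative soft-thresholding (ISTA) for the lasso problem (the 40% sampling rate and all sizes below are illustrative choices, not the paper's):

```python
import numpy as np

def ista(A, y, lam=0.01, steps=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(2)
n, m, k = 200, 80, 5                         # 40% sampling of a 5-sparse signal
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err < 0.15)  # True
```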

  8. Model-based optical metrology and visualization of 3-D complex objects

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-li; LI A-meng; ZHAO Xiao-bo; GAO Peng-dong; TIAN Jin-dong; PENG Xiang

    2007-01-01

    This letter addresses several key issues in the process of model-based optical metrology, including three-dimensional (3D) sensing, calibration, registration and fusion of range images, geometric representation, and visualization of the reconstructed 3D model, in the context of shape measurement of 3D complex structures; some experimental results are presented.

  9. The Atlas-3D project - IX. The merger origin of a fast and a slow rotating Early-Type Galaxy revealed with deep optical imaging: first results

    CERN Document Server

    Duc, Pierre-Alain; Serra, Paolo; Michel-Dansac, Leo; Ferriere, Etienne; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frederic; Bureau, Martin; Cappellari, Michele; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; McDermid, Richard M; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Scott, Nicholas; Weijmans, Anne-Marie; Young, Lisa M

    2011-01-01

    The mass assembly of galaxies leaves imprints in their outskirts, such as shells and tidal tails. The frequency and properties of such fine structures depend on the main acting mechanisms - secular evolution, minor or major mergers - and on the age of the last substantial accretion event. We use this to constrain the mass assembly history of two apparently relaxed nearby Early-Type Galaxies (ETGs) selected from the Atlas-3D sample, NGC 680 and NGC 5557. Our ultra deep optical images obtained with MegaCam on the Canada-France-Hawaii Telescope reach 29 mag/arcsec^2 in the g-band. They reveal very low-surface brightness (LSB) filamentary structures around these ellipticals. Among them, a gigantic 160 kpc long tail East of NGC 5557 hosts gas-rich star-forming objects. NGC 680 exhibits two major diffuse plumes apparently connected to extended HI tails, as well as a series of arcs and shells. Comparing the outer stellar and gaseous morphology of the two ellipticals with that predicted from models of colliding galax...

  10. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  11. Imaging a Sustainable Future in 3D

    Science.gov (United States)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, we deal with samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and colour, recall the stage of the invention of photography.

  12. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (the *cone* or *Compton* transform) that maps a function on $\mathbb{R}^3$ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  13. Optical characterization of different types of 3D displays

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    All 3D displays use the same intrinsic method to induce depth perception: they provide different images to the left and right eyes of the observer to obtain the stereoscopic effect. The three most common solutions available on the market are active-glasses, passive-glasses and auto-stereoscopic 3D displays. The three types of displays are based on different physical principles (polarization, time selection or spatial emission) and consequently require different measurement instruments and techniques. In this paper, we present some of these solutions and the technical characteristics that can be obtained to compare the displays. We show in particular that local and global measurements can be made in all three cases to access different characteristics. We also discuss the new technologies currently under development and their needs in terms of optical characterization.

  14. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Science.gov (United States)

    Seniutinas, Gediminas; Balčytis, Armandas; Reklaitis, Ignas; Chen, Feng; Davis, Jeffrey; David, Christian; Juodkazis, Saulius

    2017-06-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing three-dimensional (3D) nano-structuring within a 1-100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  15. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  16. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models, followed by forward modelling of these objects. However, this form of object-based approach has been carried out without a standardized methodology for extracting the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied to a 3D shear-wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  17. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting important research efforts. As a main feature, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D colour images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  18. Test target for characterizing 3D resolution of optical coherence tomography

    Science.gov (United States)

    Hu, Zhixiong; Hao, Bingtao; Liu, Wenli; Hong, Baoyu; Li, Jiao

    2014-12-01

    Optical coherence tomography (OCT) is a non-invasive 3D imaging technology that has been applied or investigated in many diagnostic fields, including ophthalmology, dermatology, dentistry, cardiology, endoscopy and brain imaging. Optical resolution is an important characteristic describing the quality and utility of an image acquisition system. We employ 3D printing technology to design and fabricate a test target for characterizing the 3D resolution of optical coherence tomography. The test target, which mimics the USAF 1951 test chart, was produced with photopolymer. By measuring the 3D test target, the axial and lateral resolution of a spectral-domain OCT system were evaluated. For comparison, a conventional microscope and a surface profiler were employed to characterize the 3D test targets. The results demonstrate that the 3D resolution test targets have the potential to qualitatively and quantitatively validate the performance of OCT systems.
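    For a spectral-domain OCT system such as the one characterized above, the axial resolution is set by the source bandwidth rather than by the focusing optics. A minimal sketch of the standard Gaussian-spectrum estimate; the 850 nm / 50 nm source values are assumed for illustration and are not taken from this record:

```python
import math

def oct_axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """Standard Gaussian-source estimate: dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    dz_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
    return dz_nm / 1000.0  # convert nm -> micrometres

# Assumed source: 850 nm centre wavelength, 50 nm FWHM bandwidth
dz_um = oct_axial_resolution_um(850.0, 50.0)  # roughly 6.4 micrometres
```

The lateral resolution, by contrast, follows ordinary diffraction-limited focusing, which is why a single physical test target is needed to probe both quantities independently.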

  19. 3D ultrasound imaging in image-guided intervention.

    Science.gov (United States)

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  20. Optical experiments on 3D photonic crystals

    NARCIS (Netherlands)

    Koenderink, F.; Vos, W.

    2003-01-01

    Photonic crystals are optical materials that have an intricate structure with length scales of the order of the wavelength of light. The flow of photons is controlled in a manner analogous to how electrons propagate through semiconductor crystals, i.e., by Bragg diffraction and the formation of band

  1. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  2. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here instead employs the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF (intermediate) frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  3. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. The device is based on a depth-of-interaction measurement using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small-animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{99m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  4. A system for finding a 3D target without a 3D image

    Science.gov (United States)

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
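    The triangulation step described above can be sketched with the standard direct linear transform (DLT): each view contributes two linear constraints derived from its known 3x4 projection matrix, and the best-fit 3D point is the least-squares solution via SVD. This is an illustrative sketch, not the authors' exact minimization; the camera matrices and the target point below are made up:

```python
import numpy as np

def triangulate(projections, points_2d):
    """Least-squares triangulation (DLT): find the 3D point whose
    projections best match the located 2D target positions.

    projections: list of 3x4 camera projection matrices
    points_2d:   list of (u, v) image coordinates, one per view
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view gives two linear constraints on homogeneous X:
        # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector of A with the
    # smallest singular value (the approximate null space of A).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical calibrated views of the point (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 10.0, 1.0])
u1 = P1 @ X_true; u1 = u1[:2] / u1[2]
u2 = P2 @ X_true; u2 = u2[:2] / u2[2]
X_hat = triangulate([P1, P2], [u1, u2])
```

With more than two fluoroscopic views the same code over-determines the system, which is exactly where the least-squares formulation pays off against localization noise.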

  5. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  6. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...

  7. Rainbow Particle Imaging Velocimetry for Dense 3D Fluid Velocity Imaging

    KAUST Repository

    Xiong, Jinhui

    2017-04-11

    Despite significant recent progress, dense, time-resolved imaging of complex, non-stationary 3D flow velocities remains an elusive goal. In this work we tackle this problem by extending an established 2D method, Particle Imaging Velocimetry, to three dimensions by encoding depth into color. The encoding is achieved by illuminating the flow volume with a continuum of light planes (a “rainbow”), such that each depth corresponds to a specific wavelength of light. A diffractive component in the camera optics ensures that all planes are in focus simultaneously. For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints as well as the rainbow image formation model. We evaluate our method with both simulations and an experimental prototype setup.
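    The depth-from-color decoding at the heart of this method can be illustrated with a toy linear calibration; the wavelength span and volume depth below are assumed for illustration and are not the paper's values:

```python
import numpy as np

# Hypothetical calibration: the rainbow spans 450-650 nm across a
# 0-20 mm deep volume, so wavelength maps linearly onto depth.
LAMBDA_MIN, LAMBDA_MAX = 450.0, 650.0   # nm, assumed illumination span
DEPTH_MIN, DEPTH_MAX = 0.0, 20.0        # mm, assumed volume depth

def wavelength_to_depth(wavelength_nm):
    """Linear depth decoding: each light plane's wavelength tags one depth."""
    t = (np.asarray(wavelength_nm) - LAMBDA_MIN) / (LAMBDA_MAX - LAMBDA_MIN)
    return DEPTH_MIN + t * (DEPTH_MAX - DEPTH_MIN)

# Particles whose scattered light was measured at these wavelengths:
depths = wavelength_to_depth([450.0, 550.0, 650.0])
```

In the actual system the mapping comes from a calibration of the rainbow illumination, and the recovered per-particle depths feed the 3D optical-flow stage described in the abstract.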

  8. 3D Image Synthesis for B—Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm based on the AFD-matrix concept is presented for converting an object in 3D space into a 3D image in 3D discrete space.

  9. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    3D models have come into wide use with the spread of freely available software, and enormous numbers of images can now easily be acquired and used to create them. However, creating 3D models from a huge number of images takes much time and effort, so efficient 3D measurement is required without sacrificing accuracy. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, whereby the image selection problem is regarded as a combinatorial optimization problem to which the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and near-duplicate images are detected and removed. Experiments confirm the significance of the proposed method and its potential for efficient and accurate 3D measurement.
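    The selection idea can be illustrated with a toy greedy heuristic on an image connectivity graph. This is a deliberately simplified stand-in for the paper's graph-cuts formulation, with made-up image names:

```python
def greedy_select(connectivity):
    """Greedy cover on an image connectivity graph.

    connectivity: dict mapping image -> set of overlapping images.
    Repeatedly pick the image whose neighbourhood covers the most
    still-uncovered images, until every image is covered by (equal
    to or adjacent to) a selected one.
    """
    uncovered = set(connectivity)
    selected = []
    while uncovered:
        best = max(uncovered,
                   key=lambda n: len(({n} | connectivity[n]) & uncovered))
        selected.append(best)
        uncovered -= {best} | connectivity[best]
    return selected

# Hypothetical 4-image overlap graph: img2 overlaps everything else.
graph = {
    "img0": {"img1", "img2"},
    "img1": {"img0", "img2"},
    "img2": {"img0", "img1", "img3"},
    "img3": {"img2"},
}
subset = greedy_select(graph)
```

A graph-cuts solver would replace the greedy loop with a global optimization, but the modelling step, turning image overlap into a graph and selection into a covering problem, is the same.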

  10. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  11. Joint calibration of 3D resist image and CDSEM

    Science.gov (United States)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model evaluates the resist image at a specific depth within the photoresist and then extracts the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal at which depth the CD is obtained. This depth inconsistency between modeling and SEM makes model calibration difficult for low-k1 images. In this paper, the vertical resist profile is obtained by extending the model from a planar (2D) to a quasi-3D approach and comparing the CD from this new model with the SEM CD. For this quasi-3D model, photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and found to be better than that of the 2D model.

  12. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D...
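    The median-based fusion of candidate disparity fields, followed by warping the 2D query to synthesize the right view, can be sketched on toy-sized arrays. This is a simple forward-warping illustration without the paper's occlusion and hole-filling handling:

```python
import numpy as np

# Hypothetical disparity fields from three matched on-line stereopairs
# (each HxW, in pixels); they disagree because of content differences
# and noise, which is exactly what the median is meant to absorb.
d1 = np.full((4, 6), 2.0)
d2 = np.full((4, 6), 3.0)
d3 = np.full((4, 6), 2.0)
d3[0, 0] = 9.0  # an outlier for the median to reject

# Pixel-wise median fuses the candidate disparity fields robustly.
fused = np.median(np.stack([d1, d2, d3]), axis=0)

def render_right(left, disparity):
    """Shift each left-image pixel by its disparity to synthesize the
    right view (forward warping; newly exposed areas stay empty)."""
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

In the real pipeline the fused field comes from many photometrically matched stereopairs rather than three constant arrays, but the combination and warping steps have this shape.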

  13. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and enables measurement of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanner showed that tomographic images can be obtained with resolution as high as 3.2 mm-1, with only a 9% decrease in resolution for objects moving at velocities of 1 cm/s in 2D mode, and static spatial resolution of 3.55 mm-1, with only a 14% decrease in resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system to imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurement of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, characterizing the anisotropic viscoelastic nature of the vascular specimens. These values obtained from the dynamic CT system were not statistically different (p > 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  14. A 3D Optical Metamaterial Made by Self-Assembly

    KAUST Repository

    Vignolini, Silvia

    2011-10-24

    Optical metamaterials have unusual optical characteristics that arise from their periodic nanostructure. Their manufacture requires the assembly of 3D architectures with structure control on the 10-nm length scale. Such a 3D optical metamaterial, based on the replication of a self-assembled block copolymer into gold, is demonstrated. The resulting gold replica has a feature size that is two orders of magnitude smaller than the wavelength of visible light. Its optical signature reveals an archetypal Pendry wire metamaterial with linear and circular dichroism. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To address this problem, we developed a lightweight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  16. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    3D recovery from motion has received major research effort in computer vision in recent years. The main problem lies in the number of operations and memory accesses required by the majority of existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main features are maximum reuse of data, a low number of clock cycles to calculate the optical flow, and the precision with which 3D recovery is achieved. Results for the proposed architecture, as well as those from processor synthesis, are presented.
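    As a software reference point for what such a processor computes, optical flow can be estimated with the classic Lucas-Kanade least-squares scheme. Below is a minimal single-window sketch on a synthetic translating pattern; it illustrates the computation, not the paper's hardware pipeline:

```python
import numpy as np

def lucas_kanade_global(frame0, frame1):
    """Single-window Lucas-Kanade: least-squares solve for one (vx, vy)
    over the whole frame from spatial and temporal image gradients."""
    Iy, Ix = np.gradient(frame0)   # spatial derivatives (axis 0 is y)
    It = frame1 - frame0           # temporal derivative
    # Brightness constancy: Ix*vx + Iy*vy + It = 0 at every pixel.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy) in pixels per frame

# Smooth test pattern translated by (1, 0) pixels between frames
yy, xx = np.mgrid[0:64, 0:64].astype(float)
frame0 = np.sin(0.2 * xx) + np.cos(0.2 * yy)
frame1 = np.sin(0.2 * (xx - 1.0)) + np.cos(0.2 * yy)
vx, vy = lucas_kanade_global(frame0, frame1)
```

The memory-access pressure the abstract mentions comes from exactly these per-pixel gradient and normal-equation accumulations, which a dedicated parallel architecture can stream and reuse.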

  17. The Atlas3D project -- XXIX. The new look of early-type galaxies and surrounding fields disclosed by extremely deep optical images

    CERN Document Server

    Duc, Pierre-Alain; Karabal, Emin; Cappellari, Michele; Alatalo, Katherine; Blitz, Leo; Bournaud, Frederic; Bureau, Martin; Crocker, Alison F; Davies, Roger L; Davis, Timothy A; de Zeeuw, P T; Emsellem, Eric; Khochfar, Sadegh; Krajnovic, Davor; Kuntschner, Harald; McDermid, Richard M; Michel-Dansac, Leo; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Paudel, Sanjaya; Sarzi, Marc; Scott, Nicholas; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M

    2014-01-01

    Galactic archeology based on star counts is instrumental to reconstruct the past mass assembly of Local Group galaxies. The development of new observing techniques and data reduction, coupled with the use of sensitive large field of view cameras, now allows us to pursue this technique in more distant galaxies exploiting their diffuse low surface brightness (LSB) light. As part of the Atlas3D project, we have obtained with the MegaCam camera at the Canada-France-Hawaii Telescope extremely deep, multi-band images of nearby early-type galaxies. We present here a catalog of 92 galaxies from the Atlas3D sample, that are located in low to medium density environments. The observing strategy and data reduction pipeline, that achieve a gain of several magnitudes in the limiting surface brightness with respect to classical imaging surveys, are presented. The size and depth of the survey is compared to other recent deep imaging projects. The paper highlights the capability of LSB-optimized surveys at detecting new pr...

  18. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load-bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data: volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  19. Improved 3D Superresolution Localization Microscopy Using Adaptive Optics

    CERN Document Server

    Piro, Nicolas; Olivier, Nicolas; Manley, Suliana

    2014-01-01

    We demonstrate a new versatile method for 3D super-resolution microscopy by using a deformable mirror to shape the point spread function of our microscope in a continuous and controllable way. We apply this for 3D STORM imaging of microtubules.

  20. Development of 3D microwave imaging reflectometry in LHD (invited).

    Science.gov (United States)

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. The system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO.

  1. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Directory of Open Access Journals (Sweden)

    Eric Pirard

    2012-06-01

    In recent years, impressive progress has been made in digital imaging, and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between surfometric and volumetric principles. The literature review is as broad as possible, covering materials science as well as biology, while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though the techniques discussed are adequate for nanoscopic and microscopic particles, no particular size limit was imposed in compiling the review.

  2. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    Science.gov (United States)

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

    In this paper, a new millimeter wave 3D imaging radar is proposed. The user simply moves the radar along a circular track, and high-resolution 3D images can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions. It utilizes the inverse Radon transform to reconstruct 3D images. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results further validate the design. Because only a single transceiver circuit is needed, the result is a light, affordable and high-resolution 3D mmWave imaging radar. PMID:28629140

  3. From medical imaging data to 3D printed anatomical models.

    Science.gov (United States)

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
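    The first of the three steps, image segmentation, can be illustrated with a minimal threshold-based sketch on a synthetic CT-like volume. The Hounsfield-style values and the threshold below are assumed for illustration; real pipelines use the interactive tools surveyed in the paper:

```python
import numpy as np

# Synthetic CT-like volume: a bright sphere ("bone") in dark background.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
r2 = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2
volume = np.where(r2 <= 10 ** 2, 1000.0, -500.0)  # Hounsfield-like values

# Step 1, image segmentation: a global threshold separates bone-like
# voxels from background (real workflows refine this with region
# growing, morphological cleanup and manual edits).
BONE_THRESHOLD = 300.0  # assumed value
mask = volume > BONE_THRESHOLD

voxel_count = int(mask.sum())
```

The binary mask is what the next step, mesh extraction and refinement (e.g. a marching-cubes pass followed by smoothing), turns into a printable surface.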

  4. The ATLAS3D project - XXIX. The new look of early-type galaxies and surrounding fields disclosed by extremely deep optical images

    Science.gov (United States)

    Duc, Pierre-Alain; Cuillandre, Jean-Charles; Karabal, Emin; Cappellari, Michele; Alatalo, Katherine; Blitz, Leo; Bournaud, Frédéric; Bureau, Martin; Crocker, Alison F.; Davies, Roger L.; Davis, Timothy A.; de Zeeuw, P. T.; Emsellem, Eric; Khochfar, Sadegh; Krajnović, Davor; Kuntschner, Harald; McDermid, Richard M.; Michel-Dansac, Leo; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Paudel, Sanjaya; Sarzi, Marc; Scott, Nicholas; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M.

    2015-01-01

    Galactic archaeology based on star counts is instrumental to reconstruct the past mass assembly of Local Group galaxies. The development of new observing techniques and data reduction, coupled with the use of sensitive large field of view cameras, now allows us to pursue this technique in more distant galaxies exploiting their diffuse low surface brightness (LSB) light. As part of the ATLAS3D project, we have obtained with the MegaCam camera at the Canada-France-Hawaii Telescope extremely deep, multiband images of nearby early-type galaxies (ETGs). We present here a catalogue of 92 galaxies from the ATLAS3D sample, which are located in low- to medium-density environments. The observing strategy and data reduction pipeline, which achieve a gain of several magnitudes in the limiting surface brightness with respect to classical imaging surveys, are presented. The size and depth of the survey are compared to other recent deep imaging projects. The paper highlights the capability of LSB-optimized surveys at detecting new prominent structures that change the apparent morphology of galaxies. The intrinsic limitations of deep imaging observations are also discussed, among those, the contamination of the stellar haloes of galaxies by extended ghost reflections, and the cirrus emission from Galactic dust. The detection and systematic census of fine structures that trace the present and past mass assembly of ETGs are one of the prime goals of the project. We provide specific examples of each type of observed structures - tidal tails, stellar streams and shells - and explain how they were identified and classified. We give an overview of the initial results. The detailed statistical analysis will be presented in future papers.

  5. FELIX 3D display: an interactive tool for volumetric imaging

    Science.gov (United States)

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

    The FELIX 3D display belongs to the class of volumetric displays using the swept-volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon-mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate several identical or different projection units in parallel and to use screens appropriate for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy-to-transport system. It consists mainly of inexpensive standard, off-the-shelf components for easy implementation. This setup makes it a powerful and flexible tool to keep pace with today's rapid technological progress. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer-aided design, and scientific data visualization.

  6. An Optically Controlled 3D Cell Culturing System

    Directory of Open Access Journals (Sweden)

    Kelly S. Ishii

    2011-01-01

    Full Text Available A novel 3D cell culture system was developed and tested. The cell culture device consists of a microfluidic chamber on an optically absorbing substrate. Cells are suspended in a thermoresponsive hydrogel solution, and optical patterns are utilized to heat the solution, producing localized hydrogel formation around cells of interest. The hydrogel traps only the desired cells in place while also serving as a biocompatible scaffold for supporting the cultivation of cells in 3D. This is demonstrated with the trapping of MDCK II and HeLa cells. The light intensity from the optically induced hydrogel formation does not significantly affect cell viability.

  7. 3D imaging and wavefront sensing with a plenoptic objective

    Science.gov (United States)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive-index changes associated with turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Sodium artificial laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and, as many authors have noted, provide a new link between the wave-optics and computer-vision fields.

  8. 3D simulation for solitons used in optical fibers

    Science.gov (United States)

    Vasile, F.; Tebeica, C. M.; Schiopu, P.; Vladescu, M.

    2016-12-01

    This paper describes the 3D simulation of solitons used in optical fibers. The starting point is the nonlinear propagation equation, whose solutions are the solitons. The paper presents the simulation of the fundamental soliton in 3D together with the simulation of the second-order soliton in 3D. These simulations help in the study of optical fibers over long distances and of the interactions between solitons, and they aid the understanding of the nonlinear propagation equation and of nonlinear waves. The 3D simulations are obtained using the MATLAB programming language, and we can observe the fundamental difference between the fundamental soliton and the second-order/higher-order soliton in their evolution.
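    The propagation equation behind such simulations is the normalized nonlinear Schrödinger equation, i·u_z + (1/2)·u_tt + |u|²·u = 0, whose N = 1 solution u(0, t) = sech(t) propagates without changing its intensity profile. As a rough illustration, here is a minimal split-step Fourier sketch in Python (not the authors' MATLAB code) that verifies this shape invariance numerically:

    ```python
    import numpy as np

    def split_step_nlse(u0, dz, nz, t):
        """Strang split-step Fourier integration of i u_z + 0.5 u_tt + |u|^2 u = 0."""
        n = t.size
        dt = t[1] - t[0]
        w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
        half_disp = np.exp(-0.25j * w**2 * dz)       # half-step of linear dispersion
        u = u0.astype(complex)
        for _ in range(nz):
            u = np.fft.ifft(half_disp * np.fft.fft(u))
            u *= np.exp(1j * np.abs(u)**2 * dz)      # full nonlinear step
            u = np.fft.ifft(half_disp * np.fft.fft(u))
        return u

    t = np.linspace(-20, 20, 1024, endpoint=False)
    u0 = 1 / np.cosh(t)                              # fundamental (N = 1) soliton
    u1 = split_step_nlse(u0, dz=0.01, nz=500, t=t)   # propagate to z = 5
    # the fundamental soliton keeps its |u| profile during propagation
    print(np.max(np.abs(np.abs(u1) - np.abs(u0))))
    ```

    Launching N·sech(t) with N = 2 in the same integrator exhibits the periodic breathing of the second-order soliton mentioned in the abstract.
    
    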

  9. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

    This thesis deals with 3D reconstruction from image data. Possibilities and approaches to optical scanning are described. A handheld optical 3D scanner consists of a camera and a line-laser source mounted at a fixed angle with respect to the camera. A suitable pad with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, the detection of the laser line and the procedure for computing points on the object surface by triangulation are described.

  10. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full-parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review recent integral 3D display and image processing techniques for improving performance, such as viewing resolution and viewing angle. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in an integral imaging display with a lenslet array, the authors present a 3D integral imaging display in focused mode using a time-multiplexed display. Compared with the original integral imaging in focused mode, the authors use electrical masks and the corresponding elemental image set. In this system, the authors can generate resolution-improved 3D images with n×n pixels from each lenslet by using an n×n time-multiplexed display. Secondly, a new image processing technique for the generation of elemental images of 3D scenes is presented. With the information provided by a Kinect device, the array of elemental images for an integral imaging display is generated. Findings – In their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique, demonstrated on a 24-inch integral imaging system. The authors' method can be applied in practical applications. Next, the proposed method with the Kinect device gains a competitive advantage over other methods for the capture of integral images of large 3D scenes. The main advantages of fusing the Kinect and integral imaging concepts are the acquisition speed and the small amount of handled data. Originality/Value – In this paper, the authors review their recent methods related to integral 3D display and image processing techniques. Research type – general review.

  11. Optical 3D sensor for large objects in industrial application

    Science.gov (United States)

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system based on the fringe projection technique, named "kolibri 1500", is presented. It can be used to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for object handling. Automatic whole-body measurement is achieved by using sensor-head rotation and changeable object positions, fully computer-controlled. Multi-view measurement is realised using the concept of virtual reference points, so no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: a measurement volume extending from 400 mm up to 1500 mm maximum length, a measurement time between 2 min for 12 images and 20 min for 36 images, and a measurement accuracy below 50 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which are shown in the paper.

  12. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  13. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.
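    The key premise of CS acquisition described above is that a signal sparse in some basis can be recovered from far fewer measurements than its ambient dimension. This can be illustrated with a toy reconstruction; the following is a generic Python sketch using orthogonal matching pursuit on a random Gaussian sensing matrix, not the specific measurement scheme of the paper:

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(k):
            # pick the column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 5                      # signal length, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true                            # few compressive measurements (m << n)
    x_hat = omp(A, y, k)
    print(np.linalg.norm(x_hat - x_true))
    ```

    In the elemental-image setting, each low-resolution elemental image plays the role of y, and the reconstruction exploits the sparsity of the scene in a transform domain.
    
    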

  14. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    Full Text Available If 3D technologies allow an accurate and relevant recording of rock art, they also offer several particularly interesting applications for its analysis. Through point-cloud processing and simulations, they permit a wide range of manipulations concerning both the observation and the study of parietal works. In particular, they allow a refined perception of their volumetry, and they become very useful tools for shape comparison in the reconstruction of parietal chronologies and in the apprehension of analogies between different sites. These analytical tools are illustrated here by the original work carried out on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l'Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  15. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  16. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used to extract the 3D data of a scene. The main application areas are architecture, archeology and city planning. Though the raw scanner data have gray-scale values, the 3D data can be merged with colour camera image values to obtain a textured 3D model of the scene. These devices are also able to take a reliable copy of objects in 3D form, with a high level of accuracy. Therefore, the scanned scenes can be use...

  17. 3D Ground Penetrating Imaging Radar

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  18. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper introduces an experiment on the 3D reconstruction of NMR images via virtual instrumentation (LabVIEW). The main idea is based on the marching cubes algorithm and on image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications.

  19. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart; the motion of the heart is thus a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating deformation synchronized to the beating of the heart. If no special techniques are used in taking the CT images, there are discontinuities between neighboring CT images due to the heartbeat. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the corresponding positions of the CT images. Thus the CT images are geometrically transformed into optimal CT images that best fit the standard heart. Since correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for the interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuities by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.

  20. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  1. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  2. Holographic Image Plane Projection Integral 3D Display

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  3. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Science.gov (United States)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an ongoing project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - at the moment in a poor state of conservation - and the provision of metrics to quantify their deformations and damage.

  4. Optical fabrication of lightweighted 3D printed mirrors

    Science.gov (United States)

    Herzog, Harrison; Segal, Jacob; Smith, Jeremy; Bates, Richard; Calis, Jacob; De La Torre, Alyssa; Kim, Dae Wook; Mici, Joni; Mireles, Jorge; Stubbs, David M.; Wicker, Ryan

    2015-09-01

    Direct Metal Laser Sintering (DMLS) and Electron Beam Melting (EBM) 3D printing technologies were utilized to create lightweight, optical-grade mirrors out of AlSi10Mg aluminum and Ti6Al4V titanium alloys at the University of Arizona in Tucson. The mirror prototypes were polished to meet the λ/20 RMS and λ/4 P-V surface figure requirements. The intent of this project was to design topologically optimized mirrors with high specific stiffness and low surface displacement. Two models were designed using Altair Inspire software, and the mirrors had to endure the polishing process with the stiffness necessary to eliminate print-through. Mitigating porosity of the 3D printed mirror blanks was a challenge in reconciling new printing technologies with traditional optical polishing methods. The prototypes underwent Hot Isostatic Pressing (HIP) and heat treatment to improve density, eliminate porosity, and relieve internal stresses. Metal 3D printing allows nearly unlimited topological freedom in design and virtually eliminates the need for a machine shop when creating an optical-quality mirror. This research can enable greater complexity in the mounting supports of lightweighted mirrors and improve overall process efficiency. The project anticipates many future applications of lightweighted 3D printed mirrors, such as spaceflight. This paper covers the design, fabrication, polishing and testing of 3D printed mirrors, thermal/structural finite element analysis, and results.

  5. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
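    The first step of the pipeline above, constructing viewpoint images by taking one pixel from each microlens, amounts to a stride-based rearrangement of the integral image. A minimal Python sketch (assuming a square p×p pixel patch behind each microlens; not the authors' implementation):

    ```python
    import numpy as np

    def extract_viewpoints(integral_img, p):
        """Split an integral image into p*p viewpoint images, one pixel per microlens.

        integral_img: (H, W) array with H, W multiples of p, where p is the
        number of pixels per microlens side.  Returns shape (p, p, H//p, W//p).
        """
        H, W = integral_img.shape
        # axes 1 and 3 index the pixel position inside each microlens patch
        v = integral_img.reshape(H // p, p, W // p, p)
        return v.transpose(1, 3, 0, 2)

    img = np.arange(16).reshape(4, 4)     # toy 4x4 integral image, 2x2 microlenses
    views = extract_viewpoints(img, 2)
    print(views[0, 0])                    # top-left pixel of every microlens
    ```

    Each of the resulting p×p viewpoint images would then be decomposed with the 2D-DWT, after which the low-frequency bands are stacked for the 3D-DCT stage.
    
    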

  6. 3D flash lidar imager onboard UAV

    Science.gov (United States)

    Zhou, Guoqing; Liu, Yilong; Yang, Jiazhi; Zhang, Rongting; Su, Chengjie; Shi, Yujun; Zhou, Xiang

    2014-11-01

    A new-generation flash LiDAR sensor called GLidar-I is presented in this paper. The GLidar-I has been developed by the Guilin University of Technology in cooperation with the Guilin Institute of Optical Communications. The GLidar-I consists of a control and processing system, a transmitting system and a receiving system. Each of the components has been designed and implemented, and tests, experiments and validation for each component have been conducted. The experimental results demonstrate that the GLidar-I can effectively measure distances of about 13 m with an accuracy of about 11 cm in the lab.

  7. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  8. 3D imaging of neutron tracks using confocal microscopy

    Science.gov (United States)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects of monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to applications in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic-ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha-particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail, or track, is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. After etching, the plastics were treated with a 10-minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  9. 3D optical manipulation of a single electron spin

    CERN Document Server

    Geiselmann, Michael; Renger, Jan; Say, Jana M; Brown, Louise J; de Abajo, F Javier García; Koppens, Frank; Quidant, Romain

    2013-01-01

    Nitrogen vacancy (NV) centers in diamond are promising elemental blocks for quantum optics [1, 2], spin-based quantum information processing [3, 4], and high-resolution sensing [5-13]. Yet, fully exploiting these capabilities of single NV centers requires strategies to manipulate them accurately. Here, we use optical tweezers as a tool to achieve deterministic trapping and 3D spatial manipulation of individual nanodiamonds hosting a single NV spin. Remarkably, we find that the NV axis is nearly fixed inside the trap and can be controlled in situ by adjusting the polarization of the trapping light. By combining this unique spatial and angular control with coherent manipulation of the NV spin and fluorescence lifetime measurements near an integrated photonic system, we establish the optically trapped NV center as a novel route to both 3D vectorial magnetometry and sensing of the local density of optical states.

  10. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  11. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
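    The per-pixel degree-of-match computation in the matching stage is commonly realized as a normalized cross-correlation between an object template and the image. As a simplified single-orientation sketch in Python (the method described above additionally maximizes the score over object orientations):

    ```python
    import numpy as np

    def ncc_map(image, template):
        """Normalized cross-correlation of the template at every valid pixel offset."""
        th, tw = template.shape
        t = template - template.mean()
        tn = np.linalg.norm(t)
        out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                w = image[i:i + th, j:j + tw]
                w = w - w.mean()
                denom = np.linalg.norm(w) * tn
                out[i, j] = (w * t).sum() / denom if denom > 0 else 0.0
        return out

    rng = np.random.default_rng(2)
    img = rng.random((32, 32))
    tmpl = img[10:18, 5:13].copy()        # plant the template inside the image
    scores = ncc_map(img, tmpl)           # degree-of-match at each pixel offset
    print(np.unravel_index(np.argmax(scores), scores.shape))
    ```

    The cueing stage then reduces to finding unambiguous local maxima in such score maps and ranking the corresponding thumbnails by figure-of-merit.
    
    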

  12. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance.

    Science.gov (United States)

    Dibildox, Gerardo; Baka, Nora; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro; van Walsum, Theo

    2014-09-01

    The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P>0.1) but did improve robustness with regards to the initialization of the 3D models. The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
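    The rigid spatial alignment step can be illustrated in simplified form. The paper uses GMM-based probabilistic registration, which handles unknown correspondences; the sketch below instead solves the easier known-correspondence case with the closed-form Kabsch/Procrustes solution, i.e. the rotation-plus-translation estimate that such registration methods converge toward:

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Best rigid transform (R, t) aligning point set P onto Q (known correspondences)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    rng = np.random.default_rng(1)
    P = rng.standard_normal((50, 3))                 # e.g. CTA centerline points
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([1.0, -2.0, 0.5])
    Q = P @ R_true.T + t_true                        # e.g. biplane XA reconstruction
    R, t = kabsch(P, Q)
    aligned = P @ R.T + t
    print(np.max(np.abs(aligned - Q)))
    ```

    In the GMM formulation, each model point instead parameterizes a Gaussian component, and the transform is found by maximizing the likelihood of the other point set, which tolerates missing and spurious points.
    
    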

  13. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
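The core registration step above can be sketched in miniature: place an isotropic Gaussian on every point of the fixed model and search for the rigid rotation that maximises the likelihood of the moving point set. This is a drastically simplified 2D stand-in for the paper's GMM registration; the point sets, `sigma`, and the exhaustive angle search are all invented for illustration (the real method optimises a full rigid transform and adds orientation and bifurcation weighting).

```python
import math

def gmm_loglik(points, centers, sigma=0.5):
    """Log-likelihood of `points` under an equal-weight isotropic GMM
    with one component centred on each model point."""
    n = len(centers)
    total = 0.0
    for (px, py) in points:
        s = 0.0
        for (cx, cy) in centers:
            d2 = (px - cx) ** 2 + (py - cy) ** 2
            s += math.exp(-d2 / (2 * sigma ** 2))
        total += math.log(s / n + 1e-300)  # guard against log(0)
    return total

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for (x, y) in points]

def register_rotation(moving, fixed, steps=360):
    """Exhaustive search over rotations for the best GMM alignment
    (a gradient-based optimiser would be used in practice)."""
    best_theta, best_ll = 0.0, -float("inf")
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        ll = gmm_loglik(rotate(moving, theta), fixed)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

fixed = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (2.0, 2.0)]  # toy model points
moving = rotate(fixed, -0.4)             # same points, misaligned by 0.4 rad
theta = register_rotation(moving, fixed)
print(round(theta, 2))                   # recovers ~0.4 rad
```

The same likelihood machinery extends to translation and to the orientation-augmented mixtures described in the abstract.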

  14. Long-range and wide field of view optical coherence tomography for in vivo 3D imaging of large volume object based on akinetic programmable swept source.

    Science.gov (United States)

    Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K

    2016-11-01

    Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to further expand in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, associated with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 × 20 cm². The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure for the programmable akinetic light source so that the sensitivity of the OCT system over the ~10 cm ranging distance is substantially improved for imaging of large-volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm²). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications.

  15. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  16. Morphometrics, 3D Imaging, and Craniofacial Development

    Science.gov (United States)

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  17. Focusing optics of a parallel beam CCD optical tomography apparatus for 3D radiation gel dosimetry.

    Science.gov (United States)

    Krstajić, Nikola; Doran, Simon J

    2006-04-21

    Optical tomography of gel dosimeters is a promising and cost-effective avenue for quality control of radiotherapy treatments such as intensity-modulated radiotherapy (IMRT). Systems based on a laser coupled to a photodiode have so far shown the best results within the context of optical scanning of radiosensitive gels, but are very slow (approximately 9 min per slice) and poorly suited to measurements that require many slices. Here, we describe a fast, three-dimensional (3D) optical computed tomography (optical-CT) apparatus based on a broad, collimated beam obtained from a high-power LED and detected by a charge-coupled device (CCD). The main advantages of such a system are (i) an acquisition speed approximately two orders of magnitude higher than a laser-based system when 3D data are required, and (ii) a greater simplicity of design. This paper advances our previous work by introducing a new design of focusing optics, which takes information from a suitably positioned focal plane and projects an image onto the CCD. An analysis of the ray optics is presented, which explains the roles of telecentricity, focusing, acceptance angle and depth-of-field (DOF) in the formation of projections. A discussion of the approximation involved in measuring the line integrals required for filtered backprojection reconstruction is given. Experimental results demonstrate (i) the effect on projections of changing the position of the focal plane of the apparatus, (ii) how to measure the acceptance angle of the optics, and (iii) the ability of the new scanner to image both absorbing and scattering gel phantoms. The quality of the reconstructed images is very promising and suggests that the new apparatus may be useful in a clinical setting for fast and accurate 3D dosimetry.
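The line integrals that the scanner approximates for filtered backprojection can be illustrated with a toy parallel-beam projector. The phantom values and geometry below are invented; in a real scanner the integrals are recovered from CCD intensities via the Beer-Lambert relation I = I0·exp(−∫μ dl).

```python
import math

# Toy 2D attenuation phantom (a small absorbing gel region in air); each
# value is a per-pixel attenuation coefficient.
phantom = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.5, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

def projection_0deg(img):
    """Line integrals along rows (rays travelling left to right)."""
    return [sum(row) for row in img]

def projection_90deg(img):
    """Line integrals along columns (rays travelling top to bottom)."""
    return [sum(col) for col in zip(*img)]

def transmitted_intensity(integrals, i0=1.0):
    """Detector reading per ray via Beer-Lambert: I = I0 * exp(-integral)."""
    return [i0 * math.exp(-p) for p in integrals]

p0 = projection_0deg(phantom)
print(p0)                                                # → [0.0, 1.0, 1.5, 0.0]
print([round(v, 3) for v in transmitted_intensity(p0)])  # → [1.0, 0.368, 0.223, 1.0]
```

Collecting such projections over many angles gives the sinogram that filtered backprojection inverts.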

  18. SU-E-T-294: Simulations to Investigate the Feasibility of ‘dry’ Optical-CT Imaging for 3D Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Chisholm, K [Duke University, Durham, NC (United States); Rankine, L [Washington University, Saint Louis, MO (United States); Oldham, M [Duke University Medical Center, Durham, NC (United States)

    2014-06-01

    Purpose: To perform simulations investigating the feasibility of "dry" optical-CT, and to determine optimal design and scanning parameters for a novel dry-tank telecentric optical-CT 3D dosimetry system. Such a system would have important advantages in terms of practical convenience and reduced cost. Methods: A Matlab-based ray-tracing simulation platform, ScanSim, was used to model a telecentric system with a polyurethane dry tank, a cylindrical dosimeter, and surrounding fluid. The program's capabilities were expanded to cover the geometry and physics of dry scanning. To characterize the effects of refractive index (RI) mismatches, simulations were run for several dosimeter (RI = 1.48–1.50) and fluid (RI = 1.33–1.55) combinations. Additional simulations examined the effect of increasing the gap size (1–5 mm) between the dosimeter and the tank wall, and of changing the telecentric lens tolerance (0.5°–5°). The evaluation metric is the usable radius: the distance from the dosimeter center within which the measured and true doses differ by less than 2%. Results: As the tank/dosimeter RI mismatch increases from 0 to 0.02, the usable radius decreases from 97.6% to 50.2%. The optimal fluid RI for matching is lower than either the tank or dosimeter RI. Changing the gap size has drastic effects on the usable radius, requiring more closely matched fluid at large gap sizes. Increasing the telecentric tolerance through a range from 0.5° to 5.0° improved the usable radius for every combination of media. Conclusion: Dry optical-CT with telecentric lenses is feasible when the dosimeter and tank RIs are closely matched (<0.01 difference), or when data in the periphery are not required. The ScanSim tool proved very useful in situations where the tank and dosimeter have slight differences in RI, by enabling estimation of the optimal RI of the small amount of fluid still required. Slightly spoiling the telecentric beam by increasing the tolerance helps recover the usable radius.
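The role of RI matching can be illustrated with Snell's law at a cylindrical dosimeter surface: a telecentric ray entering at radial offset r is bent by an angle that grows with the mismatch, and rays bent beyond the lens acceptance tolerance are lost. This toy model is not ScanSim's actual physics (the geometry and the single-interface assumption are invented), but it reproduces the qualitative shrinking of the usable radius.

```python
import math

def deviation(r, radius, n_fluid, n_dosimeter):
    """Bend angle of a telecentric ray entering a cylindrical dosimeter
    at radial offset r (Snell's law at the curved interface)."""
    theta_i = math.asin(r / radius)                      # incidence angle
    theta_t = math.asin(n_fluid * math.sin(theta_i) / n_dosimeter)
    return abs(theta_i - theta_t)

def usable_radius(radius, n_fluid, n_dosimeter, tol_deg=0.5, steps=1000):
    """Largest radial offset (as a fraction of the dosimeter radius)
    where the bend stays within the lens acceptance tolerance."""
    tol = math.radians(tol_deg)
    usable = 0.0
    for k in range(1, steps):
        r = radius * k / steps
        if deviation(r, radius, n_fluid, n_dosimeter) > tol:
            break
        usable = r
    return usable / radius

matched = usable_radius(1.0, 1.50, 1.50)     # perfectly matched media
mismatched = usable_radius(1.0, 1.48, 1.50)  # 0.02 RI mismatch
print(round(matched, 2))                      # → 1.0 (whole radius usable)
print(round(mismatched, 2))                   # noticeably smaller
```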

  19. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence, and resolving the motion parameters. Finally, experimental results of acquiring the motion parameters of objects moving in a straight line with uniform velocity or acceleration, based on real binocular sequence images, are presented.
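For a rectified binocular rig, the triangulation and motion-parameter steps reduce to depth-from-disparity plus a finite difference across frames. The camera parameters and image coordinates below are invented for illustration; the paper's pipeline additionally handles calibration and feature matching.

```python
def triangulate(xl, yl, xr, f, baseline):
    """Rectified stereo triangulation: disparity d = xl - xr gives depth
    Z = f*B/d; X and Y follow by back-projecting the left-image point."""
    d = xl - xr
    Z = f * baseline / d
    return (xl * Z / f, yl * Z / f, Z)

def velocity(p0, p1, dt):
    """Per-axis velocity between two triangulated positions."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

f, B = 800.0, 0.12                          # focal length (px), baseline (m); invented
p_t0 = triangulate(40.0, 10.0, 16.0, f, B)  # tracked feature at t = 0.0 s
p_t1 = triangulate(50.0, 10.0, 26.0, f, B)  # same feature at t = 0.1 s
print([round(c, 3) for c in p_t0])          # → [0.2, 0.05, 4.0]
print([round(v, 3) for v in velocity(p_t0, p_t1, 0.1)])  # → [0.5, 0.0, 0.0]
```

Fitting a line or parabola to such position samples over many frames yields the uniform-velocity or uniform-acceleration parameters the paper estimates.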

  20. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    Full Text Available In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and noted LI. The set LI is often bulky, which is why we introduce the X-means technique to reduce its size, instead of the K-means used in the old version. A 2D binary image descriptor is introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.
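The clustering step that reduces the level-image set can be sketched with plain k-means over descriptor vectors; X-means differs mainly in choosing the number of clusters automatically (e.g. via a BIC-style criterion). The vectors below are invented toy descriptors, not actual level-image features.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    n = len(cluster)
    return tuple(sum(v[i] for v in cluster) / n for i in range(len(cluster[0])))

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over descriptor vectors; the X-means variant used in
    the paper additionally chooses k itself."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda j: dist2(v, centers[j]))
            clusters[nearest].append(v)
        centers = [centroid(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Invented toy "level image" descriptors forming two tight groups; the
# cluster centres act as the reduced representative set.
vectors = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0),
           (1.0, 0.9), (0.9, 1.0), (1.0, 1.0)]
centers = kmeans(vectors, k=2)
print(sorted(round(c[0], 2) for c in centers))   # one centre near 0, one near 1
```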

  1. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford, 2014. Subject: additive manufacturing (3D printing). Research context and problem: learning-curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  2. MARVIN : high speed 3D imaging for seedling classification

    NARCIS (Netherlands)

    Koenderink, N.J.J.P.; Wigham, M.L.I.; Golbach, F.B.T.F.; Otten, G.W.; Gerlich, R.J.H.; Zedde, van de H.J.

    2009-01-01

    The next generation of automated sorting machines for seedlings demands 3D models of the plants to be made at high speed and with high accuracy. In our system the 3D plant model is created based on the information of 24 RGB cameras. Our contribution is an image acquisition technique based on

  3. 3D manipulation with a scanning near field optical nanotweezers

    CERN Document Server

    Berthelot, J; Juan, M L; Kreuzer, M P; Renger, J; Quidant, R

    2013-01-01

    Recent advances in nanotechnologies have prompted the need for tools to accurately and non-invasively manipulate individual nano-objects. Among possible strategies, optical forces have been foreseen to provide researchers with nano-optical tweezers capable of trapping a specimen and moving it in 3D. In practice, though, the combination of the weak optical forces involved and photothermal issues has thus far prevented their experimental realization. Here, we demonstrate the first 3D optical manipulation of single 50 nm dielectric objects with near-field nano-tweezers. The nano-optical trap is built by engineering a bowtie plasmonic aperture at the extremity of a tapered metal-coated optical fiber. Both the trapping operation and its monitoring are performed through the optical fiber, making these nano-tweezers totally autonomous and free of bulky optical elements. The achieved trapping performance allows the trapped specimen to be moved over tens of micrometers during several minutes with very low in-trap intensities. This n...

  4. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  5. Study on portable optical 3D coordinate measuring system

    Science.gov (United States)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three high-stability infrared LEDs are set on a hand-held target to provide measuring features and establish the target coordinate system. Ray-intersection-based in-field directional calibration is performed for the intersecting binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled by Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained by the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball and residual error correction, the object point can be resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.

  6. Advanced optical 3D scanners using DMD technology

    Science.gov (United States)

    Muenstermann, P.; Godding, R.; Hermstein, M.

    2017-02-01

    Optical 3D measurement techniques are state-of-the-art for highly precise, non-contact surface scanners, not only in industrial development but also in near-production and even in-line configurations. The need for automated systems with very high accuracy and a clear implementation of national precision standards is growing rapidly due to expanding international quality guidelines, increasing production transparency and new concepts related to the demands of the fourth industrial revolution. The presentation gives an overview of the present technical concepts for optical 3D scanners and their benefits for customers in various applications, not only in quality control but also in design centers and in medical applications. The advantages of DMD-based systems are discussed and compared to other approaches. Looking at today's 3D scanner market, there is a confusing variety of solutions, from low-price products to high-end systems. Many of them are aimed at a very specific target group or at special applications. The article clarifies the differences between the approaches and discusses some key features which are necessary to render optical measurement systems suitable for industrial environments. The paper is completed by examples of DMD-based systems, e.g. RGB true-color systems with very high accuracy such as the StereoScan neo of AICON 3D Systems. Typical applications and the benefits for customers using such systems are described.

  7. 3D reconstruction, visualization, and measurement of MRI images

    Science.gov (United States)

    Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

    1999-03-01

    This paper primarily focuses on manipulating 2D medical image data, which often come in as Magnetic Resonance images, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and the extensive use of parallel, distributed and neural network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, to use all the 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning, time and cost reduction.
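One of the simplest volume-rendering operators for a stack of 2D slices is the maximum-intensity projection. The sketch below is a minimal stand-in for the volume rendering algorithms mentioned above, with invented voxel values.

```python
def mip(volume):
    """Maximum-intensity projection along the slice axis: collapse the
    stack into one 2D image by keeping the brightest voxel on each ray."""
    ny, nx = len(volume[0]), len(volume[0][0])
    return [[max(sl[y][x] for sl in volume) for x in range(nx)]
            for y in range(ny)]

stack = [            # two invented 2x2 slices
    [[0, 1], [2, 0]],
    [[5, 0], [1, 3]],
]
print(mip(stack))    # → [[5, 1], [2, 3]]
```

Full volume rendering replaces the `max` with compositing of opacity and colour along each ray, but the ray-per-output-pixel structure is the same.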

  8. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study is available on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each package. The study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  9. A colour image reproduction framework for 3D colour printing

    Science.gov (United States)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full-colour 3D printing are introduced, and a framework for the colour image reproduction process for 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that by applying the proposed colour image reproduction framework, colour reproduction performance can be significantly enhanced. With post-print colour corrections, a further improvement in the colour process is achieved for 3D printed objects.

  10. Parallel computing helps 3D depth imaging, processing

    Energy Technology Data Exchange (ETDEWEB)

    Nestvold, E. O. [IBM, Houston, TX (United States); Su, C. B. [IBM, Dallas, TX (United States); Black, J. L. [Landmark Graphics, Denver, CO (United States); Jack, I. G. [BP Exploration, London (United Kingdom)

    1996-10-28

    The significance of 3D seismic data in the petroleum industry during the past decade cannot be overstated. Having started as a technology too expensive to be utilized except by major oil companies, 3D technology is now routinely used by independent operators in the US and Canada. As with all emerging technologies, documentation of successes has been limited. There are some successes, however, that have been summarized in the literature in the recent past. Key technological developments contributing to this success have been major advances in RISC workstation technology, 3D depth imaging, and parallel computing. This article presents the basic concepts of parallel seismic computing, showing how it impacts both 3D depth imaging and more-conventional 3D seismic processing.

  11. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    Full Text Available In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the Mandible and Maxillary, based on an adaptive triangulation that manages contours with complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model: automatic correction of the occlusal positioning of the Mandible and the Maxillary, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of allowing telediagnosis and remote treatment.

  12. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.
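A minimal example of an amplitude-based attribute is sliding-window RMS amplitude along a single trace, which highlights short-range amplitude anomalies. The trace values below are invented; industrial attribute suites use many more sophisticated amplitude and phase measures over full 3D volumes.

```python
def rms_amplitude(trace, window=3):
    """Sliding-window RMS amplitude along a seismic trace; peaks mark
    strong-reflector intervals."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        seg = trace[max(0, i - half):i + half + 1]  # clipped at the ends
        out.append((sum(s * s for s in seg) / len(seg)) ** 0.5)
    return out

trace = [0.0, 0.1, -0.2, 1.5, -1.4, 0.2, 0.0]  # invented trace, one bright reflector
attr = rms_amplitude(trace)
print([round(v, 2) for v in attr])             # attribute peaks around the reflector
```

Applied trace-by-trace across a 3D volume, such attributes produce the anomaly maps used to delineate faults and damage zones.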

  13. Implementation of 3D Optical Scanning Technology for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Abdil Kuş

    2009-03-01

    Full Text Available Reverse engineering (RE) is a powerful tool for generating a CAD model from the 3D scan data of a physical part that lacks documentation or has changed from the original CAD design of the part. The process of digitizing a part and creating a CAD model from 3D scan data is less time-consuming and provides greater accuracy than manually measuring the part and designing it from scratch in CAD. 3D optical scanning technology is one of the measurement methods that have evolved over the last few years, and it is used in a wide range of areas, from industrial applications to art and cultural heritage. It is also used extensively in the automotive industry for applications such as part inspection, scanning of tools without CAD definition, scanning the casting for definition of the stock (i.e., the amount of material to be removed from the surface of the casting) to model for CAM programs, and reverse engineering. In this study, two scanning experiments for automotive applications are illustrated. The first examines the process from scanning to re-manufacturing a damaged sheet metal cutting die using a 3D scanning technique, and the second compares the scanned point cloud data to 3D CAD data for inspection purposes. Furthermore, the deviations of the part holes are determined using different lenses and scanning parameters.

  14. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  15. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of the reconstructed 3D volume, first, intensity variations in the images are corrected by an intensity standardization process which maps the image intensity scale to a standard scale in which similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing the standardized slices into groups. Third, to improve the quality of the reconstruction process, an automatic best-reference-slice selection algorithm is developed, based on an iterative assessment of image entropy and the mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
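The reference-slice idea can be sketched with the Shannon entropy of each slice's grey-level histogram, picking the most information-rich slice. This is a simplified stand-in for the paper's iterative entropy-plus-registration-MSE assessment, and the slices below are invented.

```python
import math
from collections import Counter

def entropy(image):
    """Shannon entropy (bits) of the grey-level histogram of a 2D slice."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

def best_reference(slices):
    """Pick the most information-rich slice as the registration reference."""
    return max(range(len(slices)), key=lambda i: entropy(slices[i]))

flat   = [[0, 0], [0, 0]]     # entropy 0 bits
binary = [[0, 1], [0, 1]]     # entropy 1 bit
rich   = [[0, 1], [2, 3]]     # entropy 2 bits
print(best_reference([flat, binary, rich]))   # → 2
```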

  16. Lossless Compression of Medical Images Using 3D Predictors.

    Science.gov (United States)

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio

    2017-06-09

    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), which is one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16-bit depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8-bit and 16-bit depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on the MRP algorithm.
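The essence of predictive lossless coding is to transmit only the residual between each voxel and a causal prediction from already-decoded neighbours. Below is a toy 3D predictor (a simple neighbour average, invented here for illustration; MRP's predictors are adaptive and rate-optimised) showing the lossless round trip.

```python
def predict(vol, z, y, x):
    """Causal 3D prediction: average of the left, upper and previous-slice
    neighbours that exist (all already decoded at this point)."""
    neigh = []
    if x > 0:
        neigh.append(vol[z][y][x - 1])
    if y > 0:
        neigh.append(vol[z][y - 1][x])
    if z > 0:
        neigh.append(vol[z - 1][y][x])
    return round(sum(neigh) / len(neigh)) if neigh else 0

def encode(vol):
    """Residuals (value minus prediction); entropy-coding these small
    numbers is what yields the compression."""
    return [[[vol[z][y][x] - predict(vol, z, y, x)
              for x in range(len(vol[z][y]))]
             for y in range(len(vol[z]))]
            for z in range(len(vol))]

def decode(res):
    vol = [[[0] * len(row) for row in sl] for sl in res]
    for z in range(len(res)):
        for y in range(len(res[z])):
            for x in range(len(res[z][y])):
                vol[z][y][x] = res[z][y][x] + predict(vol, z, y, x)
    return vol

volume = [[[10, 12], [11, 13]],      # invented 2x2x2 "CT" block
          [[10, 12], [12, 14]]]
residuals = encode(volume)
assert decode(residuals) == volume   # lossless round trip
print(residuals[0][0])               # → [10, 2]
```

Because the decoder recomputes the same predictions from already-reconstructed voxels, the scheme is exactly lossless by construction.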

  17. Continuous Optical 3D Printing of Green Aliphatic Polyurethanes.

    Science.gov (United States)

    Pyo, Sang-Hyun; Wang, Pengrui; Hwang, Henry H; Zhu, Wei; Warner, John; Chen, Shaochen

    2017-01-11

    Photosensitive diurethanes were prepared from a green chemistry synthesis pathway based on methacrylate-functionalized six-membered cyclic carbonate and biogenic amines. A continuous optical 3D printing method for the diurethanes was developed to create user-defined gradient stiffness and smooth complex surface microstructures in seconds. The green chemistry-derived polyurethane (gPU) showed high optical transparency, and we demonstrate the ability to tune the material stiffness of the printed structure along a gradient by controlling the exposure time and selecting various amine compounds. High-resolution 3D biomimetic structures with smooth curves and complex contours were printed using our gPU. High cell viability (over 95%) was demonstrated during cytocompatibility testing using C3H 10T1/2 cells seeded directly on the printed structures.

  18. Changes in quantitative 3D shape features of the optic nerve head associated with age

    Science.gov (United States)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor the progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have statistically significant associations with age; such quantitative measures may also prove useful in studying glaucoma, disease progression and outcomes, and genetic factors.

  19. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output of the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
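
    The row-DCT/column-DST transform stage can be sketched as follows. This is a minimal version with a uniform quantizer; the arithmetic coder and minimization encoder of the paper are omitted, and the orthonormal transform matrices are built explicitly so the sketch is self-contained:

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II matrix (rows are basis vectors)
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    m = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dst2_matrix(n):
    # Orthonormal DST-II matrix
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    m = np.sqrt(2.0 / n) * np.sin(np.pi * (k + 1) * (2 * i + 1) / (2 * n))
    m[-1] /= np.sqrt(2.0)
    return m

n = 32
C, S = dct2_matrix(n), dst2_matrix(n)

def forward(img, q):
    # Step 1: 1D DCT along each row; Step 2: 1D DST along each column;
    # then uniform quantization of the coefficients.
    coef = S @ (img @ C.T)
    return np.round(coef / q)

def inverse(coef, q):
    # Dequantize, then invert the DST (columns) and DCT (rows).
    return S.T @ (coef * q) @ C

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(n, n)).astype(float)
rec = inverse(forward(img, q=0.01), q=0.01)
```

    Because both matrices are orthonormal, the reconstruction error is bounded by the quantization error; coarser `q` trades fidelity for rate, which is where the paper's entropy coders take over.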

  20. Optical Sensors and Methods for Underwater 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Miquel Massot-Campos

    2015-12-01

    Full Text Available This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered.

  1. Military efforts in nanosensors, 3D printing, and imaging detection

    Science.gov (United States)

    Edwards, Eugene; Booth, Janice C.; Roberts, J. Keith; Brantley, Christina L.; Crutcher, Sihon H.; Whitley, Michael; Kranz, Michael; Seif, Mohamed; Ruffin, Paul

    2017-04-01

    A team of researchers and support organizations, affiliated with the Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), has initiated multidiscipline efforts to develop nano-based structures and components for advanced weaponry, aviation, and autonomous air/ground systems applications. The main objective of this research is to exploit unique phenomena for the development of novel technology to enhance warfighter capabilities and produce precision weaponry. The key technology areas that the authors are exploring include nano-based sensors, analysis of 3D printing constituents, and nano-based components for imaging detection. By integrating nano-based devices, structures, and materials into weaponry, the Army can revolutionize existing (and future) weaponry systems by significantly reducing the size, weight, and cost. The major research thrust areas include the development of carbon nanotube sensors to detect rocket motor off-gassing; the application of current methodologies to assess materials used for 3D printing; and the assessment of components to improve imaging seekers. The status of current activities, associated with these key areas and their implementation into AMRDEC's research, is outlined in this paper. Section #2 outlines output data, graphs, and overall evaluations of carbon nanotube sensors placed on a 16 element chip and exposed to various environmental conditions. Section #3 summarizes the experimental results of testing various materials and resulting components that are supplementary to additive manufacturing/fused deposition modeling (FDM). Section #4 recapitulates a preliminary assessment of the optical and electromechanical components of seekers in an effort to propose components and materials that can work more effectively.

  2. Open-source 3D-printable optics equipment.

    Science.gov (United States)

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost, public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs in an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  3. Open-source 3D-printable optics equipment.

    Directory of Open Access Journals (Sweden)

    Chenlong Zhang

Full Text Available Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost, public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs in an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling easily adapted, customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  4. Validation of optical codes based on 3D nanostructures

    Science.gov (United States)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask together with a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forest machine learning algorithms.
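
    The propagation model described above can be sketched in a few lines of NumPy, assuming a scalar field on a square grid (the wavelength, pixel pitch, and distance below are illustrative, and evanescent components are simply passed unchanged for brevity):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    # Propagate a complex optical field a distance z via the angular
    # spectrum of plane waves. Evanescent components are passed
    # unchanged here for simplicity (|H| = 1 everywhere).
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[None, :] ** 2 + fx[:, None] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - fx2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

rng = np.random.default_rng(0)
n = 64
plain = np.ones((n, n))                          # plain-text amplitude
mask = np.exp(2j * np.pi * rng.random((n, n)))   # random phase mask
speckle = angular_spectrum(plain * mask, 633e-9, 10e-6, 5e-3)
intensity = np.abs(speckle) ** 2
```

    The propagated intensity is speckle-like (high contrast), which is why the encoded message cannot be read by visual inspection; the diffuser's amplitude blurring described in the abstract could be added as a convolution of the amplitude before propagation.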

  5. Large optical 3D MEMS switches in access networks

    Science.gov (United States)

    Madamopoulos, Nicholas; Kaman, Volkan; Yuan, Shifu; Jerphagnon, Olivier; Helkey, Roger; Bowers, John E.

    2007-09-01

Interest is high among residential customers and businesses for advanced broadband services such as fast Internet access, electronic commerce, video-on-demand, digital broadcasting, teleconferencing and telemedicine. In order to satisfy this growing demand from end-customers, access technologies such as fiber-to-the-home/building (FTTH/B) are increasingly being deployed. Carriers can reduce maintenance costs, minimize technology obsolescence and introduce new services easily by reducing the number of active elements in the fiber access network. However, having a passive optical network (PON) also introduces operational and maintenance challenges. Increased diagnostic monitoring capability becomes a necessity as more and more fibers are provisioned to deliver services to end-customers. This paper demonstrates the clear advantages that large 3D optical MEMS switches offer in solving these access network problems. The advantages in preventative maintenance, remote monitoring, and test and diagnostic capability are highlighted. The low optical insertion loss across all of the switch's optical connections enables the monitoring, grooming and serving of a large number of PON lines and customers. Furthermore, the 3D MEMS switch is transparent to optical wavelengths and data formats, making it easy to incorporate future upgrades, such as higher bit rates or a DWDM overlay to a PON.

  6. Total body irradiation with a compensator fabricated using a 3D optical scanner and a 3D printer

    Science.gov (United States)

    Park, So-Yeon; Kim, Jung-in; Joo, Yoon Ha; Lee, Jung Chan; Park, Jong Min

    2017-05-01

We propose bilateral total body irradiation (TBI) utilizing a 3D printer and a 3D optical scanner. We acquired surface information of an anthropomorphic phantom with the 3D scanner and fabricated the 3D compensator with the 3D printer, which could continuously compensate for the lateral missing tissue of an entire body from the beam’s eye view. To test the system’s performance, we measured doses with optically stimulated luminescent dosimeters (OSLDs) as well as EBT3 films with the anthropomorphic phantom during TBI without a compensator, conventional bilateral TBI, and TBI with the 3D compensator (3D TBI). The 3D TBI showed the most uniform dose delivery to the phantom. From the OSLD measurements of the 3D TBI, the deviations between the measured doses and the prescription dose ranged from -6.7% to 2.4% inside the phantom and from -2.3% to 0.6% on the phantom’s surface. From the EBT3 film measurements, the prescription dose could be delivered to the entire body of the phantom within ±10% accuracy, except for the chest region, where tissue heterogeneity is extreme. The 3D TBI doses were much more uniform than those of the other irradiation techniques, especially in the anterior-to-posterior direction. The 3D TBI was advantageous, owing to its uniform dose delivery as well as its efficient treatment procedure.

  7. Total body irradiation with a compensator fabricated using a 3D optical scanner and a 3D printer.

    Science.gov (United States)

    Park, So-Yeon; Kim, Jung-In; Joo, Yoon Ha; Lee, Jung Chan; Park, Jong Min

    2017-05-07

We propose bilateral total body irradiation (TBI) utilizing a 3D printer and a 3D optical scanner. We acquired surface information of an anthropomorphic phantom with the 3D scanner and fabricated the 3D compensator with the 3D printer, which could continuously compensate for the lateral missing tissue of an entire body from the beam's eye view. To test the system's performance, we measured doses with optically stimulated luminescent dosimeters (OSLDs) as well as EBT3 films with the anthropomorphic phantom during TBI without a compensator, conventional bilateral TBI, and TBI with the 3D compensator (3D TBI). The 3D TBI showed the most uniform dose delivery to the phantom. From the OSLD measurements of the 3D TBI, the deviations between the measured doses and the prescription dose ranged from -6.7% to 2.4% inside the phantom and from -2.3% to 0.6% on the phantom's surface. From the EBT3 film measurements, the prescription dose could be delivered to the entire body of the phantom within ±10% accuracy, except for the chest region, where tissue heterogeneity is extreme. The 3D TBI doses were much more uniform than those of the other irradiation techniques, especially in the anterior-to-posterior direction. The 3D TBI was advantageous, owing to its uniform dose delivery as well as its efficient treatment procedure.

  8. Building 3D scenes from 2D image sequences

    Science.gov (United States)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  9. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

We have created a randomly distributed nanocone substrate on silicon, coated with silver, for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed that it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, giving rise to the enhanced fluorescence. Spectral analysis suggests that the nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G), due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold compared to that on a glass substrate. We believe that strong scattering within the nanostructured area, coupled with random scattering inside the cell, resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  10. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  11. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  12. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  13. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  14. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. But insight into the 3D geometry of the microstructure of materials, and measurement of its characteristics, are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on the processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis.

  15. Visualization and Analysis of 3D Microscopic Images

    Science.gov (United States)

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  16. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

This paper presents a method that uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as an intersection of all of the shapes expected under each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
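
    The rough-set splitting can be illustrated directly: given several expert delineations, the positive region is where all experts agree the voxel is inside, the negative region is where none marks it, and the boundary region is the disagreement in between (the expert masks below are synthetic stand-ins):

```python
import numpy as np

# Three hypothetical expert delineations of the same ROI on one slice,
# modelled as slightly shifted copies of a square mask.
base = np.zeros((16, 16), dtype=bool)
base[4:12, 4:12] = True
experts = [np.roll(base, s, axis=1) for s in (-1, 0, 1)]

lower = np.logical_and.reduce(experts)   # positive region: all agree "inside"
upper = np.logical_or.reduce(experts)    # outside this lies the negative region
boundary = upper & ~lower                # uncertain voxels between the two
```

    The lower approximation (`lower`) is the intersection of the expected shapes mentioned in the abstract; the boundary region is where further evidence or refinement is needed.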

  17. 3D Image Reconstruction: Determination of Pattern Orientation

    Energy Technology Data Exchange (ETDEWEB)

    Blankenbecler, Richard

    2003-03-13

The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  18. Determining 3D flow fields via multi-camera light field imaging.

    Science.gov (United States)

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
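
    The synthetic aperture (SA) refocusing step can be sketched with integer shifts: each camera's image is shifted in proportion to its baseline offset and the stack is averaged, so scene points at the chosen depth realign while points at other depths blur out (a toy pinhole setup, not the authors' calibrated reparameterization):

```python
import numpy as np

def sa_refocus(images, offsets, alpha):
    # Synthetic-aperture refocusing: shift each camera's image in
    # proportion to its baseline offset, then average the stack.
    # Scene points at the depth matching 'alpha' realign and sharpen.
    shifted = [np.roll(img, (int(alpha * oy), int(alpha * ox)), axis=(0, 1))
               for img, (ox, oy) in zip(images, offsets)]
    return np.mean(shifted, axis=0)

# A single bright point seen by a 2x2 camera array with disparity d.
offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
d = 3
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
views = [np.roll(scene, (-d * oy, -d * ox), axis=(0, 1)) for ox, oy in offsets]

focused = sa_refocus(views, offsets, alpha=d)    # refocus at the point's depth
defocused = sa_refocus(views, offsets, alpha=0)  # naive average, no alignment
```

    Sweeping `alpha` generates the 3D focal stack described in the abstract; in the occluded bubbly-flow case, a point still reinforces in the cameras that can see it, which is what lets the method see "through" partial occlusions.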

  19. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  20. Surface Explorations: 3D Moving Images as Cartographies of Time.

    NARCIS (Netherlands)

    Verhoeff, N.

    2016-01-01

    Moving images of travel and exploration have a long history. In this essay I will examine how the trope of navigation in 3D moving images can work towards an intimate and haptic encounter with other times and other places – elsewhen and elsewhere. The particular navigational construction of space in

  1. 3D imaging from theory to practice: the Mona Lisa story

    Science.gov (United States)

    Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel

    2008-08-01

    The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.

  2. Diattenuation of brain tissue and its impact on 3D polarized light imaging

    Science.gov (United States)

    Menzel, Miriam; Reckfort, Julia; Weigand, Daniel; Köse, Hasan; Amunts, Katrin; Axer, Markus

    2017-01-01

    3D-polarized light imaging (3D-PLI) reconstructs nerve fibers in histological brain sections by measuring their birefringence. This study investigates another effect caused by the optical anisotropy of brain tissue – diattenuation. Based on numerical and experimental studies and a complete analytical description of the optical system, the diattenuation was determined to be below 4 % in rat brain tissue. It was demonstrated that the diattenuation effect has negligible impact on the fiber orientations derived by 3D-PLI. The diattenuation signal, however, was found to highlight different anatomical structures that cannot be distinguished with current imaging techniques, which makes Diattenuation Imaging a promising extension to 3D-PLI. PMID:28717561

  3. 2D/3D Image Registration using Regression Learning.

    Science.gov (United States)

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-09-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.
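
    The two-stage idea above (learn a linear operator offline, then apply it iteratively to intensity residues) can be illustrated with a hedged toy sketch, using a generic least-squares fit in place of the paper's multi-scale regressions; all names are hypothetical:

```python
import numpy as np

def learn_operator(params, residues):
    """Fit a linear map M such that params ≈ M @ residue, from training
    pairs generated offline (rows are samples)."""
    # Solve residues @ M.T ≈ params in the least-squares sense.
    Mt, *_ = np.linalg.lstsq(residues, params, rcond=None)
    return Mt.T

def register(residue_fn, M, p0, iters=5):
    """Iteratively refine parameters: p ← p + M · residue(p)."""
    p = p0.astype(float).copy()
    for _ in range(iters):
        p = p + M @ residue_fn(p)
    return p
```

    In the toy test the residue is exactly linear in the parameter error, so the iteration converges immediately; the paper's coarse-to-fine operators handle the non-linear real case.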

  4. Medical image segmentation using 3D MRI data

    Science.gov (United States)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from magnetic resonance imaging (MRI) data sets. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.

  5. Interactive visualization of multiresolution image stacks in 3D.

    Science.gov (United States)

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
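
    The texel-projection criterion described above, picking the pyramid tier whose tiles appear roughly the same size on screen, might be sketched as follows; this is a simplified pinhole model with hypothetical parameter names, not the StackVis implementation:

```python
import math

def choose_tier(viewer_dist, focal_px, finest_texel_m, n_tiers):
    """Pick the tier of a multiresolution pyramid whose texels project
    to roughly one screen pixel at the given viewing distance.

    Tier 0 is the finest level; each coarser tier doubles the texel
    size, so a texel at tier t projects to approximately
        focal_px * finest_texel_m * 2**t / viewer_dist   pixels.
    We choose t so this is closest to 1 px, clamped to the pyramid.
    """
    projected = focal_px * finest_texel_m / viewer_dist  # finest tier, px
    t = int(round(-math.log2(projected)))
    return max(0, min(n_tiers - 1, t))
```

    Far-away tiles thus fetch coarse tiers and nearby tiles fetch fine ones, which is what keeps all projected tiles about the same size.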

  6. Imaging and 3D morphological analysis of collagen fibrils.

    Science.gov (United States)

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent booming of multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The collaboration of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distribution of orientation and radius of the fibrils over the 3D image. They also bring a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissues engineering because biomimetic 3D organizations and density are requested for better integration of implants. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
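
    The main-orientation step can be illustrated by a standard inertia-moment computation on a point set: the principal axis of the voxel coordinates of a fibril. This is a generic sketch, not the directional-distance-transform method of the paper:

```python
import numpy as np

def main_orientation(coords):
    """Principal axis of a 3D point set (e.g. the voxels of one fibril).

    Returns the unit eigenvector of the coordinate covariance matrix
    with the largest eigenvalue, i.e. the minimal-inertia axis taken
    as the main fibre orientation.
    """
    pts = coords - coords.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axis = v[:, -1]
    return axis / np.linalg.norm(axis)
```

    The sign of the returned axis is arbitrary, as an orientation is only defined up to a flip.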

  7. Innovations in 3D printing: a 3D overview from optics to organs.

    Science.gov (United States)

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  8. DISOCCLUSION OF 3D LIDAR POINT CLOUDS USING RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    P. Biasutti

    2017-05-01

    Full Text Available This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor’s topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.

  9. Disocclusion of 3d LIDAR Point Clouds Using Range Images

    Science.gov (United States)

    Biasutti, P.; Aujol, J.-F.; Brédif, M.; Bugeau, A.

    2017-05-01

    This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.
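
    The inpainting stage can be caricatured with a simple diffusion fill on the 2D range image; the paper uses a variational inpainting technique, so this 4-neighbour averaging is only a hedged stand-in with hypothetical names:

```python
import numpy as np

def inpaint_range(range_img, mask, iters=200):
    """Fill masked (occluded) pixels of a 2D range image by iterative
    4-neighbour averaging — a crude stand-in for variational inpainting.

    range_img : 2D array of range values
    mask      : boolean array, True where the occluder was removed
    """
    img = range_img.copy()
    img[mask] = np.nanmean(img[~mask])  # initialise the hole
    for _ in range(iters):
        up    = np.roll(img, -1, axis=0)
        down  = np.roll(img,  1, axis=0)
        left  = np.roll(img, -1, axis=1)
        right = np.roll(img,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        img[mask] = avg[mask]           # only occluded pixels evolve
    return img
```

    Unprojecting the filled range image with the sensor topology would then produce the disoccluded 3D point cloud.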

  10. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions.

  11. 3D OPTICAL AND IR SPECTROSCOPY OF EXCEPTIONAL HII GALAXIES

    Directory of Open Access Journals (Sweden)

    E. Telles

    2009-01-01

    Full Text Available In this contribution I will very briefly summarize some recent results obtained applying 3D spectroscopy to observations of the well known HII galaxy II Zw 40, both in the optical and near-IR. I have studied the distribution of the dust in the starburst region, the velocity and velocity dispersion, and the geometry of the molecular hydrogen and ionized gas. I found a clear correlation between the components of the ISM and the velocity field, suggesting that the latter has a fundamental role in defining the modes of the star formation process.

  12. Optical monitoring of scoliosis by 3D medical laser scanner

    Science.gov (United States)

    Rodríguez-Quiñonez, Julio C.; Sergiyenko, Oleg Yu.; Preciado, Luis C. Basaca; Tyrsa, Vera V.; Gurko, Alexander G.; Podrygalo, Mikhail A.; Lopez, Moises Rivas; Balbuena, Daniel Hernandez

    2014-03-01

    Three-dimensional recording of the human body surface or anatomical areas has gained importance in many medical applications. In this paper, our 3D Medical Laser Scanner is presented. It is based on the novel principle of dynamic triangulation. We analyze the method of operation, medical applications, orthopaedic diseases such as scoliosis, and the most common skin types, in order to employ the system in the most appropriate way. A group of medical problems related to the optimal application of optical scanning is analyzed. Finally, experiments are conducted to verify the performance of the proposed system and its measurement uncertainty.

  13. Hybrid wide-field and scanning microscopy for high-speed 3D imaging.

    Science.gov (United States)

    Duan, Yubo; Chen, Nanguang

    2015-11-15

    Wide-field optical microscopy is efficient and robust in biological imaging, but it lacks depth sectioning. In contrast, scanning microscopic techniques, such as confocal microscopy and multiphoton microscopy, have been successfully used for three-dimensional (3D) imaging with optical sectioning capability. However, these microscopic techniques are not very suitable for dynamic real-time imaging because they usually take a long time for temporal and spatial scanning. Here, a hybrid imaging technique combining wide-field microscopy and scanning microscopy is proposed to accelerate the image acquisition process while maintaining the 3D optical sectioning capability. The performance was demonstrated by proof-of-concept imaging experiments with fluorescent beads and zebrafish liver.

  14. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields, such as structure measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points, a more general and useful approach, namely bundle adjustment, is used. At the end, two real cases (an excavation and a tower) have been considered for reconstruction.
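
    One core step of such an SfM pipeline, triangulating a 3D point from matched image points and estimated camera matrices, can be sketched with the standard linear (DLT) method; this generic formulation is an assumption for illustration, not the authors' exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 projection matrices
    x1, x2 : (u, v) matched image points in the two views
    Builds the homogeneous system A X = 0 from x × (P X) = 0 and
    solves it via SVD; returns the inhomogeneous 3D point.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A
    return X[:3] / X[3]
```

    Bundle adjustment then refines all such points and the camera parameters jointly by minimizing reprojection error.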

  15. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    Full Text Available This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images, nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  16. Photogrammetric 3d Building Reconstruction from Thermal Images

    Science.gov (United States)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images, nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  17. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    CERN Document Server

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges, with important applications in many research fields ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through a turbid layer have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging to reconstruct the complex amplitude of objects situated at different depths.

  18. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    Full Text Available This paper presents an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) showed that approximately 60% of the time is spent solving the computational problem, while the remaining 40% is spent transferring data between the central processing unit and the GPU and organizing the visualization process. A study of the influence of increasing the GPU network size on computation speed showed the importance of correctly structuring the parallel computing network and of the general parallelization mechanism.

  19. Calibration for 3D imaging with a single-pixel camera

    Science.gov (United States)

    Gribben, Jeremy; Boate, Alan R.; Boukerche, Azzedine

    2017-02-01

    Traditional methods for calibrating structured light 3D imaging systems often suffer from various sources of error. By enabling our projector to both project images as well as capture them using the same optical path, we turn our DMD based projector into a dual-purpose projector and single-pixel camera (SPC). A coarse-to-fine SPC scanning technique based on coded apertures was developed to detect calibration target points with sub-pixel accuracy. Our new calibration approach shows improved depth measurement accuracy when used in structured light 3D imaging by reducing cumulative errors caused by multiple imaging paths.

  20. 3D refractive index measurements of special optical fibers

    Science.gov (United States)

    Yan, Cheng; Huang, Su-Juan; Miao, Zhuang; Chang, Zheng; Zeng, Jun-Zhang; Wang, Ting-Yun

    2016-09-01

    A digital holographic microscopic tomography-based approach with considerably improved accuracy, simplified configuration and performance stability is proposed to measure the three-dimensional refractive index of special optical fibers. Based on this approach, a measurement system is established, incorporating a modified Mach-Zehnder interferometer and lab-developed supporting software for data processing. In the system, a phase projection distribution of an optical fiber is utilized to obtain an optimal digital hologram recorded by a CCD, and then an angular spectrum theory-based algorithm is adopted to extract the phase distribution information of the object wave. Rotation of the optical fiber enables the experimental measurement of multi-angle phase information. Based on the filtered back projection algorithm, the 3D refractive index of the optical fiber is thus obtained at high accuracy. To evaluate the proposed approach, both PANDA fibers and a special elliptical optical fiber are considered in the system. The results measured in PANDA fibers agree well with those measured using the S14 Refractive Index Profiler, which is, however, not suitable for measuring the properties of a special elliptical fiber.

  1. A physical model eye with 3D resolution test targets for optical coherence tomography

    Science.gov (United States)

    Hu, Zhixiong; Liu, Wenli; Hong, Baoyu; Hao, Bingtao; Wang, Lele; Li, Jiao

    2014-09-01

    Optical coherence tomography (OCT) has been widely employed as a non-invasive 3D imaging diagnostic instrument, particularly in the field of ophthalmology. Although OCT has been approved for clinical use in the USA, Europe and Asia, international standardization of this technology is still in progress. Validation of OCT imaging capabilities is considered extremely important to ensure its effective use in clinical diagnoses. A phantom with appropriate test targets can assist in evaluating and calibrating the imaging performance of OCT, both at installation and throughout the lifetime of the instrument. In this paper, we design and fabricate a physical model eye with 3D resolution test targets to characterize OCT imaging performance. The model eye was fabricated with transparent resin to simulate a realistic ophthalmic testing environment, and most key optical elements, including the cornea, lens and vitreous body, were realized. The test targets, which mimic the USAF 1951 test chart, were fabricated on the fundus of the model eye by 3D printing technology. Differing from the traditional two-dimensional USAF 1951 test chart, a group of patterns with different thicknesses in depth was fabricated. By measuring the 3D test targets, the axial resolution as well as the lateral resolution of an OCT system can be evaluated at the same time with this model eye. To investigate this specialized model eye, it was measured by a scientific spectral-domain OCT instrument and a clinical OCT system, respectively. The results demonstrate that the model eye with 3D resolution test targets has the potential to qualitatively and quantitatively validate the performance of OCT systems.

  2. New approach to navigation: matching sequential images to 3D terrain maps

    Science.gov (United States)

    Zhang, Tianxu; Hu, Bo; Li, Wei

    1998-03-01

    In this paper an efficient image matching algorithm is presented for use in aircraft navigation. A sequence of images, with every two successive images partially overlapping, is sensed by a monocular optical system. 3D undulation features are recovered from the image pairs and then matched against a reference undulation feature map. Finally, the aircraft position is estimated by minimizing a Hausdorff distance measure. A simulation experiment using real terrain data is reported.
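
    The Hausdorff distance used for the position estimate can be sketched directly with a brute-force formulation; the paper's feature matching and minimization details are not reproduced here:

```python
import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B) = max_a min_b ||a - b||
    between two point sets (rows are points)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

    Position estimation then amounts to searching over candidate aircraft poses for the one whose transformed feature set minimizes this distance to the reference map.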

  3. Integration of real-time 3D image acquisition and multiview 3D display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  4. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  5. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  6. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Science.gov (United States)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth, resulting in a frame rate of 18 Hz for both techniques. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture were found to perform similarly; at 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point, and reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four while still, in general, improving image quality.
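
    As a minimal illustration of how an FWHM figure like those quoted above is read off a simulated point spread function, one can interpolate the half-maximum crossings of a sampled beam profile; the sampling and function names here are illustrative assumptions:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled profile y(x), using
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the left and right crossings (xp must be increasing).
    if i0 > 0:
        x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    else:
        x_left = x[i0]
    if i1 < len(y) - 1:
        x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    else:
        x_right = x[i1]
    return x_right - x_left
```

    For a Gaussian beam profile with standard deviation σ, this recovers the textbook value FWHM = 2√(2 ln 2) σ ≈ 2.355 σ.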

  7. Refraction Correction in 3D Transcranial Ultrasound Imaging

    Science.gov (United States)

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
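
    The per-interface refraction step at the core of such path tracing, Snell's law applied in 3D vector form, can be sketched as follows; this is a generic textbook formulation, not the delay-precomputation code used in the paper:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (pointing back into medium 1), for refractive indices n1 → n2.

    Vector form of Snell's law; returns None on total internal
    reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n
```

    Tracing each ray through the two planar layers of the skull model with this step, and converting the resulting path lengths to times of flight, yields the corrected phased-array delays.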

  8. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.

  9. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we review the two main classes of techniques that have proved most effective so far: the template-based methods, which rely on establishing correspondences with a reference image in which the shape is already known, and the non-rigid structure-from-motion methods, which recover shape without such a reference.

  10. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very

  12. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    Science.gov (United States)

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation, were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  13. 3D optical coherence tomography super pixel with machine classifier analysis for glaucoma detection.

    Science.gov (United States)

    Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S

    2011-01-01

    Current standard quantitative 3D spectral-domain optical coherence tomography (SD-OCT) analysis of various ocular diseases is limited in detecting structural damage at early pathologic stages, mostly because only a small fraction of the 3D data is used in the current method of quantifying the structure of interest. This paper presents a novel SD-OCT data analysis technique that takes full advantage of the 3D dataset. The proposed algorithm uses a machine classifier to analyze SD-OCT images after grouping adjacent pixels into superpixels in order to detect glaucomatous damage. A 3D SD-OCT image is first converted into a 2D feature map and partitioned into over a hundred superpixels. Machine classifier analysis using a boosting algorithm is performed on the superpixel features. One hundred and ninety-two 3D OCT images of the optic nerve head region were tested. The area under the receiver operating characteristic curve (AUC) was computed to evaluate the glaucoma discrimination performance of the algorithm and compare it to the commercial software output. The AUC for normal vs glaucoma suspect eyes using the proposed method was statistically significantly higher than with the current method (0.855 and 0.707, respectively, p=0.031). This new method has the potential to improve early detection of glaucomatous structural damage.
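
    The AUC reported above is equivalent to the Mann-Whitney U statistic and can be computed directly from labels and classifier scores; a minimal sketch (the pairwise form, adequate for data-set sizes like the 192 scans here):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half. labels: 1 = glaucoma, 0 = normal."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # every positive/negative pair
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (len(pos) * len(neg))
```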

  14. Reconstruction of 3D refractive index profiles of PM PANDA optical fiber using digital holographic method

    Science.gov (United States)

    Wahba, H. H.

    2014-10-01

    In this paper, the refractive index distributions along the two birefringent axes of a polarization-maintaining (PM) PANDA-type optical fiber are reconstructed. The local refraction of the incident rays crossing the PM fiber is considered. An off-axis digital holographic interferometric phase-shifting arrangement is employed in this investigation. The recorded mutually phase-shifted holograms, starting at 0° in steps of π/4, are combined and numerically reconstructed in the image plane to obtain the optical interference phase map. The optical phase differences due to the PM fiber are then extracted after unwrapping and background subtraction of the enhanced phase map. The birefringence and the beat length along the fast and slow polarization axes in the core region are calculated. This holographic technique, together with the advanced phase-shifting analysis, permits the calculation of the 3D refractive index distributions of the PM PANDA optical fiber.
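
    Phase-shifting interferometry recovers the wrapped phase from a few intensity frames. The paper records π/4 steps; for illustration, here is the standard four-step algorithm with π/2 shifts (a common textbook variant, not necessarily the authors' exact scheme):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four interferograms shifted by 0, pi/2, pi, 3pi/2.
    With I_k = A + B*cos(phi + k*pi/2):
        i0 - i2 = 2B*cos(phi),   i3 - i1 = 2B*sin(phi)
    so phi = atan2(i3 - i1, i0 - i2)."""
    return np.arctan2(i3 - i1, i0 - i2)
```

    The unwrapped version of this map, minus a background reference, is what yields the optical path differences used to reconstruct the index profile.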

  15. 3D image registration using a fast noniterative algorithm.

    Science.gov (United States)

    Zhilkin, P; Alexander, M E

    2000-11-01

    This note describes the implementation of a three-dimensional (3D) registration algorithm, generalizing a previous 2D version [Alexander, Int J Imaging Systems and Technology 1999;10:242-57]. The algorithm solves an integrated form of the linearized image matching equation over a set of 3D rectangular sub-volumes ('patches') in the image domain. This integrated form avoids numerical instabilities due to differentiation of a noisy image over a lattice and, in addition, renders the algorithm robust to noise. Registration is implemented by first convolving the unregistered images with a set of computationally fast [O(N)] filters, providing four bandpass images for each input image, and integrating the image matching equation over the given patch. Each filter and each patch together provide an independent set of constraints on the displacement field, derived by solving a set of linear regression equations. Furthermore, the filters are implemented at a variety of spatial scales, enabling registration parameters at one scale to be used as an input approximation for deriving refined values at a finer scale of resolution. This hierarchical procedure is necessary to avoid false matches. Both downsampled and oversampled (undecimated) filtering is implemented. Although the former is computationally fast, it lacks the translation invariance of the latter. Oversampling is required for accurate interpolation, which is used in intermediate stages of the algorithm to reconstruct the partially registered image from the unregistered one. However, downsampling is useful, and computationally efficient, for preliminary stages of registration when large mismatches are present. The 3D registration algorithm was implemented using a 12-parameter affine model for the displacement: u(x) = Ax + b. Linear interpolation was used throughout. 
Accuracy and timing results for registering various multislice images, obtained by scanning a melon and human volunteers in various
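
    The 12-parameter affine displacement model u(x) = Ax + b mentioned above can be fit by ordinary least squares once point correspondences are available; a minimal sketch (not the paper's patch/filter-based solver):

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares fit of a 12-parameter 3D affine map dst ~= A @ src + b.
    src, dst: (N, 3) arrays of corresponding points. Returns (A, b)."""
    n = len(src)
    M = np.c_[src, np.ones(n)]                        # (n, 4) design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (4, 3) solution
    A = params[:3].T
    b = params[3]
    return A, b
```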

  16. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods with different sharpness control parameters. We also give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
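
    The sharpness control parameter enters through Keys' cubic convolution kernel; a minimal 1D sketch (the 3D methods discussed above are separable extensions of this idea, applied along each axis):

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a is the sharpness control parameter
    (a = -0.5 gives third-order accuracy). Support is |s| < 2."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    m1 = s <= 1
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    m2 = (s > 1) & (s < 2)
    out[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
    return out

def interp1(f, x):
    """Interpolate samples f (at integer positions) at fractional coords x,
    using the four nearest neighbours and clamping at the borders."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    i = np.floor(x).astype(int)
    res = np.zeros_like(x)
    for k in range(-1, 3):
        idx = np.clip(i + k, 0, len(f) - 1)
        res += f[idx] * cubic_kernel(x - (i + k))
    return res
```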

  17. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    Science.gov (United States)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on this optical-sensor system, we propose four methods supporting different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED pointer, whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with auto-stereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  18. 3D CARS image reconstruction and pattern recognition on SHG images

    Science.gov (United States)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation of nonlinear optical images, i.e. image pattern recognition and automated image classification, still bears great potential for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed, e.g., in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, based respectively on a statistical analysis of the mutual organization of fibers and on the FFT, are presented.

  19. Vhrs Stereo Images for 3d Modelling of Buildings

    Science.gov (United States)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents a project carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment concerns the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation - Summit Evolution combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the satellite image pair, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS for the model reconstructed from the RPC coefficients only was 16.6 m, 2.7 m and 47.4 m for the X, Y and Z coordinates, respectively. By addition of 5 control points the RMS improved to 0.7 m, 0.7 m and 1.0 m; the best results were achieved when the RMS was estimated from deviations at 17 check points (with 5 control points) and amounted to 0.4 m, 0.5 m and 0.6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards used for 3D modelling of buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (with respect to the number of details) had been reconstructed at the level of LoD1, while the accuracy of these models corresponded to the level of LoD2.

  1. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize the 2D observations into camera parameters, base poses, and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.
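
    The temporal bone-length constancy regularizer can be sketched directly: penalize the variation of each bone's length across frames (an illustrative form; the paper's exact term may differ):

```python
import numpy as np

def bone_length_variance(joints, bones):
    """Temporal bone-length constancy term. joints: (T, J, 3) array of J
    joint positions over T frames; bones: list of (parent, child) index
    pairs. Returns the sum over bones of the variance of each bone's
    length across frames (zero for anatomically consistent motion)."""
    total = 0.0
    for p, c in bones:
        lengths = np.linalg.norm(joints[:, c] - joints[:, p], axis=1)
        total += np.var(lengths)
    return total
```

    Adding this term to the reconstruction objective discourages solutions in which limbs stretch or shrink from frame to frame.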

  2. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration by the user is necessary: the user performs a rough alignment of both prosthesis models so that the subsequent fine matching process gets a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).

  3. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II were evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy for the majority of measurements. Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I significantly higher resolution. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications: Kinect I is more appropriate for short-range imaging, and Kinect II for imaging highly curved surfaces such as the face or breast.

  4. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique that uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  5. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry, and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on 3D laser scanning, using the laser point cloud data as the basis and a digital orthophoto map as an auxiliary source, with 3ds Max as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed scene is faithful to reality and that its accuracy meets the needs of 3D scene construction.

  6. Image Appraisal for 2D and 3D Electromagnetic Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
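
    For a damped linear least-squares inversion, the resolution analysis described above reduces to the model resolution matrix; a minimal numerical sketch (a simplification of the paper's nonlinear, conjugate-gradient setting):

```python
import numpy as np

def model_resolution(J, lam):
    """Model resolution matrix R = (J^T J + lam^2 I)^(-1) J^T J for a
    damped least-squares inversion with Jacobian J and damping lam.
    A row close to a unit impulse means that parameter is well resolved;
    a broad row means the estimate smears neighbouring parameters."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam**2 * np.eye(J.shape[1]), JtJ)
```

    Plotting individual columns of R is exactly the spatial-resolution diagnostic the abstract describes; the posterior covariance diagnostic follows the same pattern with the data covariance folded in.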

  7. 3D mapping of elastic modulus using shear wave optical micro-elastography

    Science.gov (United States)

    Zhu, Jiang; Qi, Li; Miao, Yusi; Ma, Teng; Dai, Cuixia; Qu, Yueqiao; He, Youmin; Gao, Yiwei; Zhou, Qifa; Chen, Zhongping

    2016-10-01

    Elastography provides a powerful tool for histopathological identification and clinical diagnosis based on tissue stiffness. Benefiting from high-resolution, three-dimensional (3D), and noninvasive optical coherence tomography (OCT), optical micro-elastography can determine elastic properties with a resolution of ~10 μm in a 3D specimen. The shear wave velocity measurement can be used to quantify the elastic modulus. In current methods, however, shear waves are measured near the surface, with interference from surface waves. In this study, we developed acoustic radiation force (ARF) orthogonal excitation optical coherence elastography (ARFOE-OCE) to visualize shear waves in 3D. This method uses acoustic force perpendicular to the OCT beam to excite shear waves inside the specimen and uses the Doppler variance method to visualize shear wave propagation in 3D. The measured propagation of the shear waves agrees well with simulation results obtained from finite element analysis (FEA). Orthogonal acoustic excitation allows this method to measure the shear modulus deeper in the specimen, extending the elasticity measurement range beyond the OCT imaging depth. The results show that the ARFOE-OCE system can noninvasively determine a 3D elastic map.
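
    Once the shear wave velocity is measured, converting it to a modulus is a one-liner; a sketch under the usual soft-tissue assumptions (nearly incompressible, density about 1000 kg/m³):

```python
def youngs_modulus(v_shear, rho=1000.0):
    """Young's modulus (Pa) from shear wave speed (m/s) for a soft,
    nearly incompressible tissue: shear modulus G = rho * v^2 and,
    for Poisson ratio ~0.5, E = 3 * G."""
    return 3.0 * rho * v_shear**2
```

    For example, a shear wave speed of 2 m/s corresponds to 12 kPa, in the range typical of soft tissue.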

  8. Optimal Point Spread Function Design for 3D Imaging

    Science.gov (United States)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem – finding the pupil-plane phase pattern that would yield a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889
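
    The information-content objective can be made concrete in the simplest setting: the Fisher information for the x-position of a pixelated 1D Gaussian PSF under Poisson noise (an illustrative toy computation, not the paper's pupil-plane optimization):

```python
import numpy as np

def fisher_x(x0, n_photons, sigma=1.0, bg=2.0):
    """Fisher information for the x-position of a 1D Gaussian PSF sampled
    on integer pixels under Poisson noise:
        I = sum_k (d mu_k / d x0)^2 / mu_k
    with mu_k the expected counts in pixel k (n_photons signal photons
    plus bg background counts per pixel). The derivative is numerical."""
    pixels = np.arange(-10, 11, dtype=float)

    def mu(x):
        g = np.exp(-(pixels - x)**2 / (2 * sigma**2))
        return n_photons * g / g.sum() + bg

    h = 1e-5
    dmu = (mu(x0 + h) - mu(x0 - h)) / (2 * h)
    return np.sum(dmu**2 / mu(x0))
```

    The Cramér-Rao bound 1/sqrt(I) is the best achievable localization precision; a PSF-design procedure like the one above searches for the phase pattern maximizing I over the desired depth range.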

  9. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
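
    Isolating the strongly polarized specular component rests on estimating the linear Stokes parameters from intensities measured behind a rotating polarizer; a minimal sketch (the four-angle scheme is a common choice, not necessarily the authors' exact device):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind a polarizer at
    0, 45, 90, and 135 degrees. Returns (DoLP, AoP in radians).
    Specular reflection is strongly polarized (high DoLP) while diffuse,
    subsurface-scattered light is not, which is what lets polarisation
    imaging separate the two."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0      # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)          # angle of polarization
    return dolp, aop
```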

  10. [3D imaging benefits in clinical practice of orthodontics].

    Science.gov (United States)

    Frèrejouand, Emmanuel

    2016-12-01

    3D imaging has opened new possibilities in orthodontics in the last few years. It can be used to improve diagnosis and treatment planning through digital set-ups combined with CBCT. It is relevant for updating orthodontic mechanics through visible or invisible customized appliances, and it forms the basis of numerous scientific studies. The author explains the progress 3D imaging brings to diagnosis and clinical work, but also highlights the requirements it creates. The daily use of these processes in orthodontic practice needs to be regulated with regard to the benefit/risk ratio and patient satisfaction. Mastering the digital workflow created by these techniques requires changes of habit from the orthodontist and the staff. © EDP Sciences, SFODF, 2016.

  11. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Science.gov (United States)

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, resulting in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned with an optical digital scanner before the MS measurements are performed. Scanning the individual sections yields low-resolution images, which define the base coordinate system for the whole pipeline. 
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  12. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave imaging algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges arise […] reconstructions at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.

  13. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Science.gov (United States)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
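
    The auto-focusing step described above selects the sharpest focal plane from a z-stack. The paper's procedure for phase contrast microscopy is not specified here; a generic stand-in is the variance-of-Laplacian sharpness metric, sketched below on synthetic data.

```python
import numpy as np

def focus_score(img):
    """Generic sharpness metric: variance of a discrete Laplacian.
    (A stand-in for the paper's custom auto-focus, which is not given here.)"""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def best_focal_plane(stack):
    # the in-focus slice has the most high-frequency content
    return int(np.argmax([focus_score(s) for s in stack]))

# toy z-stack: plane 2 holds a sharp checkerboard, the others are nearly flat
rng = np.random.default_rng(1)
sharp = np.indices((32, 32)).sum(0) % 2.0
stack = [np.full((32, 32), 0.5) + 0.01 * rng.random((32, 32)) for _ in range(5)]
stack[2] = sharp
print(best_focal_plane(stack))  # -> 2
```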

  14. 3D reconstruction of multiple stained histology images

    Directory of Open Access Journals (Sweden)

    Yi Song

    2013-01-01

    Full Text Available Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g., tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  15. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  16. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Science.gov (United States)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    Stereoscopic images are often captured using dual cameras arranged side by side and optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offers good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  17. Automatic 3-D Optical Detection on Orientation of Randomly Oriented Industrial Parts for Rapid Robotic Manipulation

    Directory of Open Access Journals (Sweden)

    Liang-Chia Chen

    2012-12-01

    Full Text Available This paper proposes a novel method employing a developed 3-D optical imaging and processing algorithm for accurate classification of an object’s surface characteristics in robot pick-and-place manipulation. In the method, the 3-D geometry of industrial parts can be rapidly acquired by the developed one-shot imaging optical probe, based on Fourier Transform Profilometry (FTP) using digital-fringe projection at the camera’s maximum sensing speed. Following this, the acquired range image can be effectively segmented into three surface types by classifying point clouds based on the statistical distribution of the surface normal vector of each detected 3-D point, and then the scene ground is reconstructed by applying least-squares fitting and classification algorithms. Also, a recursive search process incorporating the region-growing algorithm for registering homogeneous surface regions has been developed. When the detected parts are randomly overlapped on a workbench, a group of defined 3-D surface features, such as surface areas, statistical values of the surface normal distribution and geometric distances of defined features, can be uniquely recognized to detect the part’s orientation. Experimental testing was performed to validate the feasibility of the developed method for real robotic manipulation.
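
    The per-point surface normals that drive the classification above are commonly estimated by a local plane fit: the normal is the smallest principal axis of each point's neighbourhood. The sketch below shows this standard PCA estimate on a synthetic flat patch; it is a generic stand-in, not the paper's full segmentation pipeline.

```python
import numpy as np

def normals_by_pca(points, k=10):
    """Estimate per-point surface normals as the smallest principal axis
    of the k nearest neighbours (generic sketch, hypothetical parameters)."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nb = points[np.argsort(d)[:k]]
        cov = np.cov((nb - nb.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)
        n = v[:, 0]                     # eigenvector of smallest eigenvalue
        normals[i] = n if n[2] >= 0 else -n   # orient consistently upward
    return normals

# flat horizontal patch: every estimated normal should be close to +z
rng = np.random.default_rng(4)
pts = np.column_stack([rng.random(50), rng.random(50), np.zeros(50)])
ns = normals_by_pca(pts)
print(np.allclose(np.abs(ns[:, 2]), 1.0, atol=1e-6))  # -> True
```

Grouping points whose normals cluster together is then a natural seed step for the region-growing search the abstract describes.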

  18. Basic theory on surface measurement uncertainty of 3D imaging systems

    Science.gov (United States)

    Beraldin, J. Angelo

    2009-01-01

    Three-dimensional (3D) imaging systems are now widely available, but standards, best practices and comparative data have started to appear only in the last 10 years or so. The need for standards is mainly driven by users and product developers who are concerned with 1) the applicability of a given system to the task at hand (fit-for-purpose), 2) the ability to fairly compare across instruments, 3) instrument warranty issues, 4) cost savings through 3D imaging. The evaluation and characterization of 3D imaging sensors and algorithms require the definition of metric performance. The performance of a system is usually evaluated using quality parameters such as spatial resolution/uncertainty/accuracy and complexity. These are quality parameters that most people in the field can agree upon. The difficulty arises from defining a common terminology and procedures to quantitatively evaluate them through metrology and standards definitions. This paper reviews the basic principles of 3D imaging systems. Optical triangulation and time-delay (time-of-flight) measurement systems were selected to explain the theoretical and experimental strands adopted in this paper. The intrinsic uncertainty of optical distance measurement techniques, the parameterization of a 3D surface and systematic errors are covered. Experimental results on a number of scanners (Surphaser®, HDS6000®, Callidus CPW 8000®, ShapeGrabber® 102) support the theoretical descriptions.
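
    The intrinsic uncertainty of optical triangulation mentioned above follows a well-known textbook error-propagation relation: for range z, baseline b, effective focal length f and disparity noise sigma_d, the range uncertainty scales as sigma_z ~ z^2 / (f b) * sigma_d. The sketch below uses this generic relation with illustrative numbers; it is not necessarily the paper's exact parameterization.

```python
def range_uncertainty(z, baseline, focal_len, disparity_sigma):
    """Textbook triangulation error propagation (generic sketch):
    sigma_z = z**2 / (f * b) * sigma_d, all lengths in meters."""
    return z ** 2 / (focal_len * baseline) * disparity_sigma

# uncertainty grows quadratically with range: doubling z quadruples sigma_z
s1 = range_uncertainty(z=1.0, baseline=0.2, focal_len=0.05, disparity_sigma=1e-6)
s2 = range_uncertainty(z=2.0, baseline=0.2, focal_len=0.05, disparity_sigma=1e-6)
print(s2 / s1)  # -> 4.0
```

This quadratic growth is the main reason triangulation scanners are characterized over a bounded working volume, while time-of-flight error behaves differently with range.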

  19. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this context, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which the temperature changes are clinically significant.

  20. 3D Time-lapse Imaging and Quantification of Mitochondrial Dynamics

    Science.gov (United States)

    Sison, Miguel; Chakrabortty, Sabyasachi; Extermann, Jérôme; Nahas, Amir; James Marchand, Paul; Lopez, Antonio; Weil, Tanja; Lasser, Theo

    2017-02-01

    We present a 3D time-lapse imaging method for monitoring mitochondrial dynamics in living HeLa cells based on photothermal optical coherence microscopy and using novel surface functionalization of gold nanoparticles. The biocompatible protein-based biopolymer coating contains multiple functional groups which impart better cellular uptake and mitochondria targeting efficiency. The high stability of the gold nanoparticles allows continuous imaging over an extended time up to 3000 seconds without significant cell damage. By combining temporal autocorrelation analysis with a classical diffusion model, we quantify mitochondrial dynamics and cast these results into 3D maps showing the heterogeneity of diffusion parameters across the whole cell volume.
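
    The "temporal autocorrelation combined with a classical diffusion model" step above can be sketched in its simplest form: for freely diffusing scatterers, the field autocorrelation decays as g(tau) = exp(-D q^2 tau), so a log-linear fit of the measured decay yields the diffusion coefficient D. All numbers below are illustrative, not values from the paper.

```python
import numpy as np

q = 2.0e7            # scattering wavevector magnitude [1/m] (illustrative)
D_true = 1.0e-12     # diffusion coefficient [m^2/s] (illustrative)
tau = np.linspace(0.0, 5e-3, 100)
g = np.exp(-D_true * q**2 * tau)   # idealized, noise-free autocorrelation

# log-linear least-squares fit of the decay recovers D
slope = np.polyfit(tau, np.log(g), 1)[0]
D_fit = -slope / q**2
print(abs(D_fit - D_true) / D_true < 1e-6)  # -> True
```

Repeating such a fit voxel-by-voxel is what produces the 3D maps of diffusion parameters described in the abstract.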

  1. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  2. Quantitative 3D imaging of whole, unstained cells by using X-ray diffraction microscopy.

    Science.gov (United States)

    Jiang, Huaidong; Song, Changyong; Chen, Chien-Chun; Xu, Rui; Raines, Kevin S; Fahimian, Benjamin P; Lu, Chien-Hung; Lee, Ting-Kuo; Nakashima, Akio; Urano, Jun; Ishikawa, Tetsuya; Tamanoi, Fuyuhiko; Miao, Jianwei

    2010-06-22

    Microscopy has greatly advanced our understanding of biology. Although significant progress has recently been made in optical microscopy to break the diffraction-limit barrier, reliance of such techniques on fluorescent labeling technologies prohibits quantitative 3D imaging of the entire contents of cells. Cryoelectron microscopy can image pleomorphic structures at a resolution of 3-5 nm, but is only applicable to thin or sectioned specimens. Here, we report quantitative 3D imaging of a whole, unstained cell at a resolution of 50-60 nm by X-ray diffraction microscopy. We identified the 3D morphology and structure of cellular organelles including cell wall, vacuole, endoplasmic reticulum, mitochondria, granules, nucleus, and nucleolus inside a yeast spore cell. Furthermore, we observed a 3D structure protruding from the reconstructed yeast spore, suggesting the spore germination process. Using cryogenic technologies, a 3D resolution of 5-10 nm should be achievable by X-ray diffraction microscopy. This work hence paves the way for quantitative 3D imaging of a wide range of biological specimens at nanometer-scale resolutions that are too thick for electron microscopy.

  3. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

  4. 3D Lunar Terrain Reconstruction from Apollo Images

    Science.gov (United States)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  5. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
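
    The key efficiency claim above, evaluating a similarity measure over all model positions at once with the FFT, can be illustrated with plain cross-correlation: multiplying the image spectrum by the conjugate template spectrum and inverse-transforming scores every shift simultaneously. This is a simplified stand-in for the paper's phase-angle measure, with a clean synthetic background for clarity.

```python
import numpy as np

def fft_match(image, template):
    """Score every template position at once via FFT cross-correlation
    (a generic sketch of evaluating a match surface in one transform)."""
    t = np.zeros_like(image)
    t[: template.shape[0], : template.shape[1]] = template
    corr = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(t))))
    return tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))

rng = np.random.default_rng(2)
tmpl = rng.random((8, 8)) + 1.0      # strictly positive template
img = np.zeros((64, 64))
img[20:28, 30:38] = tmpl             # embed it at row 20, column 30
print(fft_match(img, tmpl))  # -> (20, 30)
```

By the Cauchy-Schwarz inequality the correlation peaks exactly at the embedded position, which is why the match surface's highest peak marks the best model placement.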

  6. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of images’ radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  7. Extraction of depth information for 3D imaging using pixel aperture technique

    Science.gov (United States)

    Choi, Byoung-Soo; Bae, Myunghan; Kim, Sang-Hwan; Lee, Jimin; Oh, Chang-Woo; Chang, Seunghyuk; Park, JongHo; Lee, Sang-Jin; Shin, Jang-Kyoo

    2017-02-01

    Three-dimensional (3D) imaging is an important area which can be applied to face detection, gesture recognition, and 3D reconstruction. In this paper, the extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In conventional camera systems using a complementary metal-oxide-semiconductor (CMOS) image sensor, an aperture is located behind the camera lens. However, in our proposed camera system, the aperture, implemented by a metal layer of the CMOS process, is located on the White (W) pixel, i.e., a pixel without any color filter on top. Four types of pixels, Red (R), Green (G), Blue (B), and White (W), were used for the pixel aperture technique. The RGB pixels produce a defocused image with blur, while W pixels produce a focused image. The focused image is used as a reference image to extract the depth information for 3D imaging, and is compared with the defocused image from the RGB pixels. Therefore, depth information can be extracted by comparing the defocused image with the focused image using the depth-from-defocus (DFD) method. The pixel size for the 4-tr APS is 2.8 μm × 2.8 μm and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. Optical performances of the pixel aperture technique were evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and electrical performances were evaluated using TCAD.
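
    The comparison of focused and defocused images at the heart of DFD can be sketched with a crude cue: the ratio of high-frequency (gradient) energy between the defocused and focused images drops as defocus grows, which orders scene regions by blur and hence by depth. This is a simplified illustration, not the sensor's actual DFD pipeline; the blur model and numbers are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian blur via direct 1-D convolutions (small kernel)
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def relative_blur(focused, defocused):
    """Crude depth-from-defocus cue: ratio of high-frequency energy.
    Lower ratio means stronger defocus (sketch only)."""
    def hf(img):
        gy, gx = np.gradient(img)
        return float(np.mean(gy**2 + gx**2))
    return hf(defocused) / hf(focused)

rng = np.random.default_rng(3)
scene = rng.random((64, 64))
near = gaussian_blur(scene, 1.0)   # mildly defocused
far = gaussian_blur(scene, 3.0)    # strongly defocused
print(relative_blur(scene, near) > relative_blur(scene, far))  # -> True
```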

  8. Cordless hand-held optical 3D sensor

    Science.gov (United States)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique is presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This makes it possible to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach, which combines the epipolar constraint with robust phase correlation utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be utilized to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which will be shown in the paper.

  9. Mapping 3D fiber orientation in tissue using dual-angle optical polarization tractography.

    Science.gov (United States)

    Wang, Y; Ravanfar, M; Zhang, K; Duan, D; Yao, G

    2016-10-01

    Optical polarization tractography (OPT) has recently been applied to map fiber organization in the heart, skeletal muscle, and arterial vessel wall with high resolution. The fiber orientation measured in OPT represents the 2D projected fiber angle in a plane that is perpendicular to the incident light. We report here a dual-angle extension of the OPT technology to measure the actual 3D fiber orientation in tissue. This method was first verified by imaging the murine extensor digitorum muscle placed at various known orientations in space. The accuracy of the method was further studied by analyzing the 3D fiber orientation of the mouse tibialis anterior muscle. Finally, we showed that dual-angle OPT successfully revealed the unique 3D "arcade" fiber structure in the bovine articular cartilage.
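
    The dual-angle principle above can be sketched in an idealized geometry: if one view along z measures the fiber's projected angle in the xy-plane and a second view along y measures it in the xz-plane, the two projected angles together fix the 3D direction up to sign. This assumes orthogonal views for simplicity and is not the paper's exact imaging geometry.

```python
import numpy as np

def fiber_3d(phi_xy, phi_xz):
    """Combine two projected angles (idealized orthogonal views) into a
    3D unit fiber direction: v ~ (1, tan(phi_xy), tan(phi_xz))."""
    v = np.array([1.0, np.tan(phi_xy), np.tan(phi_xz)])
    return v / np.linalg.norm(v)

v_true = np.array([2.0, 1.0, 2.0]) / 3.0         # unit fiber direction
phi_xy = np.arctan2(v_true[1], v_true[0])        # angle seen from the z view
phi_xz = np.arctan2(v_true[2], v_true[0])        # angle seen from the y view
print(np.allclose(fiber_3d(phi_xy, phi_xz), v_true))  # -> True
```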

  10. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact and contactless scanners, the latter being the most widely used but also expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the 3-dimensional surface of the solid. The deformation detected by the camera is analyzed using digital image processing, which allows the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool which can be used for many applications, mainly by engineering students.
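
    The triangulation step above can be sketched with a generic laser-line geometry (not the prototype's actual calibration): with the laser sheet at angle theta to the camera axis, a lateral shift dx of the imaged line maps to a surface height h = dx / tan(theta). The pixel pitch and angle below are hypothetical.

```python
import numpy as np

def height_from_shift(dx_pixels, mm_per_pixel, theta_deg):
    """Generic laser-line triangulation sketch: convert the observed line
    shift on the sensor into surface height, h = dx / tan(theta)."""
    dx = dx_pixels * mm_per_pixel
    return dx / np.tan(np.radians(theta_deg))

# a 20-pixel shift at 0.1 mm/px with a 45-degree laser gives 2 mm of height
print(round(height_from_shift(20, 0.1, 45.0), 6))  # -> 2.0
```

Repeating this conversion along every laser line and every vertical step yields the per-scan point cloud the abstract describes.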

  11. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
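
    The recursive dynamic partitioning described above can be sketched with a simplified splitting rule: halve the longest axis of each 3D hyper-rectangle until a box is uniform or reaches a minimum size, recording the ROI voxel count of each box as its attribute. The statistical-test stopping criterion of the paper is replaced here by a uniformity check, so this is only a structural stand-in.

```python
import numpy as np

def partition(vol, box, min_size=2):
    """Recursively split a 3D binary volume into hyper-rectangles, halving
    the longest axis; a box becomes a leaf when uniform or small.
    Returns (box, roi_voxel_count) pairs (simplified splitting criterion)."""
    (z0, z1), (y0, y1), (x0, x1) = box
    sub = vol[z0:z1, y0:y1, x0:x1]
    count = int(sub.sum())
    dims = [z1 - z0, y1 - y0, x1 - x0]
    if count in (0, sub.size) or max(dims) <= min_size:
        return [(box, count)]
    ax = int(np.argmax(dims))
    lo, hi = box[ax]
    mid = (lo + hi) // 2
    left, right = list(box), list(box)
    left[ax], right[ax] = (lo, mid), (mid, hi)
    return partition(vol, tuple(left), min_size) + partition(vol, tuple(right), min_size)

vol = np.zeros((8, 8, 8), dtype=int)
vol[:4] = 1                      # ROI fills the lower half of the volume
boxes = partition(vol, ((0, 8), (0, 8), (0, 8)))
print(len(boxes))  # -> 2
```

The resulting per-box counts are exactly the kind of attributes the abstract feeds into the downstream classifier.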

  12. Feature issue of digital holography and 3D imaging (DH): introduction.

    Science.gov (United States)

    Hayasaki, Yoshio; Liu, Jung-Ping; Georges, Marc

    2015-01-01

    The OSA Topical Meeting "Digital Holography and 3D Imaging (DH)" was held in Seattle, Washington, 13-17 July 2014. Feature issues based on the DH meeting series have been released by Applied Optics (AO) since 2007. In this year (2014), Optics Express (OE) and AO jointly decided to have one such feature issue in each journal. The feature issue includes 27 papers and covers a large range of topics, reflecting the rapidly expanding techniques and applications of digital holography and 3D imaging. The DH meeting will continue in the future, as expected, and the next meeting is scheduled to be held on 24-28 May 2015, at Shanghai Institute of Optics and Fine Mechanics, Shanghai, China.

  13. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  14. Physically based analysis of deformations in 3D images

    Science.gov (United States)

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

    We present a physically based deformable model which can be used to track and to analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural lengths of the springs are set equal to zero and replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides a reduced algorithmic complexity, and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.
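
    The linearity and decoupling property claimed above follows directly from zero-natural-length springs: the elastic force on node i is sum_j k_ij (x_j - x_i), linear in the coordinates, so the same stiffness operator acts independently on each coordinate axis. The toy simulation below (explicit Euler, damping, arbitrary stiffness and time step, and no equilibrium forces) demonstrates it: with the constant equilibrium forces omitted, all nodes collapse to their common centroid.

```python
import numpy as np

k = 1.0
# 3 nodes in a chain: negative graph-Laplacian stiffness matrix, so that
# acc = K @ pos gives force sum_j k (x_j - x_i) on each node
K = k * np.array([[-1.0, 1.0, 0.0],
                  [1.0, -2.0, 1.0],
                  [0.0, 1.0, -1.0]])
pos = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.0]])  # (node, xy)
vel = np.zeros_like(pos)
dt, damping = 0.01, 0.5
for _ in range(5000):
    acc = K @ pos - damping * vel   # same linear operator on x and y columns
    vel += dt * acc
    pos += dt * vel
# zero-rest-length springs pull every node to the (preserved) centroid
print(np.allclose(pos, pos.mean(axis=0), atol=1e-3))  # -> True
```

Reinstating the constant equilibrium forces of the paper would shift the rest configuration away from the centroid while keeping the equations linear and per-coordinate.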

  15. Ultra-realistic 3-D imaging based on colour holography

    Science.gov (United States)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  16. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    Science.gov (United States)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  17. A Jones matrix formalism for simulating 3D Polarised Light Imaging of brain tissue

    CERN Document Server

    Menzel, Miriam; De Raedt, Hans; Reckfort, Julia; Amunts, Katrin; Axer, Markus

    2015-01-01

    The neuroimaging technique 3D Polarised Light Imaging (3D-PLI) provides a high-resolution reconstruction of nerve fibres in human post-mortem brains. The orientations of the fibres are derived from birefringence measurements of histological brain sections, assuming that the nerve fibres - consisting of an axon and a surrounding myelin sheath - are uniaxially birefringent and that the measured optic axis is oriented in the direction of the nerve fibres (macroscopic model). Although experimental studies support this assumption, the molecular structure of the myelin sheath suggests that the birefringence of a nerve fibre can be described more precisely by multiple optic axes oriented radially around the fibre axis (microscopic model). In this paper, we compare the use of the macroscopic and the microscopic model for simulating 3D-PLI by means of the Jones matrix formalism. The simulations show that the macroscopic model ensures a reliable estimation of the fibre orientations as long as the polarimeter does not resolve ...
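
    A minimal Jones-calculus sketch of a PLI-style measurement (macroscopic model: a single uniaxial retarder with retardation delta and in-plane axis phi, circularly polarised input, rotating analyzer) reproduces the familiar sinusoidal signal. This is a generic polarimeter configuration chosen for illustration, not necessarily the exact 3D-PLI optics of the paper.

```python
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def retarder(delta, phi):
    """Jones matrix of a linear retarder: retardation delta, fast axis at phi."""
    D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot(phi) @ D @ rot(-phi)

def pli_signal(delta, phi, rhos):
    """Intensity behind a rotating analyzer, circularly polarised input."""
    J_in = np.array([1.0, 1j]) / np.sqrt(2)          # circular polarisation state
    out = []
    for rho in rhos:
        analyzer = np.array([np.cos(rho), np.sin(rho)])  # projection axis
        amp = analyzer @ (retarder(delta, phi) @ J_in)
        out.append(abs(amp) ** 2)
    return np.array(out)

rhos = np.linspace(0, np.pi, 18, endpoint=False)
delta, phi = 1.1, 0.4
I = pli_signal(delta, phi, rhos)
# The matrix product reproduces I(rho) = 0.5 * (1 - sin(delta) * sin(2*(rho - phi))),
# from which the fibre direction phi and |sin(delta)| can be fitted.
```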

  18. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it implies how the pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sub-level semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  19. Configurable Input Devices for 3D Interaction using Optical Tracking

    NARCIS (Netherlands)

    Rhijn, A.J. van

    2007-01-01

    Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed in order to increase the usability and usefulness of virtual reality. Human beings have difficulties understanding 3D spatial relationships and manipulating 3D user interfaces, which require the contr

  20. 3D printing of tissue-simulating phantoms for calibration of biomedical optical devices

    Science.gov (United States)

    Zhao, Zuhua; Zhou, Ximing; Shen, Shuwei; Liu, Guangli; Yuan, Li; Meng, Yuquan; Lv, Xiang; Shao, Pengfei; Dong, Erbao; Xu, Ronald X.

    2016-10-01

    Clinical utility of many biomedical optical devices is limited by the lack of effective and traceable calibration methods. Optical phantoms that simulate biological tissues for optical device calibration have been explored. However, these phantoms can hardly simulate both the structural and the optical properties of multi-layered biological tissue. To address this limitation, we develop a 3D printing production line that integrates spin coating, light-cured 3D printing and fused deposition modeling (FDM) for freeform fabrication of optical phantoms with mechanical and optical heterogeneities. With gel wax, polydimethylsiloxane (PDMS) and colorless light-curable ink as matrix materials, titanium dioxide (TiO2) powder as the scattering ingredient, and graphite powder and carbon black as the absorption ingredients, a high-precision multilayer phantom is fabricated. The absorption and scattering coefficients of each layer are measured by a double integrating sphere system. The results demonstrate that the system has the potential to fabricate reliable tissue-simulating phantoms to calibrate optical imaging devices.

  1. Fast, high-resolution 3D dosimetry utilizing a novel optical-CT scanner incorporating tertiary telecentric collimation

    OpenAIRE

    Sakhalkar, H. S.; Oldham, M

    2008-01-01

    This study introduces a charge coupled device (CCD) area detector based optical-computed tomography (optical-CT) scanner for comprehensive verification of radiation dose distributions recorded in nonscattering radiochromic dosimeters. Defining characteristics include: (i) a very fast scanning time of ~5 min to acquire a complete three-dimensional (3D) dataset, (ii) improved image formation through the use of custom telecentric optics, which ensures accurate projection images and minimizes art...

  2. Quasi-3D electron cyclotron emission imaging on J-TEXT

    Science.gov (United States)

    Zhao, Zhenling; Zhu, Yilun; Tong, Li; Xie, Jinlin; Liu, Wandong; Yu, Changxuan; Yang, Zhoujun; Zhuang, Ge; Luhmann, N. C., Jr.; Domier, C. W.

    2017-09-01

    Electron cyclotron emission imaging (ECEI) can provide measurements of 2D electron temperature fluctuations with high temporal and spatial resolution in magnetic fusion plasma devices. Two ECEI systems located in different toroidal ports with 67.5 degree separation have been implemented on J-TEXT to study the 3D structure of magnetohydrodynamic (MHD) instabilities. Each system consists of 12 (vertical) × 16 (horizontal) = 192 channels, and the image of the 2nd-harmonic X-mode electron cyclotron emission can be captured continuously in the core plasma region. A field-curvature adjustment lens concept is developed to control the imaging plane of the receiving optics of the ECEI systems. The field curvature of the image can be controlled to match the emission layer. Consequently, a quasi-3D image of the MHD instability in the core of the plasma has been achieved.

  3. Experiments on terahertz 3D scanning microscopic imaging

    Science.gov (United States)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies of THz coaxial transmission confocal microscopy, but few studies of THz dual-axis reflective confocal microscopy have been reported. In this paper, we utilize a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with a THz coaxial transmission confocal microscope, the microscope adopted here attains higher axial resolution at the expense of reduced lateral resolution, yielding more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet metal with three layers were scanned. The experimental results indicate that the system can extract the two Chinese characters "Zhong" and "Hua" and the three layers of the combined sheet metal. It can be predicted that the microscope can be applied to biology, medicine and other fields in the future due to its favorable 3D imaging capability.

  4. 3-D visualization and animation technologies in anatomical imaging

    Science.gov (United States)

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  5. Obstacle detection and terrain characterization using optical flow without 3-D reconstruction

    Science.gov (United States)

    Young, Gin-Shu; Hong, Tsai Hong; Herman, Martin; Yang, Jackson C. S.

    1992-11-01

    For many applications in computer vision, it is important to recover range, 3-D motion, and/or scene geometry from a sequence of images. However, there are many robot behaviors which can be achieved by extracting relevant 2-D information from the imagery and using it directly, without such recovery. In this paper, we focus on two behaviors: obstacle avoidance and terrain navigation. A novel method for these two behaviors has been developed without 3-D reconstruction; this approach is often called purposive active vision. A linear relationship, plotted as a line and called the reference flow line, has been found. The difference between a plotted line and the reference flow line can be used to detect discrete obstacles above or below the reference terrain. For terrain characterization, slopes of surface regions can be calculated directly from optical flow. Some error analysis is also done. The main features of this approach are that (1) discrete obstacles are detected directly from 2-D optical flow; no 3-D reconstruction is performed; (2) terrain slopes are also calculated from 2-D optical flow; (3) knowledge about the terrain model, camera-to-ground coordinate transformation, or vehicle (or camera) motion is not required; and (4) the error sources involved are reduced to a minimum, since the only information required is a component of optical flow. An initial experiment using noisy synthetic data is also included to demonstrate the applicability and robustness of the method.
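
    The reference-flow-line idea can be sketched with synthetic data: flow over a flat ground plane varies linearly with the image coordinate, so a line is fitted to the measured flow and rows whose flow departs from the line are flagged as obstacle candidates. The slope, intercept, noise level and threshold below are invented values, not from the paper.

```python
import numpy as np

# Synthetic flow component along one image column: for a flat ground plane
# under camera translation, flow grows linearly with the image row
# (the "reference flow line").
rows = np.arange(100)
flow = 0.05 * rows + 1.0                       # reference line (slope/offset assumed)
flow[60:70] += 1.5                             # an obstacle sticking out of the plane
flow += np.random.default_rng(0).normal(0, 0.02, rows.size)   # sensor noise

# Fit the reference flow line to the data by least squares over all rows.
a, b = np.polyfit(rows, flow, 1)
residual = flow - (a * rows + b)

# Rows whose flow departs from the fitted line are obstacle candidates.
obstacle_rows = rows[np.abs(residual) > 0.5]   # threshold chosen for this example
```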

  6. Block matching 3D random noise filtering for absorption optical projection tomography

    Energy Technology Data Exchange (ETDEWEB)

    Fumene Feruglio, P; Vinegoni, C; Weissleder, R [Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, 185 Cambridge Street, Boston, MA 02114 (United States); Gros, J [Department of Genetics, Harvard Medical School, 77 Avenue Louis Pasteur, Boston MA 02115 (United States); Sbarbati, A, E-mail: cvinegoni@mgh.harvard.ed [Department of Morphological and Biomedical Sciences, University of Verona, Strada Le Grazie 8, 37134 Verona (Italy)

    2010-09-21

    Absorption and emission optical projection tomography (OPT), alternatively referred to as optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT), are recently developed three-dimensional imaging techniques with value for developmental biology and ex vivo gene expression studies. The techniques' principles are similar to those used for x-ray computed tomography and are based on the approximation of negligible light scattering in optically cleared samples. The optical clearing is achieved by a chemical procedure which aims at substituting the cellular fluids within the sample with a solution that index-matches the cell membranes. Once cleared, the sample presents very low scattering and is then illuminated with a collimated light beam whose intensity is captured in transillumination mode by a CCD camera. Different projection images of the sample are subsequently obtained over a full 360° rotation, and a standard backprojection algorithm can be used, in a similar fashion as for x-ray tomography, to obtain absorption maps. Because not all biological samples present significant absorption contrast, it is not always possible to obtain projections with a good signal-to-noise ratio, a condition necessary to achieve high-quality tomographic reconstructions. Such is the case, for example, for early-stage embryos. In this work we demonstrate how, through the use of a random noise removal algorithm, the image quality of the reconstructions can be considerably improved even when noise is strongly present in the acquired projections. Specifically, we implemented a block matching 3D (BM3D) filter, applying it separately to each acquired transillumination projection before performing a complete three-dimensional tomographic reconstruction. To test the efficiency of the adopted filtering scheme, a phantom and a real biological sample were processed. In both cases, the BM3D filter led to a signal-to-noise ratio

  7. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  8. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and most sophisticated perceptual modality for getting information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields, and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.
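
    The motion-parallax cue reduces to a simple relation: for a lateral camera translation t and focal length f (in pixels), a point at depth Z shifts by u = f·t/Z in the image, so depth follows directly from the measured shift. The focal length, translation and depths below are illustrative numbers.

```python
import numpy as np

# Depth from motion parallax: a camera translating laterally by t between
# two frames sees a feature shift u = f * t / Z pixels, so Z = f * t / u.
f_px = 800.0         # focal length in pixels (assumed)
t = 0.02             # lateral camera/head translation in metres (assumed)

true_depth = np.array([0.5, 1.0, 2.0, 4.0])
shift = f_px * t / true_depth          # observed image motion in pixels

est_depth = f_px * t / shift           # invert the parallax relation
# Nearer objects produce larger image motion, the cue the birds exploit.
```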

  9. Feature issue of digital holography and 3D imaging (DH) introduction.

    Science.gov (United States)

    Hayasaki, Yoshio; Zhou, Changhe; Popescu, Gabriel; Onural, Levent

    2014-11-17

    The OSA Topical Meeting "Digital Holography and 3D Imaging (DH)" was held in Seattle, Washington, July 13-17, 2014. Feature issues based on the DH meeting series have been released by Applied Optics (AO) since 2007. This year Optics Express (OE) and AO jointly decided to have one such feature issue in each journal. The DH meeting will continue in the future, as expected, and the next meeting is scheduled for 24-28 May 2015 at the Shanghai Institute of Optics and Fine Mechanics, Shanghai, China.

  10. Optical 3D shape measurement for dynamic process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Dynamic 3D shape measurement is essential to the study of machine vision, hydromechanics, high-speed rotation, material deformation, stress analysis, deformation under impact, explosion processes and biomedicine. In this paper, the results of our research in recent years, including theoretical analysis, some feasible methods and the relevant verifying experimental results, are reported concisely. At present, these results have been used in the instruments we assemble for 3D shape measurement of dynamic processes.

  11. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Science.gov (United States)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

    3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is the accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets, acquired using a freehand 3D US system. Deformation is provided by using a 3D thin-plate spline (TPS). Our method is fundamentally different from the previous ones in that the acquired scattered US 2D slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over the traditional registration and spatial compounding methods: (i) we only perform one 3D US reconstruction, for the first acquired data set, therefore we save the computation time required to reconstruct subsequent acquired scans, (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction, and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations allowing additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available
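
    A 3-D thin-plate spline of the kind used for the deformable registration can be sketched as follows, using the 3-D biharmonic kernel U(r) = r and the standard bordered linear system for the kernel weights and affine part. The control points and displacements are synthetic, and the paper's adaptive local-bilinear approximation is not reproduced.

```python
import numpy as np

def tps3d_fit(src, dst):
    """Fit a 3-D thin-plate spline mapping src control points onto dst.
    Kernel U(r) = r, the 3-D biharmonic kernel."""
    n = src.shape[0]
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst
    return np.linalg.solve(A, b)    # rows: n kernel weights, then affine part

def tps3d_apply(params, src, pts):
    """Evaluate the fitted spline at arbitrary 3-D points."""
    n = src.shape[0]
    U = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ params[:n] + P @ params[n:]

rng = np.random.default_rng(1)
src = rng.uniform(0, 1, (10, 3))
dst = src + 0.05 * rng.normal(size=(10, 3))    # small synthetic deformation
params = tps3d_fit(src, dst)
warped = tps3d_apply(params, src, src)         # spline interpolates the controls
```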

  12. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy produces large volumes of image data to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data to be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (a blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the 1D knockout aorta lay to the left of that of control and 1B knockout aorta, indicating a reduction in 1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.
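
    Histogram analysis of a 3-D image volume reduces to a few lines. The sketch below uses synthetic volumes standing in for autofluorescence and ligand-bound tissue, and expresses the rightward shift the abstract describes as a difference of histogram means; all intensities and volume sizes are invented.

```python
import numpy as np

def volume_histogram(vol, bins=64, intensity_range=(0, 255)):
    """Normalised intensity histogram of a 3-D image volume."""
    h, edges = np.histogram(vol.ravel(), bins=bins, range=intensity_range,
                            density=True)
    return h, 0.5 * (edges[:-1] + edges[1:])   # histogram and bin centres

rng = np.random.default_rng(2)
autofluor = rng.normal(40, 8, (32, 32, 16)).clip(0, 255)   # unstained tissue
bound = rng.normal(90, 15, (32, 32, 16)).clip(0, 255)      # + fluorescent ligand

h0, centers = volume_histogram(autofluor)
h1, _ = volume_histogram(bound)

# Ligand binding shifts the histogram to the right of the autofluorescence
# curve; compare the histogram means as a summary statistic.
mean0 = (h0 * centers).sum() / h0.sum()
mean1 = (h1 * centers).sum() / h1.sum()
```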

  13. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is widely used. Much effort has been put into three-dimensional imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to figure an initial disparity. Using the depth map as a constraint during stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
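
    The prior-constrained matching idea can be sketched independently of the FPGA implementation: a sum-of-absolute-differences block matcher whose per-pixel search range is centred on the TOF disparity estimate. All sizes, the synthetic image pair and the coarse prior value are illustrative.

```python
import numpy as np

def guided_block_match(left, right, prior, search=2, half=2):
    """SAD block matching; the TOF depth prior limits each pixel's search range."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            d0 = int(prior[y, x])
            for d in range(max(0, d0 - search), d0 + search + 1):
                if x - d < half:
                    continue                   # candidate window off the image
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right view is the left view shifted by a disparity of 3;
# the "TOF prior" is a coarse estimate (2) whose ±2 search window covers it.
rng = np.random.default_rng(6)
left = rng.uniform(0.0, 1.0, (20, 30))
right = np.roll(left, -3, axis=1)              # wrapped border columns are ignored
prior = np.full(left.shape, 2)
disp = guided_block_match(left, right, prior)
```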

  14. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
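
    The Methods paragraph can be sketched end to end for the simplest (first-order) polynomial model: fit the displacement samples by least squares, read the displacement gradient off the coefficients, and form the full Green-Lagrange strain tensor. The deformation gradient, noise level and point count below are invented test values, not DENSE data.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (200, 3))            # material points in a local region

# Ground-truth homogeneous deformation: u = G @ x (displacement gradient G).
G = np.array([[0.10, 0.02, 0.00],
              [0.00, -0.05, 0.01],
              [0.03, 0.00, 0.04]])
disp = pts @ G.T + 0.001 * rng.normal(size=pts.shape)   # noisy displacement samples

# First-order polynomial (affine) least-squares model of the displacement field.
basis = np.hstack([np.ones((pts.shape[0], 1)), pts])    # [1, x, y, z]
coef, *_ = np.linalg.lstsq(basis, disp, rcond=None)     # coef shape (4, 3)
G_hat = coef[1:].T                                      # estimated gradient du/dx

# Full Green-Lagrange strain tensor from the fitted model.
F = np.eye(3) + G_hat
E = 0.5 * (F.T @ F - np.eye(3))
E_true = 0.5 * ((np.eye(3) + G).T @ (np.eye(3) + G) - np.eye(3))
```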

  15. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  16. Air-structured optical fibre drawn from a 3D-printed preform

    CERN Document Server

    Cook, Kevin; Leon-Saval, Sergio; Reid, Zane; Hossain, Md Arafat; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    A structured optical fibre is drawn from a 3D-printed structured preform. Preforms containing a single ring of holes around the core are fabricated using a filament made from a modified butadiene polymer. More broadly, 3D printers capable of processing soft glasses, silica and other materials are likely to come online in the not-so-distant future. 3D printing of optical preforms signals a new milestone in optical fibre manufacture.

  17. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
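
    The prior-knowledge step that Discrete Tomography adds can be illustrated in isolation: when the materials (and hence the gray levels) present in the object are known, each voxel of an approximate continuous reconstruction is projected onto the nearest admissible level. The levels, phantom and noise magnitude below are invented for the demonstration; a full DART-style scheme would alternate this step with algebraic reconstruction updates.

```python
import numpy as np

# Discrete tomography prior: the scanned object contains only a few known
# materials, i.e. a few admissible gray levels.
levels = np.array([0.0, 0.5, 1.0])             # e.g. air, plastic, solder (assumed)

rng = np.random.default_rng(4)
phantom = levels[rng.integers(0, 3, (16, 16))]           # ground-truth object
recon = phantom + rng.normal(0, 0.05, phantom.shape)     # imperfect reconstruction

# Projection onto the discrete set: nearest admissible gray level per voxel.
discretized = levels[np.abs(recon[..., None] - levels).argmin(axis=-1)]
```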

  18. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper using image segmentation of a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so edges can be detected by an edge detection operator. Closed edge chains are then formed from edge lines linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed under different placement modes proves the validity of the method.
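
    The final counting step, segmenting the height map and counting connected regions, can be sketched with a plain flood fill; here a simple height threshold stands in for the edge-chain segmentation, and the belt height and bag sizes are invented.

```python
import numpy as np
from collections import deque

def count_bags(height, belt=0.02):
    """Count connected regions of the height map that rise above the belt."""
    mask = height > belt
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    n = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                n += 1                           # new bag found: flood fill it
                seen[sy, sx] = True
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n

# Two bags on the belt, projected from the 3-D point cloud to a height map.
hm = np.zeros((40, 60))
hm[5:15, 5:20] = 0.30      # first bag, 30 cm tall
hm[20:35, 30:55] = 0.45    # second bag, 45 cm tall
```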

  19. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    Science.gov (United States)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
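
    Fuzzy c-means alternates two closed-form updates: memberships from distances to the centres, then centres from membership-weighted means. The sketch below runs FCM on synthetic 1-D intensities standing in for two tissue classes; the quantile initialization is just one possible strategy of the kind the abstract mentions, not the paper's.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1-D data: fuzzifier m, c clusters."""
    centres = np.quantile(X, np.linspace(0.25, 0.75, c))   # spread initial centres
    U = np.full((c, X.shape[0]), 1.0 / c)
    for _ in range(iters):
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        d = np.abs(centres[:, None] - X[None, :]) + 1e-12
        U = (1.0 / d) ** (2.0 / (m - 1.0))
        U /= U.sum(axis=0)
        # Centre update: membership-weighted mean of the data.
        centres = (U**m @ X) / (U**m).sum(axis=1)
    return centres, U

# Two synthetic "tissue classes" from a flattened MR volume (invented values).
rng = np.random.default_rng(5)
X = np.concatenate([rng.normal(30, 3, 500), rng.normal(80, 3, 500)])
centres, U = fcm(X)
```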

  20. Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging

    Science.gov (United States)

    Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.

    2016-11-01

    Blood vessels are the only system that provides nutrients and oxygen to every part of the body. Many diseases can have significant effects on blood vessel formation, so the vascular network can be a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by expensive costs, contrast agents, ionizing radiation, or a combination of the above. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth compared to purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired on rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four out of the six descriptors (VD, NB, DM, ICM; p ...) were statistically different ... approach to obtain a quantitative characterization of the vascular network from 3D photoacoustic images without any exogenous contrast agent, which can assess microenvironmental changes related to
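
    Two of the tortuosity descriptors above can be computed directly from a skeleton branch given as a polyline: DM is the path length divided by the straight-line distance between the endpoints, and a SOAM-style measure sums the turning angles along the branch per unit path length. The synthetic curves below are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np

def branch_metrics(path):
    """Tortuosity descriptors of one skeleton branch, an (n, 3) polyline:
    DM   = total path length / straight-line endpoint distance,
    SOAM = summed turning angle (degrees) per unit path length."""
    seg = np.diff(path, axis=0)
    seglen = np.linalg.norm(seg, axis=1)
    dm = seglen.sum() / np.linalg.norm(path[-1] - path[0])
    u = seg / seglen[:, None]                       # unit tangents
    cosang = np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0)
    soam = np.degrees(np.arccos(cosang)).sum() / seglen.sum()
    return dm, soam

t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)         # tortuous vessel
line = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)  # straight vessel

dm_h, soam_h = branch_metrics(helix)
dm_l, soam_l = branch_metrics(line)
# The tortuous branch scores higher on both descriptors than the straight one.
```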

  1. Extended volume and surface scatterometer for optical characterization of 3D-printed elements

    Science.gov (United States)

    Dannenberg, Florian; Uebeler, Denise; Weiß, Jürgen; Pescoller, Lukas; Weyer, Cornelia; Hahlweg, Cornelius

    2015-09-01

    The use of 3D printing technology seems to be a promising way for low-cost prototyping, not only of mechanical but also of optical components or systems. It is especially useful in applications where customized equipment is repeatedly subject to immediate destruction, as in experimental detonics and the like. Due to the nature of the 3D printing process, there is a certain inner texture, and therefore inhomogeneous optical behaviour, to be taken into account, which also indicates mechanical anisotropy. Recent investigations are dedicated to quantifying the optical properties of such printed bodies and deriving corresponding optimization strategies for the printing process. Besides mounting, alignment, and illumination means, refractive and reflective elements are also subject to investigation. The proposed measurement methods are based on an imaging near-field scatterometer for combined volume and surface scatter measurements, as proposed in previous papers. Continuing last year's paper on the use of near-field imaging, which is basically a reflective shadowgraph method, for characterizing glossy surfaces such as printed matter or laminated material, further developments are discussed. The device has been extended for observation of photoelasticity effects and thereby the homogeneity of polarization behaviour. A refined experimental set-up is introduced. Variation of the plane of focus and of the incident angle is used to separate the images of the layers of the surface under test, and cross- and parallel-polarization techniques are applied. Practical examples from current research studies are included.

  2. Needle placement for piriformis injection using 3-D imaging.

    Science.gov (United States)

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock that accounts for 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice. The treatment of piriformis syndrome has evolved to include fluoroscopy and EMG with CT guidance. We present a case series of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By contrast, a recent study reported that fluoroscopically guided injections achieved only 30% accuracy, roughly one third that of ultrasound-guided injections. This novel technique exhibited a needle guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.
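
    The three-point registration step that aligns the ultrasound image with the CT or MR scan can be sketched as a least-squares rigid fit (Kabsch algorithm). This is a generic sketch, not the navigation vendor's algorithm, and the landmark coordinates below are made up:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding points, N >= 3.
    Returns (R, t) with dst ≈ src @ R.T + t (Kabsch algorithm).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

    With noisy landmarks the same formula gives the best rigid fit in the least-squares sense, which is why more than the minimum of 3 points improves accuracy.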

  3. Flexible Holographic Fabrication of 3D Photonic Crystal Templates with Polarization Control through a 3D Printed Reflective Optical Element

    Directory of Open Access Journals (Sweden)

    David Lowell

    2016-07-01

    In this paper, we have systematically studied the holographic fabrication of three-dimensional (3D) structures using a single 3D printed reflective optical element (ROE), taking advantage of the ease of design and 3D printing of the ROE. The reflective surface was set up at non-Brewster angles to reflect both s- and p-polarized beams for the interference. The wide selection of reflective surface materials and interference angles allows control of the ratio of s- to p-polarization and of the side-beam to central-beam intensity ratio for interference lithography. Photonic bandgap simulations have also indicated that both s- and p-polarized waves are sometimes needed in the reflected side beams to maximize the photonic bandgap size and achieve certain filling fractions of dielectric inside the photonic crystals. The flexibility of single-ROE, single-exposure holographic fabrication of 3D structures was demonstrated with reflective surfaces of ROEs at non-Brewster angles, highlighting the capability of the ROE technique to produce umbrella configurations of side beams with arbitrary angles and polarizations and paving the way for rapid throughput of various photonic crystal templates.
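
    The beam-geometry part of such an interference exposure can be sketched as a scalar-field superposition of plane waves. Polarization, which the paper controls via the ROE, is ignored here, and all wave vectors and amplitudes are illustrative:

```python
import numpy as np

def interference_intensity(k_vectors, amplitudes, points):
    """|sum of plane waves|^2 at the given points (scalar-field sketch).

    k_vectors:  (N, 3) wave vectors
    amplitudes: (N,) complex amplitudes
    points:     (M, 3) sample positions
    """
    k = np.asarray(k_vectors, float)
    a = np.asarray(amplitudes, complex)
    p = np.asarray(points, float)
    phase = p @ k.T                  # (M, N) phases k·r
    field = np.exp(1j * phase) @ a   # superposed complex field
    return np.abs(field) ** 2

# Two counter-propagating unit-amplitude waves (wavelength 1):
# constructive at the origin, destructive a quarter wavelength away.
k0 = 2 * np.pi
I_vals = interference_intensity([[k0, 0, 0], [-k0, 0, 0]],
                                [1.0, 1.0],
                                [[0.0, 0, 0], [0.25, 0, 0]])
```

    For an umbrella configuration, one central beam plus several tilted side beams would simply be more rows in `k_vectors`.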

  4. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
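
    A brute-force CPU version of the bilateral filter makes the roles of the stencil size and the two scaling parameters explicit. This is a didactic sketch, far from the paper's GPU implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Brute-force 2D bilateral filter (illustrative CPU version).

    radius  -> half of the stencil size
    sigma_s -> spatial scaling parameter (pixels)
    sigma_r -> range (intensity) scaling parameter
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# Edge preservation: a sharp step survives where a box blur would smear it.
step = np.zeros((6, 8)); step[:, 4:] = 1.0
smoothed = bilateral_filter(step, radius=2, sigma_s=2.0, sigma_r=0.1)
```

    A too-large `sigma_r` degenerates toward a plain Gaussian blur, which is one way inappropriate scaling parameters yield poor denoising.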

  5. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
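
    The per-band range resolution with multiple return pulses amounts to converting each detected return's round-trip time into a range via r = c·t/2. A toy numpy sketch; the threshold-based peak picking is an assumption, not the demonstrator's actual detection scheme:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pulse_ranges(waveform, dt, threshold):
    """Ranges (m) of return pulses in one digitized receiver waveform.

    A 'pulse' is a local peak above threshold; its sample index times dt
    is the round-trip time, so range = c * t / 2. Supports multiple
    returns per band (e.g. foliage in front of a vehicle).
    """
    w = np.asarray(waveform, float)
    peaks = (w[1:-1] > threshold) & (w[1:-1] >= w[:-2]) & (w[1:-1] >= w[2:])
    idx = np.nonzero(peaks)[0] + 1
    return C * idx * dt / 2.0

# Two returns in a 1 ns-sampled waveform: a partial obscurant and a target.
dt = 1e-9
w = np.zeros(512)
w[100] = 1.0   # near return
w[200] = 0.7   # far return
ranges = pulse_ranges(w, dt, threshold=0.5)
```

    Running this per spectral band gives a range-resolved spectrum at each pixel, which is what allows spectral unmixing of partially obscured objects.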

  6. Design of 3D isotropic metamaterial device using smart transformation optics.

    Science.gov (United States)

    Shin, Dongheok; Kim, Junhyun; Yoo, Do-Sik; Kim, Kyoungsik

    2015-08-24

    We report here a design method for a three-dimensional (3D) isotropic transformation optical device using smart transformation optics. Inspired by solid mechanics, smart transformation optics regards a transformation optical medium as an elastic solid and deformations as coordinate transformations. Building on our previous work on 2D smart transformation optics, we introduce a 3D method to design transformation optical devices that maintain isotropic material properties for all polarizations by imposing free or nearly free boundary conditions. Owing to the material isotropy, it is possible to fabricate such devices with structural metamaterials made purely of common dielectric materials. The practical importance of the method reported here lies in the fact that it enables us to fabricate, without difficulty, arbitrarily shaped 3D devices with existing 3D printing technology.

  7. Terahertz imaging system based on Bessel beams via 3D printed axicons at 100 GHz

    Science.gov (United States)

    Liu, Changming; Wei, Xuli; Zhang, Zhongqi; Wang, Kejia; Yang, Zhenggang; Liu, Jinsong

    2014-11-01

    Terahertz (THz) imaging technology shows great advantages in nondestructive testing (NDT), since many optically opaque materials are transparent to THz waves. In this paper, we design and fabricate dielectric axicons by 3D printing technology to generate zeroth-order Bessel beams. We further present an all-electronic THz imaging system using the generated Bessel beams at 100 GHz. Resolution targets made of printed circuit board are imaged, and the results clearly show the extended depth of focus of the Bessel beam, indicating its promise for THz NDT.
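
    The transverse intensity of a zeroth-order Bessel beam follows J0(k_r·ρ)², with the radial wavenumber k_r set by the axicon angle and refractive index. A sketch that evaluates J0 from its integral representation (the k_r value below is made up for illustration):

```python
import numpy as np

def bessel_j0(x, n=4000):
    """J0(x) via its integral form (1/pi) * ∫_0^pi cos(x sin t) dt.

    Midpoint-rule quadrature; accurate to ~1e-6 for moderate x.
    """
    t = (np.arange(n) + 0.5) * np.pi / n
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.cos(np.outer(x, np.sin(t))).mean(axis=1)

# Transverse profile I(rho) ∝ J0(k_r * rho)^2 behind an axicon.
k_r = 2.0e3                         # radial wavenumber, 1/m (assumed)
rho = np.linspace(0.0, 5e-3, 256)   # radial coordinate, m
intensity = bessel_j0(k_r * rho) ** 2
```

    The central lobe of this profile stays narrow over the Bessel zone, which is the extended depth of focus exploited for NDT imaging.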

  8. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform was created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
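
    Silhouette-based reconstruction can be illustrated with a toy orthographic visual hull: a voxel survives only if it projects inside every silhouette. Two axis-aligned views are assumed here for simplicity, unlike the arbitrary camera angles used in the study:

```python
import numpy as np

def visual_hull(sil_xy, sil_xz):
    """Carve a voxel volume from two orthographic silhouettes (toy model).

    sil_xy: silhouette seen along the z axis, shape (Y, X)
    sil_xz: silhouette seen along the y axis, shape (Z, X)
    A voxel is kept only if it projects inside both silhouettes.
    """
    Y, X = sil_xy.shape
    Z = sil_xz.shape[0]
    vol = np.ones((Z, Y, X), bool)
    vol &= sil_xy[None, :, :].astype(bool)   # constraint from the z-view
    vol &= sil_xz[:, None, :].astype(bool)   # constraint from the y-view
    return vol

# Two 2x2 square silhouettes carve out a 2x2x2 cuboid.
sxy = np.zeros((4, 4), int); sxy[1:3, 1:3] = 1
sxz = np.zeros((4, 4), int); sxz[0:2, 1:3] = 1
hull = visual_hull(sxy, sxz)
```

    Adding views from more projection angles only removes voxels, which is why reconstruction accuracy improves with the number of sequential images.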

  9. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The current aim of MRT researchers is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate high resolution 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a Nikon A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as fine as 0.09 μm/pixel. The profiles obtained demonstrated the change in the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT-irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  10. Slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow.

    Science.gov (United States)

    Jagannadh, Veerendra Kalyan; Mackenzie, Mark D; Pal, Parama; Kar, Ajoy K; Gorthi, Sai Siva

    2016-09-19

    Three-dimensional cellular imaging techniques have become indispensable tools in biological research and medical diagnostics. Conventional 3D imaging approaches employ focal stack collection to image different planes of the cell. In this work, we present the design and fabrication of a slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow. The approach employs slanted microfluidic channels fabricated in glass using ultrafast laser inscription. The slanted geometry of the channels ensures that samples come into and go out of focus as they pass through the microscope's imaging field of view. This novel approach enables the collection of focal stacks in a straightforward and automated manner, even with off-the-shelf microscopes that are not equipped with motorized translation/rotation sample stages. The presented approach not only simplifies conventional focal stack collection, but also extends the capabilities of a regular widefield fluorescence microscope to match the features of a sophisticated confocal microscope. We demonstrate the retrieval of sectioned slices of microspheres and cells, using computational algorithms to enhance the signal-to-noise ratio (SNR) of the collected raw images. The retrieved sectioned images have been used to visualize fluorescent microspheres and a bovine sperm cell nucleus in 3D with a regular widefield fluorescence microscope. We have been able to achieve sectioning of approximately 200 slices per cell, which corresponds to a spatial translation of ∼15 nm per slice along the optical axis of the microscope.

  11. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    Science.gov (United States)

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  12. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    CERN Document Server

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subjected to different osmolality conditions: hypotonic, isotonic, and hypertonic solutions. For each condition, the shapes of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion to the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine the volume, surface area, sphericity index, and refractive index of the RBCs for each osmotic condition.
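
    The sphericity index reported per osmotic condition is commonly computed as π^(1/3)·(6V)^(2/3)/A, which equals 1 for a perfect sphere and falls below 1 for flattened shapes such as a biconcave RBC. A small sketch (the radius value is illustrative):

```python
import math

def sphericity_index(volume, area):
    """Sphericity index: exactly 1.0 for a sphere, < 1 otherwise."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area

# Sanity check against an ideal sphere (radius is an illustrative
# micrometre-scale value, not a measured RBC parameter).
r = 3.9e-6                        # metres
v = 4 / 3 * math.pi * r ** 3
a = 4 * math.pi * r ** 2
si_sphere = sphericity_index(v, a)   # -> 1.0
```

    The prefactor π^(1/3)·6^(2/3) ≈ 4.84, so this matches the 4.84·V^(2/3)/A form often quoted in the RBC literature.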

  13. 3D-printed eagle eye: Compound microlens system for foveated imaging

    Science.gov (United States)

    Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.

    2017-01-01

    We present a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an angular resolution of up to 2 cycles/deg in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm. Printing the objectives directly on the sensor enables fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security. PMID:28246646

  14. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction, including the planning and development of entire new sub-sections of metropolitan areas. Before planning and construction can commence, however, development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), the thickness of top soil or rock layers, and the depth and elastic parameters of the basement comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques for their effectiveness in interrogating the subsurface for the presence of karst-like collapse structures. The survey covered an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones were 3-component 10 Hz velocity sensors. So far, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be used to determine the elastic moduli of the subsurface rock layers.

  15. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Science.gov (United States)

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  16. Multiframe image point matching and 3-d surface reconstruction.

    Science.gov (United States)

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
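
    The two-frame baseline that the JMM and WVM are compared against can be sketched as window matching along a scanline. Sum of squared differences is used here for brevity; the paper's joint-moment and window-variance statistics are not reproduced:

```python
import numpy as np

def best_match(ref, target, x, y, radius, search):
    """Two-frame window matching along a scanline (baseline method).

    Slides the (2*radius+1)^2 window of `ref` centred at (x, y) across
    `target` and returns the horizontal disparity with the smallest
    sum of squared differences.
    """
    r = radius
    patch = ref[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(-search, search + 1):
        cand = target[y - r:y + r + 1, x - r + d:x + r + 1 + d].astype(float)
        cost = np.sum((patch - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the second frame is the first shifted right by 3 px,
# so the correct disparity at any interior point is +3.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
tgt = np.roll(ref, 3, axis=1)
d = best_match(ref, tgt, 15, 15, radius=3, search=5)
```

    The multiframe JMM/WVM idea is to combine such per-frame match curves so that the combined extremum sharpens as the number of frames grows.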

  17. Ultra wide band millimeter wave holographic "3-D" imaging of concealed targets on mannequins

    Energy Technology Data Exchange (ETDEWEB)

    Collins, H.D.; Hall, T.E.; Gribble, R.P. [Pacific Northwest Lab., Richland, WA (United States). Acoustics & Electromagnetic Imaging Group

    1994-08-01

    Ultra wide band (chirp frequency) millimeter wave "3-D" holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent "3-D" holographic images of full size mannequins with concealed weapons illustrate the efficacy of this technique for airport security. A chirp frequency (24 GHz to 40 GHz) holographic system was used to construct extremely high resolution images (optical quality) using a polyrod antenna in a bi-static configuration with an x-y scanner. Millimeter wave chirp frequency holography can be simply described as a multi-frequency detection and imaging technique where the target's reflected signals are decomposed into discrete frequency holograms and reconstructed into a single composite "3-D" image. The implementation of this technology for security at airports, government installations, etc., will require real-time (video rate) data acquisition and computer image reconstruction of large volumetric data sets. This implies rapid scanning techniques or large, complex "2-D" arrays and high speed computing for successful commercialization of this technology.

  18. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    Science.gov (United States)

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  19. A 3-D fluorescence imaging system incorporating structured illumination technology

    Science.gov (United States)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information by depth modulation analysis of a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering, and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing, and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
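
    Depth-resolved sectioning from a shifted grid pattern is classically demodulated from three frames with grid phases 0, 2π/3 and 4π/3 using the Neil-style square-root-of-differences formula. This generic sketch is not the team's processing chain:

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Optically sectioned image from three grid-phase-shifted frames.

    Pairwise differences cancel the unmodulated (out-of-focus) light,
    leaving a result proportional to the in-focus modulation amplitude.
    """
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) / np.sqrt(2)

# Synthetic 1D check: an in-focus, grid-modulated signal of strength
# `obj` sits on an unmodulated defocused background `bg`.
x = np.linspace(0, 4 * np.pi, 128)
obj, bg = 2.0, 5.0
frames = [bg + obj * (1 + np.cos(x + p)) / 2
          for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
sec = sectioned_image(*frames)
```

    With this normalization the output is a constant 3/2 times the modulation amplitude obj/2, independent of the background level, which is exactly the sectioning effect.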

  20. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparative experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the masked features also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct a single target building several times.

  1. (HEL MRI) 3D Meta Optics for High Energy Lasers

    Science.gov (United States)

    2016-09-13

    (Report body garbled in extraction. Legible fragments cite work on orbital-angular-momentum multiplexing for optical communication links (Optics Express 24(9), 2016, 9794-9805) and on wavelength selection and polarization multiplexing of blue laser diodes (IEEE Photonics Technology Letters), and note that spatial multiplexing can take advantage of a non-Gaussian beam profile if the components are to be used as out-couplers in bulk lasers.)

  2. Optical lens-shift design for increasing spatial resolution of 3D ToF cameras

    Science.gov (United States)

    Lietz, Henrik; Hassan, M. Muneeb; Eberhardt, Jörg

    2017-02-01

    Sensor resolution of 3D time-of-flight (ToF) outdoor-capable cameras is strongly limited because of their large pixel dimensions. Computational imaging permits enhancement of the optical system's resolving power without changing physical sensor properties. Super-resolution (SR) algorithms superimpose several sub-pixel-shifted low-resolution (LR) images to overcome the system's limited spatial sampling rate. In this paper, we propose a novel opto-mechanical system that implements sub-pixel shifts by moving an optical lens. This method is more flexible in terms of implementing SR techniques than current sensor-shift approaches. In addition, we describe an SR observation model that has been optimized for the use of LR 3D ToF cameras. A state-of-the-art iteratively reweighted minimization algorithm executes the SR process. We show that our method achieves nearly the same resolution increase as if the pixel area were physically halved. Resolution enhancement is measured objectively for amplitude images of a static object scene.
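The shift-and-add principle behind such SR algorithms can be sketched as follows. This is a naive illustration only; the paper uses an iteratively reweighted minimization, and the function names and parameters here are assumptions:

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, factor=2):
    """Naive shift-and-add super-resolution: each low-resolution (LR)
    sample is placed at its sub-pixel-shifted position on a finer grid
    and overlapping contributions are averaged."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        yi = np.clip(np.round((np.arange(h)[:, None] + dy) * factor).astype(int),
                     0, h * factor - 1)
        xi = np.clip(np.round((np.arange(w)[None, :] + dx) * factor).astype(int),
                     0, w * factor - 1)
        yb, xb = np.broadcast_arrays(yi, xi)
        np.add.at(acc, (yb, xb), frame)   # unbuffered accumulation
        np.add.at(cnt, (yb, xb), 1)
    cnt[cnt == 0] = 1                     # leave unobserved HR pixels at zero
    return acc / cnt
```

With shifts of half an LR pixel, the interleaved samples fill the finer grid, which is exactly why the lens-shift mechanism targets sub-pixel displacements.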

  3. Protein 3D Structure Image - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PSCDB Protein 3D Structure Image Data detail Data name Protein 3D Structure Image DOI 10.189...tory of This Database Site Policy | Contact Us Protein 3D Structure Image - PSCDB | LSDB Archive ...

  4. 3D super-resolution imaging by localization microscopy.

    Science.gov (United States)

    Magenau, Astrid; Gaus, Katharina

    2015-01-01

    Fluorescence microscopy is an important tool in all fields of biology to visualize structures and monitor dynamic processes and distributions. In contrast to conventional microscopy techniques such as confocal microscopy, which are limited by their spatial resolution, super-resolution techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have made it possible to observe and quantify structures and processes at the single-molecule level. Here, we describe a method to image and quantify the molecular distribution of membrane-associated proteins in two and three dimensions with nanometer resolution.

  5. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    Science.gov (United States)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. For evaluating uniformity of image brightness, we applied optical ray tracing simulations that took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones; however, this was contradicted by the real experimental results. We offer a quantitative treatment of the illuminance uniformity of view images to estimate the misalignment of the PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation due to practical restrictions of adjustment accuracy can induce substantial non-uniformity of the view images' brightness. We find that image brightness non-uniformity critically depends on the misalignment of the PB angular orientation, even for misalignments as slight as ≤ 0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  6. Imaging articular cartilage defects with 3D fat-suppressed echo planar imaging: comparison with conventional 3D fat-suppressed gradient echo sequence and correlation with histology.

    Science.gov (United States)

    Trattnig, S; Huber, M; Breitenseher, M J; Trnka, H J; Rand, T; Kaider, A; Helbich, T; Imhof, H; Resnick, D

    1998-01-01

    Our goal was to shorten examination time in articular cartilage imaging by use of a recently developed 3D multishot echo planar imaging (EPI) sequence with fat suppression (FS). We performed comparisons with a 3D FS GE sequence using histology as the standard of reference. Twenty patients with severe gonarthrosis who were scheduled for total knee replacement underwent MRI prior to surgery. Hyaline cartilage was imaged with a 3D FS EPI and a 3D FS GE sequence. Signal intensities of articular structures were measured, and contrast-to-noise (C/N) ratios were calculated. Each knee was subdivided into 10 cartilage surfaces. From a total of 188 (3D EPI sequence) and 198 (3D GE sequence) cartilage surfaces, 73 and 79 histologic specimens, respectively, could be obtained and analyzed. MR grading of cartilage lesions on both sequences was based on a five-grade classification scheme and compared with histologic grading. The 3D FS EPI sequence provided a high C/N ratio between cartilage and subchondral bone, similar to that of the 3D FS GE sequence. The C/N ratio between cartilage and effusion was significantly lower on the 3D EPI sequence due to the higher signal intensity of fluid. MR grading of cartilage abnormalities using the 3D FS EPI and 3D GE sequences correlated well with histologic grading. The 3D FS EPI sequence agreed within one grade in 69 of 73 (94.5%) histologically proven cartilage lesions, and the 3D FS GE sequence agreed within one grade in 76 of 79 (96.2%) lesions. The gradings were identical in 38 of 73 (52.1%) and in 46 of 79 (58.3%) cases, respectively. The difference between the sensitivities was not statistically significant. The 3D FS EPI sequence is comparable with the 3D FS GE sequence in the noninvasive evaluation of advanced cartilage abnormalities but reduces scan time by a factor of 4.

  7. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    and visualized using three alternative approaches. Practically no in-plane motion (vx and vz) is measured, whereas the out-of-plane motion (vy) and the velocity magnitude exhibit the expected 2D circular-symmetric parabolic shape. It is shown that the ultrasound method is suitable for real-time data acquisition...... ultrasound scanner SARUS on a flow rig system with steady flow. The vessel of the flow-rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal...... to the center axis of the vessel, which coincides with the y-axis and the flow direction. Hence, only out-of-plane motion is expected. This motion cannot be measured by typical commercial scanners employing 1D arrays. Each frame consists of 16 flow lines steered from -15 to 15 degrees in steps of 2 degrees...

  8. Common-path biodynamic imaging for dynamic fluctuation spectroscopy of 3D living tissue

    Science.gov (United States)

    Li, Zhe; Turek, John; Nolte, David D.

    2017-03-01

    Biodynamic imaging is a novel 3D optical imaging technology based on short-coherence digital holography that measures intracellular motions of cells inside their natural microenvironments. Here both common-path and Mach-Zehnder designs are presented. Biological tissues such as tumor spheroids and ex vivo biopsies are used as targets, and backscattered light is collected as signal. Drugs are applied to samples, and their effects are evaluated by identifying biomarkers that capture intracellular dynamics from the reconstructed holograms. Through digital holography and coherence gating, information from different depths of the samples can be extracted, enabling the deep-tissue measurement of the responses to drugs.

  9. 3D mapping from high resolution satellite images

    Science.gov (United States)

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have already been common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed in order to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the necessary procedures to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high-resolution aerial and/or satellite imagery with the use of holography and 3D printing methods. As study area the island of Antiparos was chosen, as suitable data were readily available. These data were two stereo pairs of Geoeye-1 imagery and a high-resolution DTM of the island. Firstly, the theoretical bases of holography and 3D printing are described, and the two methods are analyzed and their implementation is explained. In practice, an x-axis-parallax holographic map of the Antiparos island and a full-parallax (x-axis and y-axis) holographic map are created and printed using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3DP (3D printing) method. The results are evaluated for their usefulness and efficiency.

  10. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The segmentations presented here identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
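The volume-overlap measures quoted above can be computed directly from binary masks; a minimal sketch (the function names are illustrative):

```python
import numpy as np

def dice(seg, gold):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    return 2.0 * inter / (seg.sum() + gold.sum())

def jaccard(seg, gold):
    """Jaccard index (intersection over union): |A ∩ B| / |A ∪ B|."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    union = np.logical_or(seg, gold).sum()
    return inter / union
```

Both functions accept 3D volumes unchanged, matching the paper's volume-based evaluation.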

  11. Parallel Imaging of 3D Surface Profile with Space-Division Multiplexing

    Directory of Open Access Journals (Sweden)

    Hyung Seok Lee

    2016-01-01

    Full Text Available We have developed a modified optical frequency domain imaging (OFDI) system that performs parallel imaging of three-dimensional (3D) surface profiles by using the space-division multiplexing (SDM) method with dual-area swept-source beams. We have also demonstrated that 3D surface information for two different areas could be obtained at the same time with only one camera by our method. In this study, dual fields of view (FOVs) of 11.16 mm × 5.92 mm were achieved within 0.5 s. The height range for each FOV was 460 µm, and the axial and transverse resolutions were 3.6 and 5.52 µm, respectively.

  12. An Optically-Assisted 3-D Cellular Array Machine

    Science.gov (United States)

    1993-11-05

    Presented by: Physical Optics Corporation, Research & Development Division, 20600 Gramercy Place, Suite 103, Torrance, California 90501. Principal...Computer Machine (Constructed Hardware) (Planned Hardware Design) Processing Techniques: Digital Only; Digital and Analog; Analog Processor: N/A; Cellular Neural

  13. 3D spectral imaging system for anterior chamber metrology

    Science.gov (United States)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6-Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans in excess of 75 frames per second.

  14. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    Science.gov (United States)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuffs, have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides an in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive, high-resolution (micron-level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided a lateral resolution of 12 μm, an axial resolution of 3.0 μm in air, and an imaging speed of 0.27 volumes/s, which gave the surgeon a clear view of the vessel lumen wall and of the suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameters of less than 0.5 mm. Our imaging modality could not only detect accidental suturing through the back wall of the lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in the intraoperative decision-making process and help avoid post-operative complications.

  15. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map.

    Science.gov (United States)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan

    2013-12-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information in localizing most of the boundaries and relies instead on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively.
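A minimal sketch of the graph-weight construction described above, assuming Gaussian fall-off in both geometric distance and mean-intensity difference (the scale parameters and the plain eigendecomposition are illustrative, not the authors' exact formulation):

```python
import numpy as np

def affinity(centers, means, sigma_d=5.0, sigma_i=10.0):
    """Graph affinities for diffusion mapping: each node is a pixel block
    with a geometric center and a mean intensity; weights fall off with
    geometric distance and intensity difference (sigma_d, sigma_i are
    illustrative scale parameters)."""
    centers = np.asarray(centers, float)   # (n, 2) block centers
    means = np.asarray(means, float)       # (n,) block mean intensities
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    di = np.abs(means[:, None] - means[None, :])
    return np.exp(-(d / sigma_d) ** 2 - (di / sigma_i) ** 2)

def diffusion_coords(W, n_coords=2):
    """Diffusion coordinates: leading nontrivial eigenvectors of the
    row-normalized affinity (random-walk) matrix."""
    P = W / W.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs[:, order[1:1 + n_coords]].real
```

Clustering the nodes in diffusion-coordinate space then yields the data partition used in the paper's first step.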

  16. Programmable Bidirectional Folding of Metallic Thin Films for 3D Chiral Optical Antennas.

    Science.gov (United States)

    Mao, Yifei; Zheng, Yun; Li, Can; Guo, Lin; Pan, Yini; Zhu, Rui; Xu, Jun; Zhang, Weihua; Wu, Wengang

    2017-03-10

    3D structures with characteristic lengths ranging from the nanometer to the micrometer scale often exhibit extraordinary optical properties and have become an extensively explored field for building new-generation nanophotonic devices. Although a few methods have been developed for fabricating 3D optical structures, constructing 3D structures with nanometer accuracy, diversified materials, and perfect morphology remains an extremely challenging task. This study presents a general 3D nanofabrication technique, the focused ion beam stress-induced deformation process, which allows programmable and accurate bidirectional folding (-70° to +90°) of various metal and dielectric thin films. Using this method, 3D helical optical antennas with different handedness, improved surface smoothness, and tunable geometries are fabricated, and the strong optical rotation effects of single helical antennas are demonstrated.

  17. Optical low-cost and portable arrangement for full field 3D displacement measurement using a single camera

    Science.gov (United States)

    López-Alba, E.; Felipe-Sesé, L.; Schmeer, S.; Díaz, F. A.

    2016-11-01

    In the current paper, an optical low-cost system for 3D displacement measurement based on a single camera and 3D digital image correlation (3D-DIC) is presented. The conventional 3D-DIC set-up, based on two synchronized cameras, is compared with a proposed pseudo-stereo portable system that employs a mirror assembly integrated in a single device, achieving a novel, handy and flexible instrument for use in many scenarios. The proposed optical system splits the image captured by the camera into two stereo views of the object. In order to validate this new approach and quantify its uncertainty compared to traditional 3D-DIC systems, rigid-body in-plane and out-of-plane displacement experiments have been performed and analyzed. The differences between both systems have been studied employing an image decomposition technique which performs a full-image comparison. Results over the full field of view are thus compared with those obtained using a stereoscopic 3D-DIC system, showing that the proposed device yields accurate results that are not influenced by distortion or aberration introduced by the mirrors. Finally, the adaptability and accuracy of the proposed system have been tested by performing quasi-static and dynamic experiments on a silicon specimen under high deformation. Results have been compared and validated against those obtained from a conventional stereoscopic system, showing an excellent level of agreement.

  18. Step-index optical fibre drawn from 3D printed preforms

    CERN Document Server

    Cook, Kevin; Canning, John; Chartier, Loic; Athanaze, Tristan; Hossain, Md Arafat; Han, Chunyang; Comatti, Jade-Edouard; Luo, Yanhua; Peng, Gang-Ding

    2016-01-01

    Optical fibre is drawn from a preform fabricated on a dual-head 3D printer from two optically transparent plastics with a high-index core (NA ~ 0.25, V > 60). The asymmetry observed in the fibre arises from asymmetry in the 3D printing process. The highly multimode optical fibre has losses, measured by cut-back, as low as α ~ 0.44 dB/cm in the near IR.
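For context, the quoted V > 60 follows the standard step-index relation V = 2πa·NA/λ; the core radius and wavelength used below are illustrative assumptions, not values from the paper:

```python
import math

def v_number(core_radius_um, na, wavelength_um):
    """Normalized frequency of a step-index fibre: V = 2*pi*a*NA/lambda.
    V >> 2.405 indicates a highly multimode fibre."""
    return 2.0 * math.pi * core_radius_um * na / wavelength_um

# e.g. an assumed 40 um core radius with NA = 0.25 at 1.0 um gives V ≈ 62.8,
# consistent with the V > 60 quoted in the abstract
print(v_number(40.0, 0.25, 1.0))
```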

  19. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for the development of three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by using a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-scan images with a resolution of 496×512 (each B-scan comprising 512 A-scans of 496 pixels each). Experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
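The boundary-enhancement step (a linear combination of a smoothed volume and a differential volume, followed by per-A-scan boundary detection) can be sketched as follows; the box filter, gradient operator, and mixing weights are assumptions for illustration, not the paper's exact filters:

```python
import numpy as np

def enhance_boundary(vol, alpha=0.5, beta=0.5):
    """Sketch of the boundary-surface enhancement: combine a smoothed
    copy of the OCT volume with an axial-derivative volume. `vol` is
    indexed (depth, rows, cols); alpha and beta are illustrative
    mixing weights."""
    # 3D box smoothing (3-voxel window along each axis, pure NumPy)
    sm = vol.astype(float)
    for ax in range(3):
        sm = (np.roll(sm, 1, ax) + sm + np.roll(sm, -1, ax)) / 3.0
    # differential filter along the depth (A-scan) axis
    diff = np.gradient(sm, axis=0)
    # linear combination emphasizes voxels near intensity transitions
    return alpha * sm + beta * diff

def detect_boundary(vol):
    """Preliminary discrete boundary: arg-max of the enhanced volume
    along each A-scan."""
    return np.argmax(enhance_boundary(vol), axis=0)
```

In the paper, these preliminary points are then regularized with surface-smoothness constraints and a dynamic threshold; that correction step is omitted here.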

  20. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Science.gov (United States)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy, with distance errors of 0.53 ± 0.30 mm.

  1. 3D multicolor super-resolution imaging offers improved accuracy in neuron tracing.

    Directory of Open Access Journals (Sweden)

    Melike Lakadamyali

    Full Text Available The connectivity among neurons holds the key to understanding brain function. Mapping neural connectivity in brain circuits requires imaging techniques with high spatial resolution to facilitate neuron tracing and high molecular specificity to mark different cellular and molecular populations. Here, we tested a three-dimensional (3D, multicolor super-resolution imaging method, stochastic optical reconstruction microscopy (STORM, for tracing neural connectivity using cultured hippocampal neurons obtained from wild-type neonatal rat embryos as a model system. Using a membrane specific labeling approach that improves labeling density compared to cytoplasmic labeling, we imaged neural processes at 44 nm 2D and 116 nm 3D resolution as determined by considering both the localization precision of the fluorescent probes and the Nyquist criterion based on label density. Comparison with confocal images showed that, with the currently achieved resolution, we could distinguish and trace substantially more neuronal processes in the super-resolution images. The accuracy of tracing was further improved by using multicolor super-resolution imaging. The resolution obtained here was largely limited by the label density and not by the localization precision of the fluorescent probes. Therefore, higher image resolution, and thus higher tracing accuracy, can in principle be achieved by further improving the label density.

  2. Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid.

    Directory of Open Access Journals (Sweden)

    Steven Bache

    Full Text Available Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS, the Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10-30K to $1-3K) and the use of a 'solid tank' (which reduces noise and shrinks the volume of refractive-index-matched fluid from 1 L to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated, including a simple four-field box treatment, a multiple-small-field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1-degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line profiles and of 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3mm dose difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. Typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regard to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system.
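The 3%/3mm gamma criterion used above combines a dose-difference test and a distance-to-agreement test; a minimal 1D sketch follows (a real gamma analysis is 3D and interpolates between grid points, so this is illustrative only):

```python
import numpy as np

def gamma_pass_rate(measured, reference, dx=1.0, dd=0.03, dta=3.0):
    """1D gamma-index sketch: `dd` is the dose-difference criterion as a
    fraction of the maximum reference dose (3% -> 0.03) and `dta` is the
    distance-to-agreement criterion in the same units as the grid
    spacing `dx` (3 mm -> 3.0). Returns the percentage of points with
    gamma <= 1."""
    x = np.arange(len(reference)) * dx
    dmax = reference.max()
    gammas = []
    for i, m in enumerate(measured):
        # gamma at point i: minimum combined dose/distance metric over
        # all reference points
        dist = (x - x[i]) / dta
        dose = (m - reference) / (dd * dmax)
        gammas.append(np.sqrt(dist ** 2 + dose ** 2).min())
    g = np.array(gammas)
    return 100.0 * (g <= 1.0).mean()
```

A perfectly matching profile passes at 100%, while a uniform 10% dose error fails everywhere under a 3% criterion.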

  3. 3-D neurohistology of transparent tongue in health and injury with optical clearing

    Directory of Open Access Journals (Sweden)

    Tzu-En Hua

    2013-10-01

    Full Text Available The tongue receives extensive innervation to perform taste, sensory, and motor functions. Details of the tongue's neuroanatomy and its plasticity in response to injury offer insights for investigating tongue neurophysiology and pathophysiology. However, due to the dispersed nature of the neural network, standard histology cannot provide a global view of the innervation. We prepared transparent mouse tongue by optical clearing to reveal the spatial features of the tongue innervation and its remodeling in injury. Immunostaining of neuronal markers, including PGP9.5 (pan-neuronal marker), calcitonin gene-related peptide (sensory nerves), tyrosine hydroxylase (sympathetic nerves), and vesicular acetylcholine transporter (cholinergic parasympathetic nerves and neuromuscular junctions), was combined with vessel painting and nuclear staining to label the tissue network and architecture. The tongue specimens were immersed in the optical-clearing solution to facilitate photon penetration for 3-dimensional (3-D) confocal microscopy. Taking advantage of the transparent tissue, we simultaneously revealed the tongue microstructure and innervation with subcellular-level resolution. 3-D projection of the papillary neurovascular complex and taste bud innervation was used to demonstrate the spatial features of the tongue mucosa and the panoramic imaging approach. In tongue injury induced by 4-nitroquinoline 1-oxide administration in the drinking water, we observed neural tissue remodeling in response to the changes of mucosal and muscular structures. Neural networks and neuromuscular junctions were both found rearranged in the peri-lesional region, suggesting nerve-lesion interactions in response to injury. Overall, this new tongue histological approach provides a useful tool for 3-D imaging of neural tissues to better characterize their roles with the mucosal and muscular components in health and disease.

  4. Influence of limited random-phase of objects on the image quality of 3D holographic display

    Science.gov (United States)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments were performed with 2D and 3D reconstructed images; objects with a limited phase range suppress the speckle noise in the reconstructed images effectively. Because of its effectiveness and simplicity, the method is expected to yield high-quality reconstructed images in future 2D and 3D displays.
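The underlying time-average idea can be sketched as follows: draw the object phase from a limited range and average the reconstructed intensities over many frames. The FFT stand-in propagator and all parameter values here are assumptions for illustration, not the authors' hologram-generation pipeline:

```python
import numpy as np

def averaged_reconstruction(amplitude, phase_range, n_frames=8, rng=None):
    """Simulate n_frames reconstructions of an object whose random phase
    is restricted to [0, phase_range] instead of [0, 2*pi], then average
    the intensities; speckle contrast drops as frames are averaged."""
    rng = np.random.default_rng(0) if rng is None else rng
    acc = np.zeros_like(amplitude, dtype=float)
    for _ in range(n_frames):
        phase = rng.uniform(0.0, phase_range, amplitude.shape)
        field = amplitude * np.exp(1j * phase)
        # far-field intensity of one frame (plain FFT as a stand-in
        # propagator for the actual holographic reconstruction)
        acc += np.abs(np.fft.fft2(field)) ** 2
    return acc / n_frames
```

The speckle contrast (standard deviation over mean of the intensity) of the averaged pattern falls roughly as the square root of the number of independent frames.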

  5. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    Science.gov (United States)

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves.
Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
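
    The zoned spiral pupil described above can be sketched numerically. The following is a minimal illustration in NumPy (function names and the normalized pupil coordinates are my own, not the authors' code): the mth zone's outer radius scales as √m and carries spiral charge m, and defocus is modelled as a quadratic pupil phase.

```python
import numpy as np

def zoned_spiral_pupil(n=256, zones=6):
    """Pupil sketch: M Fresnel zones, the m-th zone (outer radius
    proportional to sqrt(m)) carrying a spiral phase m*phi."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)            # azimuth from the x axis (dislocation line)
    phase = np.zeros((n, n))
    for m in range(1, zones + 1):
        zone = (r >= np.sqrt((m - 1) / zones)) & (r < np.sqrt(m / zones))
        phase[zone] = m * phi[zone]   # spiral charge increments zone to zone
    return np.where(r < 1, np.exp(1j * phase), 0), r

def psf(pupil, r, defocus_waves=0.0):
    """PSF for a given defocus, modelled as a quadratic pupil phase
    reaching `defocus_waves` waves at the pupil edge."""
    field = pupil * np.exp(2j * np.pi * defocus_waves * r**2)
    return np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
```

    Since defocus only multiplies the pupil by a unit-modulus phase, the total PSF energy is conserved while the lobe pattern rotates; consistent with the abstract, a full rotation is expected when the edge defocus phase changes by `zones` waves.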

  6. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto real images taken by a CCD camera; computer simulation results are presented.

  7. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the image-space-based matching results, we observed some blanks in the 3D point clouds. In the object-space-based matching results, we observed more blunders than in the image-space-based results, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
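
    The object-space matching step described above (candidate heights scored by grey-level correlation) can be sketched as follows. This is a toy illustration, not the authors' implementation: I assume a simplified orthographic model in which a candidate height h maps to a horizontal disparity gain*h between two images, and all function names are my own.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_height(left, right, row, col, heights, gain=1.0, win=5):
    """Object-space matching at one ground position: each candidate height
    is projected into both images (toy model: height h gives a horizontal
    disparity gain*h) and scored by grey-level correlation."""
    half = win // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    h_best, s_best = None, -np.inf
    for h in heights:
        dc = int(round(gain * h))                      # projected column offset
        patch = right[row - half:row + half + 1,
                      col + dc - half:col + dc + half + 1]
        if patch.shape != ref.shape:
            continue                                   # projection outside image
        s = ncc(ref, patch)
        if s > s_best:
            h_best, s_best = h, s
    return h_best, s_best
```

    With a random texture and its copy shifted by 3 pixels, the candidate height whose disparity is 3 wins with correlation close to 1, which is the essence of the height search.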

  8. 3D optical phase reconstruction within PMMA samples using a spectral OCT system

    Science.gov (United States)

    Briones-R., Manuel d. J.; De La Torre-Ibarra, Manuel H.; Mendoza Santoyo, Fernando

    2015-08-01

    The optical coherence tomography (OCT) technique has proved to be a useful method in biomedical areas such as ophthalmology, dentistry, and dermatology, among many others. In all these applications the main target is to reconstruct the internal structure of the samples, from which the physician's expertise may recognize and diagnose the existence of a disease. Nowadays OCT has been taken one step further and is used to study the mechanics of particular types of materials, where the resulting information involves more than just their internal structure, including the measurement of parameters such as displacement, stress and strain. Here we report on a spectral OCT system used to image the internal 3D microstructure and displacement maps of a PMMA (poly-methyl-methacrylate) sample subjected to deformation by controlled three-point bending and tilting. The internal mechanical response of the polymer is shown as consecutive 2D images.

  9. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Science.gov (United States)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream techniques, such as X-ray CT and MRI, reconstruct a 3D image from a set of slice images. However, these systems require large space and high cost. On the other hand, a low-cost and small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  10. DETERMINATION OF INTERNAL STRAIN IN 3-D BRAIDED COMPOSITES USING OPTIC FIBER STRAIN SENSORS

    Institute of Scientific and Technical Information of China (English)

    Yuan Shenfang; Huang Rui; Li Xianghua; Liu Xiaohui

    2004-01-01

    A reliable understanding of the properties of 3-D braided composites is of primary importance for proper utilization of these materials. A new method is introduced to study the mechanical performance of braided composite materials using embedded optic fiber sensors. Experimental research is performed to devise a method of incorporating optic fibers into a 3-D braided composite structure. The efficacy of this new testing method is evaluated on two counts. First, the optical performance of the optic fibers is studied before and after incorporation into the 3-D braided composites, as well as after completion of the manufacturing process, to validate the ability of the optic fibers to survive manufacturing; the influence of the incorporated optic fibers on the original braided composite is also investigated by tension and compression experiments. Second, two kinds of optic fiber sensors are co-embedded into 3-D braided composites to evaluate their respective ability to measure the internal strain. Experimental results show that multiple optic fiber sensors can be co-braided into 3-D braided composites to determine their internal strain, which is difficult to achieve with other existing methods.

  11. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community, driven by recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis. However, we are not able to detect subsurface defects; this kind of detection is achieved by other techniques, such as infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted simultaneously in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  12. Application of 3D Morphable Models to faces in video images

    NARCIS (Netherlands)

    van Rootseler, R.T.A.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    2011-01-01

    The 3D Morphable Face Model (3DMM) has been used for over a decade for creating 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients,

  13. 3D Image Acquisition System Based on Shape from Focus Technique

    Directory of Open Access Journals (Sweden)

    Pierre Gouton

    2013-04-01

    Full Text Available This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scene is difficult due to the complex nature of the scenes. Our system is based on the Shape from Focus technique initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain and we detail the system as well as the image processing used to perform such technique. The Shape from Focus technique is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting the multi-cameras systems. Indeed, this problem occurs frequently in natural complex scenes like agronomic scenes. The depth information is obtained by acting on optical parameters and mainly the depth of field. A focus measure is applied on a 2D image stack previously acquired by the system. When this focus measure is performed, we can create the depth map of the scene.
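
    The core of the Shape from Focus technique described above — applying a focus measure to a previously acquired 2D image stack and selecting, per pixel, the best-focused slice — can be sketched in a few lines. This is a generic illustration using the sum-modified-Laplacian focus measure, not the authors' system; function names are my own, and edges wrap around for simplicity.

```python
import numpy as np

def box_sum(img, half=2):
    """Local sum over a (2*half+1)^2 window via shifted copies (wraps at edges)."""
    out = np.zeros_like(img)
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure of a single slice."""
    lx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ly = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return box_sum(lx + ly)

def depth_from_focus(stack):
    """Depth map: index of the best-focused slice at each pixel."""
    focus = np.stack([modified_laplacian(s) for s in stack])
    return np.argmax(focus, axis=0)
```

    On a synthetic three-slice stack in which each slice is sharp in one horizontal band and blurred elsewhere, the argmax over the focus measure recovers the band index, i.e. the depth map of the scene.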

  14. Understanding fiber mixture by simulation in 3D Polarized Light Imaging.

    Science.gov (United States)

    Dohmen, Melanie; Menzel, Miriam; Wiese, Hendrik; Reckfort, Julia; Hanke, Frederike; Pietrzyk, Uwe; Zilles, Karl; Amunts, Katrin; Axer, Markus

    2015-05-01

    3D Polarized Light Imaging (3D-PLI) is a neuroimaging technique that has opened up new avenues to study the complex architecture of nerve fibers in postmortem brains. The spatial orientations of the fibers are derived from birefringence measurements of unstained histological brain sections that are interpreted by a voxel-based analysis. This, however, implies that a single fiber orientation vector is obtained for each voxel and reflects the net effect of all comprised fibers. The mixture of various fiber orientations within an individual voxel is a priori not accessible by a standard 3D-PLI measurement. In order to better understand the effects of fiber mixture on the measured 3D-PLI signal and to improve the interpretation of real data, we have developed a simulation method referred to as SimPLI. By means of SimPLI, it is possible to reproduce the entire 3D-PLI analysis starting from synthetic fiber models in user-defined arrangements and ending with measurement-like tissue images. For the simulation, each synthetic fiber is considered as an optical retarder, i.e., multiple fibers within one voxel are described by multiple retarder elements. The investigation of different synthetic crossing-fiber arrangements generated with SimPLI demonstrated that the derived fiber orientations are strongly influenced by the relative mixture of crossing fibers. In the case of perpendicularly crossing fibers, for example, the derived fiber direction corresponds to the predominant fiber direction. The derived fiber inclination turned out to be not only influenced by myelin density but also systematically overestimated due to signal attenuation. Similar observations were made for synthetic models of the optic chiasms of a human and a hooded seal, which were compared with experimental 3D-PLI data sets obtained from the chiasms of both species.
Our study showed that SimPLI is a powerful method able to test hypotheses on the underlying fiber structure of brain tissue and, therefore, to improve the
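
    The abstract's central idea — each fiber acts as an optical retarder, and multiple fibers in a voxel compose into one effective signal — can be sketched with Jones calculus. The code below is not SimPLI, and it uses a simplified circular-polariscope train (circularly polarised input, rotating linear analyser) rather than the exact 3D-PLI optical stack; for a single retarder this train yields the sinusoidal signal I(ρ) = (1 − sin δ · sin 2(ρ − φ))/2, from which the in-plane direction φ is recovered via the 2ρ Fourier component. All names are my own.

```python
import numpy as np

def retarder(direction, retardance):
    """Jones matrix of a linear retarder with its fast axis at `direction` (rad)."""
    c, s = np.cos(direction), np.sin(direction)
    rot = np.array([[c, s], [-s, c]])
    return rot.T @ np.diag([1.0, np.exp(1j * retardance)]) @ rot

def derived_direction(jones, n_rho=36):
    """Simulate a rotating-analyser measurement with circularly polarised
    input and recover the in-plane direction from the 2*rho harmonic."""
    e_out = jones @ (np.array([1.0, 1j]) / np.sqrt(2.0))
    rho = np.arange(n_rho) * np.pi / n_rho
    amp = np.cos(rho) * e_out[0] + np.sin(rho) * e_out[1]   # analyser at rho
    intensity = np.abs(amp)**2
    basis = np.column_stack([np.ones_like(rho), np.sin(2 * rho), np.cos(2 * rho)])
    _, cs, cc = np.linalg.lstsq(basis, intensity, rcond=None)[0]
    return 0.5 * np.arctan2(cc, -cs)   # I = (1 - sin(delta) sin 2(rho-phi)) / 2
```

    Composing two retarders for a voxel with a strong fiber at 0° and a weaker one at 70° yields a derived direction close to the predominant fiber, mirroring the abstract's observation about crossing-fiber mixtures.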

  15. Electro-optical measurements of 3D-stc detectors fabricated at ITC-irst

    Energy Technology Data Exchange (ETDEWEB)

    Zoboli, Andrea [INFN and Department of ICT, University of Trento, via Sommarive, 14 - 38050 Povo di Trento (Italy)], E-mail: zoboli@dit.unitn.it; Boscardin, Maurizio [ITC-irst, Microsystems Division, via Sommarive, 18 - 38050 Povo di Trento (Italy); Bosisio, Luciano [INFN and Department of Physics, University of Trieste, via A. Valerio, 2 - 34127 Trieste (Italy); Dalla Betta, Gian-Franco [INFN and Department of ICT, University of Trento, via Sommarive, 14 - 38050 Povo di Trento (Italy); Piemonte, Claudio; Pozza, Alberto; Ronchin, Sabina; Zorzi, Nicola [ITC-irst, Microsystems Division, via Sommarive, 18 - 38050 Povo di Trento (Italy)

    2007-12-11

    In the past two years 3D silicon radiation detectors have been developed at ITC-irst (Trento, Italy). As a first step toward full 3D devices, simplified structures featuring columnar electrodes of one doping type only were fabricated. This paper reports the electro-optical characterization of 3D test diodes made with this approach. Experimental results and TCAD simulations provide good insight into the charge collection mechanism and response speed limitation of these structures.

  16. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3-D refractive index maps

    CERN Document Server

    Kim, Kyoohyun

    2016-01-01

    Optical trapping can be used to manipulate the three-dimensional (3-D) motion of spherical particles based on the simple prediction of optical forces and the resulting motion of samples. However, controlling the 3-D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computation. Here, we achieved real-time optical control of arbitrarily shaped particles by combining wavefront shaping of a trapping beam with measurements of the 3-D refractive index (RI) distribution of samples. Engineering the 3-D light field distribution of a trapping beam based on the measured 3-D RI map of a sample generates a light mould, which can be used to manipulate colloidal and biological samples that have arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without a priori information about the sample geometry. The proposed method can ...

  17. Confocal line scanning of a Bessel beam for fast 3D imaging.

    Science.gov (United States)

    Zhang, P; Phipps, M E; Goodwin, P M; Werner, J H

    2014-06-15

    We have developed a light-sheet illumination microscope that can perform fast 3D imaging of transparent biological samples with inexpensive visible lasers and a single galvo mirror (GM). The light-sheet is created by raster scanning a Bessel beam with a GM, with this same GM also being used to rescan the fluorescence across a chip of a camera to construct an image in real time. A slit is used to reject out-of-focus fluorescence such that the image formed in real time has minimal contribution from the sidelobes of the Bessel beam. Compared with two-photon Bessel beam excitation or other confocal line-scanning approaches, our method is of lower cost, is simpler, and does not require calibration and synchronization of multiple GMs. We demonstrated the optical sectioning and out-of-focus background rejection capabilities of this microscope by imaging fluorescently labeled actin filaments in fixed 3T3 cells.

  18. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Science.gov (United States)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  19. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    Science.gov (United States)

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments which are orthogonal on a non-uniform lattice and whose polynomials are numerically stable to scaling, consequently producing a set of weighted orthonormal polynomials. The dual Hahn polynomials are the general case of the Tchebichef and Krawtchouk polynomials, and the orthogonality of dual Hahn moments eliminates the need for numerical approximation. The computational aspects and symmetry property of 3D weighted dual Hahn moments are discussed in detail. To address their lack of invariance for large 3D images, which leads to overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants, is presented, expressed as linear combinations of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed, providing a new set of 3D-GWDHMIs. In experimental studies, the local and global capability of the 3D-WDHMs for reconstruction of noise-free and noisy 3D images has been compared with other orthogonal moments such as 3D Tchebichef and 3D Krawtchouk moments using the Princeton Shape Benchmark database. For pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for pattern classification of 3D images with and without noise.
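
    The "linear combination of regular geometric moments" route mentioned above can be illustrated with the simplest case: 3D central geometric moments, normalised for translation and scale. This sketch is not the dual Hahn construction from the paper, only the standard geometric-moment building block it reduces to; function names are my own.

```python
import numpy as np

def central_moment(f, p, q, r):
    """Central geometric moment mu_pqr of a 3D voxel array f
    (computed about the intensity centroid, hence translation invariant)."""
    z, y, x = np.indices(f.shape)
    m000 = f.sum()
    cx, cy, cz = (x * f).sum() / m000, (y * f).sum() / m000, (z * f).sum() / m000
    return ((x - cx)**p * (y - cy)**q * (z - cz)**r * f).sum()

def normalized_moment(f, p, q, r):
    """Translation- and scale-normalised moment:
    eta_pqr = mu_pqr / mu_000**(1 + (p+q+r)/3)."""
    return central_moment(f, p, q, r) / f.sum()**(1.0 + (p + q + r) / 3.0)
```

    Translating a voxel blob leaves these normalised moments unchanged, which is the invariance property that the generalized moment invariants extend to scaling and rotation.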

  20. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01

    Citation fragments only (the record's abstract was not extracted cleanly): Circuits and Systems, vol. 1(2), 2007, pp. 116-127; O. Dandekar, C. Castro-Pareja, and R. Shekhar, "FPGA-based real-time 3D image...How low can we go?," presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505; C. R. Castro-Pareja, O. Dandekar, and R...; Venugopal, C. R. Castro-Pareja, and O. Dandekar, "An FPGA-based 3D image processor with median and convolution filters for real-time applications," in

  1. 3D Printing Optical Engine for Controlling Material Microstructure

    Science.gov (United States)

    Huang, Wei-Chin; Chang, Kuang-Po; Wu, Ping-Han; Wu, Chih-Hsien; Lin, Ching-Chih; Chuang, Chuan-Sheng; Lin, De-Yau; Liu, Sung-Ho; Horng, Ji-Bin; Tsau, Fang-Hei

    Controlling the cooling rate of an alloy during melting and resolidification is the most commonly used method for varying the material microstructure and consequently the resulting properties. However, the cooling rate of a selective laser melting (SLM) production run is restricted by a preset parameter optimized for a fully dense product, leaving little headroom for locally manipulating material properties within a process. In this study, we introduce an Optical Engine for locally controlling material microstructure in an SLM process. It provides an innovative way to control and adjust the thermal history of the solidification process to obtain the desired material microstructure and consequently to drastically improve quality. Process parameters are selected locally for specific material requirements according to designed characteristics, using thermodynamic principles of the solidification process. The approach utilizes complex laser beam shaping with adaptive irradiation profiles to permit local control of material characteristics as desired. This technology could be useful for industrial applications such as medical implants and the aerospace and automobile industries.

  2. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    Full Text Available Nature Methods 7, 339-340 (1 May 2010) | doi:10.1038/nmeth0510-339. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ. Ricardo Henriques, Mickael Lelek, Eugenio F. Fornasiero, Flavia Valtorta, Christophe Zimmer & Musa M...

  3. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, Maxime; Desvignes, Michel; Bailly, G.; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance, defined on the vertices, between the high-density mesh and a shared low-density mesh, in a multi-resolution approach. A principal component analysis is performed on the normalized registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...

  4. 3D Imaging Technology’s Narrative Appropriation in Cinema

    NARCIS (Netherlands)

    Kiss, Miklós; van den Oever, Annie; Fossati, Giovanna

    2016-01-01

    This chapter traces the cinematic history of stereoscopy by focusing on the contemporary dispute about the values of 3D technology, which are seen as either mere visual attraction or as a technique that perfects the cinematic illusion through increasing perceptual immersion. By taking a neutral stan

  5. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic, and the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
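
    The step from captured fringe-pattern images to shape data rests on phase recovery. The sketch below shows the standard four-step phase-shifting formula only — not the authors' optimum three-fringe-number pipeline (which additionally unwraps the phase) — on a synthetic fringe set with a known phase.

```python
import numpy as np

def four_step_wrapped_phase(i0, i1, i2, i3):
    """Standard four-step phase shifting: fringe images with phase offsets
    0, pi/2, pi, 3*pi/2 give the wrapped phase via an arctangent."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic demonstration: a smooth surface modulates the fringe phase.
x = np.linspace(0, 2 * np.pi, 200)
true_phase = 2.0 * np.sin(x)               # stays inside (-pi, pi): no wrapping
frames = [128 + 100 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
recovered = four_step_wrapped_phase(*frames)
```

    The recovered phase matches the true phase exactly here because it never leaves (−π, π); in a real system a phase-unwrapping step (e.g. via multiple fringe numbers, as in the paper) is needed before converting phase to height.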

  6. Display of travelling 3D scenes from single integral-imaging capture

    Science.gov (United States)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing at will the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of displayed 3D images and videos.
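
    Selecting the focused plane from a recorded integral image is commonly done by shift-and-add refocusing of the elemental images. The sketch below is a minimal generic illustration of that idea, not the authors' integral-to-plenoptic transform; names are my own, and translations wrap around for simplicity.

```python
import numpy as np

def refocus(elemental, shift):
    """Shift-and-add refocusing of a K x K grid of elemental images:
    each elemental image is translated proportionally to its lenslet
    index and the stack is averaged (wrap-around via np.roll)."""
    K = elemental.shape[0]
    acc = np.zeros(elemental.shape[2:])
    for i in range(K):
        for j in range(K):
            acc += np.roll(elemental[i, j], (-shift * i, -shift * j), axis=(0, 1))
    return acc / K**2
```

    For elemental images of a fronto-parallel textured plane (mutually shifted copies with disparity d), refocusing with shift = d realigns all views and recovers the texture sharply, while any other shift averages misaligned copies; sweeping the shift parameter selects the focused plane.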

  7. Rapid Prototyping across the Spectrum: RF to Optical 3D Electromagnetic Structures

    Science.gov (United States)

    2015-11-17

    Applications include microbiology, surveillance, energy harvesting, defense technology, and sensing platforms, to name a few [85, 86]. The structure of materials... [Report front matter: AFRL-RW-EG-TP-2015-002; "Rapid Prototyping across the Spectrum: RF to Optical 3D Electromagnetic Structures"; Jeffery W. Allen, Monica S. Allen, Brett...; 11-17-2015; Interim Report, Feb. 2012 - Dec. 2015.]

  8. A prototype fan-beam optical CT scanner for 3D dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Warren G.; Rudko, D. A.; Braam, Nicolas A.; Jirasek, Andrew [University of Victoria, Victoria, British Columbia V8P 5C2 (Canada); Wells, Derek M. [British Columbia Cancer Agency, Vancouver Island Centre, Victoria, British Columbia V8R 6V5 (Canada)

    2013-06-15

    Purpose: The objective of this work is to introduce a prototype fan-beam optical computed tomography scanner for three-dimensional (3D) radiation dosimetry. Methods: Two techniques of fan-beam creation were evaluated: a helium-neon laser (HeNe, λ = 543 nm) with a line-generating lens, and a laser diode module (LDM, λ = 635 nm) with a line-creating head module. Two physical collimator designs were assessed: a single-slot collimator and a multihole collimator. Optimal collimator depth was determined by observing the signal of a single photodiode with varying collimator depths. A method of extending the dynamic range of the system is presented. Two sample types were used for evaluations: nondosimetric absorbent solutions and irradiated polymer gel dosimeters, each housed in 1 liter cylindrical plastic flasks. Imaging protocol investigations were performed to address ring artefacts and image noise. Two image artefact removal techniques were performed in sinogram space. Collimator efficacy was evaluated by imaging highly opaque samples of scatter-based and absorption-based solutions. A noise-based flask registration technique was developed. Two protocols for gel manufacture were examined. Results: The LDM proved advantageous over the HeNe laser due to its reduced noise. Also, the LDM uses a wavelength more suitable for the PRESAGE™ dosimeter. A collimator depth of 1.5 cm was found to be an optimal balance between scatter rejection, signal strength, and ease of manufacture. The multihole collimator is capable of maintaining accurate scatter rejection to high levels of opacity with scatter-based solutions (T < 0.015%). Imaging protocol investigations support the need for preirradiation and postirradiation scanning to reduce reflection-based ring artefacts and to accommodate flask imperfections and gel inhomogeneities. Artefact removal techniques in sinogram space eliminate streaking artefacts and reduce ring artefacts of up to ~40% in magnitude. The

  9. Monopulse radar 3-D imaging and application in terminal guidance radar

    Science.gov (United States)

    Xu, Hui; Qin, Guodong; Zhang, Lina

    2007-11-01

    Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement and 3-D imaging processing to obtain a 3-D image that reflects the real size of a target; any two of the three measurement parameters, namely azimuth difference beam, elevation difference beam and radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced, the effect of target attitude changes (including yaw, pitch, roll and movement of the target itself) on 3-D imaging and 3-D motion compensation based on the chirp rate μ and Doppler frequency f d are analyzed, and the application of monopulse radar 3-D imaging to terminal guidance radars is forecast. The computer simulation results show that monopulse radar 3-D imaging has apparent advantages in distinguishing a target from overside interference and in precise assault on a vital part of a target, and is of great importance for terminal guidance radars.

  10. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of the system.

  12. A flexible new method for 3D measurement based on multi-view image sequences

    Science.gov (United States)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is the basis of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which is more robust for weakly textured images; then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filtering method. A single-view point cloud is accurately constructed from two view images; after this, the overlapped features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
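
    The Hellinger comparison of descriptor histograms mentioned above can be sketched in a few lines; this is our own minimal NumPy illustration (function and variable names are ours, not the authors'):

```python
import numpy as np

def hellinger_distance(h1, h2):
    """Hellinger distance between two non-negative histograms."""
    p = h1 / h1.sum()                    # L1-normalise each histogram
    q = h2 / h2.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))

a = np.array([4.0, 1.0, 0.0, 3.0])
b = np.array([0.0, 0.0, 5.0, 0.0])
print(hellinger_distance(a, a))  # identical histograms -> 0.0
print(hellinger_distance(a, b))  # disjoint histograms  -> 1.0
```

    Unlike the Euclidean distance, this comparison is dominated less by a few large bins, which is why it behaves better on weakly textured patches.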

  13. Optical security and anti-counterfeiting using 3D screen printing

    Science.gov (United States)

    Wu, W. H.; Yang, W. K.; Cheng, S. H.; Kuo, M. K.; Lee, H. W.; Chang, C. C.; Jeng, G. R.; Liu, C. P.

    2007-04-01

    This work presents a novel method for optical decryption key production by screen printing technology. The key is mainly used to decrypt encoded information hidden inside documents containing Moiré patterns and integral photographic 3D auto-stereoscopic images as a second-line security feature. The proposed method can also be applied as an anti-counterfeiting measure in artistic screening. Decryption is performed by matching the correct angle between the decoding key and the document with a text or a simple geometric pattern. This study presents the theoretical analysis and experimental results of decoding key production with the best parameter combination of Moiré pattern size and screen printing elements. Experimental results reveal that the proposed method can be applied in anti-counterfeit document design for the fast and low-cost production of decryption keys.

  14. 3D live fluorescence imaging of cellular dynamics using Bessel beam plane illumination microscopy.

    Science.gov (United States)

    Gao, Liang; Shao, Lin; Chen, Bi-Chang; Betzig, Eric

    2014-05-01

    3D live imaging is important for a better understanding of biological processes, but it is challenging with current techniques such as spinning-disk confocal microscopy. Bessel beam plane illumination microscopy allows high-speed 3D live fluorescence imaging of living cellular and multicellular specimens with nearly isotropic spatial resolution, low photobleaching and low photodamage. Unlike conventional fluorescence imaging techniques, which usually have a single operation mode, Bessel plane illumination has several modes that offer different trade-offs among imaging metrics. To achieve optimal results from this technique, the appropriate operation mode needs to be selected and the experimental settings must be optimized for the specific application and associated sample properties. Here we explain the fundamental working principles of this technique, discuss the pros and cons of each operational mode and show through examples how to optimize experimental parameters. We also describe the procedures needed to construct, align and operate a Bessel beam plane illumination microscope by using our previously reported system as an example, and we list the necessary equipment to build such a microscope. Assuming all components are readily available, it would take a person skilled in optical instrumentation ∼1 month to assemble and operate a microscope according to this protocol.

  15. MultiFocus Polarization Microscope (MF-PolScope) for 3D polarization imaging of up to 25 focal planes simultaneously.

    Science.gov (United States)

    Abrahamsson, Sara; McQuilken, Molly; Mehta, Shalin B; Verma, Amitabh; Larsch, Johannes; Ilic, Rob; Heintzmann, Rainer; Bargmann, Cornelia I; Gladfelter, Amy S; Oldenbourg, Rudolf

    2015-03-23

    We have developed an imaging system for 3D time-lapse polarization microscopy of living biological samples. Polarization imaging reveals the position, alignment and orientation of submicroscopic features in label-free as well as fluorescently labeled specimens. Optical anisotropies are calculated from a series of images where the sample is illuminated by light of different polarization states. Due to the number of images necessary to collect both multiple polarization states and multiple focal planes, 3D polarization imaging is most often prohibitively slow. Our MF-PolScope system employs multifocus optics to form an instantaneous 3D image of up to 25 simultaneous focal-planes. We describe this optical system and show examples of 3D multi-focus polarization imaging of biological samples, including a protein assembly study in budding yeast cells.

  16. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    Science.gov (United States)

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  17. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    Science.gov (United States)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
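
    The regional covariance descriptor at the heart of this approach is simple to compute once 2D and 3D feature maps are stacked per pixel; below is our own minimal sketch (the specific feature choices are illustrative assumptions, not the paper's):

```python
import numpy as np

def region_covariance(region_features):
    """Covariance descriptor of an image region.

    region_features: (N, d) array with one d-dimensional feature vector
    per pixel (e.g. intensity, gradient magnitude, depth/disparity).
    Returns the (d, d) covariance matrix, which captures the
    inter-correlation of the feature dimensions.
    """
    f = region_features - region_features.mean(axis=0, keepdims=True)
    return f.T @ f / (region_features.shape[0] - 1)

rng = np.random.default_rng(0)
feats = rng.random((500, 3))             # 500 pixels, 3 features each
C = region_covariance(feats)
print(C.shape)                           # (3, 3)
print(np.allclose(C, C.T))               # covariance is symmetric: True
```

    The off-diagonal entries are what make the fusion nonlinear: they encode how 2D and depth features co-vary inside a region, rather than treating each feature channel independently.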

  18. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images.

  19. Novel metrics and methodology for the characterisation of 3D imaging systems

    Science.gov (United States)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Lohse, Niels; Jackson, Michael R.

    2017-04-01

    The modelling, benchmarking and selection process for non-contact 3D imaging systems relies on the ability to characterise their performance. Characterisation methods that require optically compliant artefacts, such as matt white spheres or planes, fail to reveal the performance limitations of a 3D sensor as would be encountered when measuring a real-world object with a problematic surface finish. This paper reports a method of evaluating the performance of 3D imaging systems on surfaces of arbitrary isotropic surface finish, position and orientation. The method involves capturing point clouds from a set of samples in a range of surface orientations and distances from the sensor. Point clouds are processed to create a single performance chart per surface finish, which shows both whether a point is likely to be recovered and the expected point noise as a function of surface orientation and distance from the sensor. In this paper, the method is demonstrated using a low-cost pan-tilt table and an active stereo 3D camera. Its performance is characterised by the fraction and quality of recovered data points on aluminium isotropic surfaces ranging in roughness average (Ra) from 0.09 to 0.46 μm, at angles of up to 55° relative to the sensor, over distances from 400 to 800 mm from the scanner. Results from a matt white surface similar to those used in previous characterisation methods contrast drastically with results from even the dullest aluminium sample tested, demonstrating the need to characterise sensors by their limitations, not just best-case performance.
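
    Building such a performance chart amounts to binning per-point results by surface orientation and stand-off distance; a rough NumPy sketch under our own assumptions (not the paper's code — here a NaN noise value marks a point the sensor failed to recover):

```python
import numpy as np

def performance_chart(angle, dist, noise, angle_edges, dist_edges):
    """Per (angle, distance) cell: fraction of points recovered and the
    mean noise of the recovered ones."""
    ia = np.digitize(angle, angle_edges) - 1
    idist = np.digitize(dist, dist_edges) - 1
    shape = (len(angle_edges) - 1, len(dist_edges) - 1)
    frac = np.zeros(shape)
    mean_noise = np.full(shape, np.nan)
    for i in range(shape[0]):
        for j in range(shape[1]):
            sel = (ia == i) & (idist == j)
            if sel.any():
                ok = ~np.isnan(noise[sel])
                frac[i, j] = ok.mean()          # fraction recovered
                if ok.any():
                    mean_noise[i, j] = noise[sel][ok].mean()
    return frac, mean_noise

angle = np.array([10.0, 10.0, 50.0, 50.0])      # degrees
dist = np.array([500.0, 500.0, 500.0, 500.0])   # mm
noise = np.array([0.10, np.nan, 0.20, 0.30])    # NaN = not recovered
frac, mean_noise = performance_chart(angle, dist, noise,
                                     angle_edges=[0, 30, 60],
                                     dist_edges=[400, 800])
print(frac)        # fraction of recovered points per cell
print(mean_noise)  # mean noise of recovered points per cell
```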

  20. Infrared imaging of the polymer 3D-printing process

    Science.gov (United States)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
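
    Temperature-decay measurements of this kind are commonly summarised by fitting an exponential cooling model; a small sketch with synthetic data (the Newtonian-cooling model form and the ABS-like numbers are our illustrative assumptions, not the paper's measured values):

```python
import numpy as np

# Newtonian cooling toward ambient: T(t) = T_env + (T0 - T_env) * exp(-t / tau)
t = np.linspace(0.0, 60.0, 50)              # time after deposition, s
T_env, T0, tau = 25.0, 230.0, 18.0          # ambient degC, extrusion degC, s
T = T_env + (T0 - T_env) * np.exp(-t / tau)

# Linearise: log(T - T_env) = log(T0 - T_env) - t / tau, then least squares.
slope, intercept = np.polyfit(t, np.log(T - T_env), 1)
print(round(-1.0 / slope, 3))               # recovered time constant: 18.0
print(round(np.exp(intercept) + T_env, 1))  # recovered initial temp: 230.0
```

    Fitting tau per layer is one way to estimate whether the substrate is still above the glass transition temperature when the next bead arrives.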

  1. Reconstruction of 3d Digital Image of Weepingforsythia Pollen

    Science.gov (United States)

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, a major advance upon normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized deeply and three-dimensional images created. Compared with conventional microscopes, a confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, a confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is straightforward to analyze the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).
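
    Assembling a confocal slice series into a 3D volume is essentially a stacking operation; a toy sketch with synthetic slices standing in for the 35 pollen images (our illustration only):

```python
import numpy as np

# 35 optical sections (here synthetic), each a 2D grayscale image
rng = np.random.default_rng(1)
slices = [rng.random((64, 64)) for _ in range(35)]

volume = np.stack(slices, axis=0)        # (z, y, x) voxel grid
mip = volume.max(axis=0)                 # maximum-intensity projection

print(volume.shape)  # (35, 64, 64)
print(mip.shape)     # (64, 64)
```

    Once stacked, the voxel grid can be rendered or projected (the maximum-intensity projection above is the simplest visualisation of such a stack).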

  2. D3D augmented reality imaging system: proof of concept in mammography.

    Science.gov (United States)

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  3. Rapid reconstruction of 3D neuronal morphology from light microscopy images with augmented rayburst sampling.

    Science.gov (United States)

    Ming, Xing; Li, Anan; Wu, Jingpeng; Yan, Cheng; Ding, Wenxiang; Gong, Hui; Zeng, Shaoqun; Liu, Qian

    2013-01-01

    Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis.

  5. 3-D Target Location from Stereoscopic SAR Images

    Energy Technology Data Exchange (ETDEWEB)

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

  6. Multi-layer 3D imaging using a few viewpoint images and depth map

    Science.gov (United States)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens stacked in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. The image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images that display a 3D image. Because the number of viewpoint images is limited, the viewing area that allows stereoscopic viewing becomes narrow. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a ±20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so we can generate motion parallax at the same time.
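
    A toy version of an iterative shift-and-subtract layer decomposition is sketched below. This is our own additive two-layer model on 1-D signals, intended only to convey the alternating-update idea; the paper's actual procedure and display model may differ:

```python
import numpy as np

def shift(x, s):
    """Parallax shift of a layer as seen from a viewpoint (circular here)."""
    return np.roll(x, s)

def decompose(views, shifts, n_iter=500):
    """Alternately update a front and a rear layer so that
    front + shift(rear, s) approximates the view from each direction."""
    front = np.zeros_like(views[0])
    rear = np.zeros_like(views[0])
    for _ in range(n_iter):
        front = np.clip(np.mean([v - shift(rear, s)
                                 for v, s in zip(views, shifts)], axis=0), 0, 1)
        rear = np.clip(np.mean([shift(v - front, -s)
                                for v, s in zip(views, shifts)], axis=0), 0, 1)
    return front, rear

# three viewpoint images rendered from two known layers
rng = np.random.default_rng(2)
f_true = rng.random(32) * 0.5
r_true = rng.random(32) * 0.5
shifts = [-1, 0, 1]
views = [f_true + shift(r_true, s) for s in shifts]

front, rear = decompose(views, shifts)
err = max(np.abs(front + shift(rear, s) - v).max()
          for v, s in zip(views, shifts))
print(err)  # residual reproduction error across the three views
```

    Each half-step is an exact least-squares update for one layer given the other, so the reproduction error of the viewpoint images decreases over the iterations.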

  7. System design for 3D wound imaging using low-cost mobile devices

    Science.gov (United States)

    Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    The state-of-the-art method of wound assessment is a manual, imprecise and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, large time consumption and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography and hyperspectral imaging, as well as devices relying on structured-light sensors, make accurate wound assessment possible. However, such methods have limitations due to high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or on the server to reconstruct wounds in 3D as dense models, which are generated from images taken with the built-in single camera of a smartphone. The system architecture includes imaging (smartphone), processing (smartphone or PACS) and storage (PACS) devices. It supports tracking over time by alignment of 3D models, color correction using a reference color card placed into the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume and rate of healing, as well as qualitative characteristics such as color, presence of necrosis and type of involved tissue.

  8. Efficient RPG detection in noisy 3D image data

    Science.gov (United States)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data, as well as discrete point invariant techniques to perform a real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.
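
    Jump-boundary matching of this kind builds on chain codes. Below is a plain 8-direction Freeman chain code over an ordered boundary, as a minimal illustration of the underlying encoding; the paper's "range-scaled" variant (which would resample the boundary with a step proportional to measured range) is not reproduced here:

```python
# 8-connected step directions, indexed 0..7 counter-clockwise from "east"
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Freeman chain code of an ordered list of 8-connected pixels
    given as (row, col) pairs."""
    return [DIRS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(points[:-1], points[1:])]

# a small staircase boundary: east, north-east, east, north
boundary = [(5, 0), (5, 1), (4, 2), (4, 3), (3, 3)]
print(chain_code(boundary))  # [0, 1, 0, 2]
```

    Because the code records only step directions, it is translation-invariant, which is what makes chain-code sequences convenient for matching boundary shapes between a model and a scene.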

  9. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging using Fully Addressed and Row-Column Addressed 2-D Transducer Arrays

    DEFF Research Database (Denmark)

    Bouzari, Hamed

    … in many clinical applications. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D ultrasound imaging. Two limiting factors have traditionally been the low image quality as well as the low volume rate achievable with a 2-D transducer array using the conventional 3-D … with transducer arrays using this addressing scheme, when integrated into probe handles. For that reason, two in-house prototyped 62+62 row-column addressed 2-D array transducer probes were manufactured using capacitive micromachined ultrasonic transducer (CMUT) and piezoelectric transducer (PZT) technology … and measurements with the ultrasound research scanner SARUS and a 3.8 MHz 1024-element 2-D transducer array. In all investigations, 3-D synthetic aperture imaging achieved better resolution, lower side-lobes, higher contrast, and a better signal-to-noise ratio than parallel beamforming. This is achieved partly …

  10. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Yun, G. S., E-mail: gunsu@postech.ac.kr; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H. [Department of Physics, Pohang University of Science and Technology, Pohang 790-784 (Korea, Republic of); Lee, W.; Park, H. K. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Park, H.; Woo, D. S.; Kim, K. W. [School of Electrical Engineering, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California, Davis, California 95616 (United States); Ito, N. [KASTEC, Kyushu University, Kasuga-shi, Fukuoka 812-8581 (Japan); Mase, A. [Ube National College of Technology, Ube-shi, Yamaguchi 755-8555 (Japan); Lee, S. G. [National Fusion Research Institute, Daejeon 305-333 (Korea, Republic of)

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of KSTAR operation (B₀ = 1.7∼3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components, including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd-harmonic extraordinary ECE.

  11. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    Full Text Available The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform has been introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.

  12. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    Science.gov (United States)

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding needs to be achieved due to the subtle nature of PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data was utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as the PSI group and the control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D-printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of patients, it was found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there was a significant improvement in achieving the desired implant position (p < 0.05) for PSI implantation compared against the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  13. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  14. Evaluation of stereoscopic 3D displays for image analysis tasks

    Science.gov (United States)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance an image analyst's ability to detect or identify certain objects of interest, resulting in higher performance. The change of image acquisition from analog to digital techniques entailed a corresponding change of stereoscopic visualisation techniques. Recently, various digital stereoscopic display techniques have appeared on the market at affordable prices. At Fraunhofer IITB, usability tests were carried out to find out (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques), two of the examined stereoscopic display technologies were found to be very good and appropriate.

  15. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    Science.gov (United States)

    Calafiore, Giuseppe; Koshelev, Alexander; Allen, Frances I.; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-09-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three-dimensional structure achieved by direct nanoimprint lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable a high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted and optically characterized. Scanning electron microscopy and optical measurements confirmed the good lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enable advancements in areas such as integrated optics and sensing, achieving enhanced portability and versatility of fiber optic components.

  17. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. Based on the characteristics and 3-Dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volumes, a new fusion approach using the 3-D wavelet transform is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration, and 3-D inverse wavelet transform. In particular, a novel method, the Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image. A new fusion rule, the Average and Substitution (A&S) rule, is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than approaches using the 2-D wavelet transform. It is also shown that the RIBSR method interpolates missing data more effectively and correctly, and that the A&S rule integrates coefficients of the source images in the 3-D wavelet domain so as to preserve both spatial and spectral features of the source images more properly.
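
    The Average-and-Substitution (A&S) rule can be illustrated with a one-level 1-D Haar transform, used here as a minimal stand-in for the paper's full 3-D wavelet pipeline; the function names and the choice of which source supplies the detail coefficients are illustrative assumptions, not the authors' implementation.

```python
import math

def haar_fwd(x):
    """One-level Haar DWT: split a signal of even length into
    approximation and detail coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inv(approx, detail):
    """Exact inverse of haar_fwd."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def fuse_average_substitution(band_a, band_b):
    """A&S rule sketch: average the approximation coefficients of the two
    source bands, substitute the detail coefficients of band_a wholesale,
    and invert the transform to obtain the fused band."""
    a_a, d_a = haar_fwd(band_a)
    a_b, _ = haar_fwd(band_b)
    a_fused = [(u + v) / 2.0 for u, v in zip(a_a, a_b)]
    return haar_inv(a_fused, d_a)
```

    In the full method the same averaging/substitution choice is applied per coefficient subband of the 3-D transform rather than to 1-D signals.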

  18. Review of three-dimensional (3D) surface imaging for oncoplastic, reconstructive and aesthetic breast surgery.

    Science.gov (United States)

    O'Connell, Rachel L; Stevens, Roger J G; Harris, Paul A; Rusby, Jennifer E

    2015-08-01

    Three-dimensional surface imaging (3D-SI) is being marketed as a tool in aesthetic breast surgery. It has recently also been studied in the objective evaluation of the cosmetic outcome of oncological procedures. The aim of this review is to summarise the use of 3D-SI in oncoplastic, reconstructive and aesthetic breast surgery. An extensive literature review was undertaken to identify published studies. Two reviewers independently screened all abstracts and selected relevant articles using specific inclusion criteria. Seventy-two articles relating to 3D-SI for breast surgery were identified. These covered endpoints such as image acquisition, calculations and obtainable data, comparison of 3D and 2D imaging, and clinical research applications of 3D-SI. The literature provides a favourable view of 3D-SI. However, evidence of its superiority over current methods of clinical decision making, surgical planning, communication and evaluation of outcome is required before it can be accepted into mainstream practice.

  19. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Kohki [Tokyo Univ. (Japan). Inst. of Medical Science; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori

    1995-12-01

    Thanks to remarkable progress in fast MR imaging techniques and analysis tools, it has recently become easy to analyze information obtained by 3D MR imaging. Six patients suffering from aphasia (4 cerebral infarctions and 2 bleedings) underwent 3D MR imaging (3D FLASH-TR/TE/flip angle: 20-50 msec/6-10 msec/20-30 degrees), and the volume information was analyzed by multiple projection reconstruction (MPR), surface rendering 3D reconstruction, and volume rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four of them were diagnosed clinically with Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed with Wernicke's aphasia, and their lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis provides quite exact locational information about cerebral cortical lesions. (author).

  20. Building Extraction from DSM Acquired by Airborne 3D Image

    Institute of Scientific and Technical Information of China (English)

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regularization are studied in depth in this paper to extract buildings from DSM data. Building segmentation is the first step of building extraction, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract building contours effectively. A sub-image (such as 50×50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, then the threshold of the sub-image is selected using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level, and the ratio mean square of the sub-image. The edge points of a building are grouped according to the azimuth of neighboring points, and the optimal azimuth of the points belonging to the same group can then be calculated using line interpolation.
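
    The per-sub-image threshold selection can be sketched with the classic mean-split iteration (Ridler-Calvard style); the paper's ratio-mean-square weighting is omitted, so this is an illustrative assumption rather than the authors' exact procedure.

```python
def iterative_threshold(pixels, tol=0.5):
    """Iterative threshold selection: start from the global mean, then
    repeatedly move the threshold to the midpoint of the mean gray levels
    of the two classes it induces, until it stabilises."""
    t = sum(pixels) / len(pixels)
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:
            return t
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

    Applied per sub-image, the resulting threshold separates building roofs from ground wherever the local histogram is bimodal.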

  1. Online reconstruction of 3D magnetic particle imaging data

    Science.gov (United States)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
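
    The adaptive block-averaging step can be sketched as a small streaming stage in front of the reconstruction; the class and callback names below are hypothetical, and the actual framework adapts the block size to the available reconstruction time.

```python
class BlockAveragingReconstructor:
    """Streaming sketch: average raw-data frames in blocks before handing
    them to a (slower) reconstruction routine, trading temporal resolution
    for signal quality."""

    def __init__(self, block_size, reconstruct):
        self.block_size = block_size
        self.reconstruct = reconstruct  # e.g. a regularized system-matrix solve
        self.buffer = []

    def push_frame(self, frame):
        """Feed one raw-data frame; return a reconstructed image once a
        full block has been averaged, else None."""
        self.buffer.append(frame)
        if len(self.buffer) < self.block_size:
            return None
        n = len(self.buffer)
        averaged = [sum(v) / n for v in zip(*self.buffer)]
        self.buffer.clear()
        return self.reconstruct(averaged)
```

    Larger blocks raise SNR at the cost of display latency, which is the trade-off the adaptive scheme balances.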

  2. Imaging of discontinuities in nonlinear 3-D seismic inversion

    Energy Technology Data Exchange (ETDEWEB)

    Carrion, P.M.; Cerveny, V. (PPPG/UFBA, Salvador (Brazil))

    1990-09-01

    The authors present a nonlinear approach for the reconstruction of discontinuities in a geological environment (the earth's crust, say). The advantage of the proposed method is that it is not limited to the Born approximation (small angles of propagation and weak scatterers). One can expect significantly better images, since larger apertures including wide-angle reflection arrivals can be incorporated into the imaging operator. In this paper, they treat only compressional body waves; shear and surface waves are considered noise.

  3. Design of extended viewing zone at autostereoscopic 3D display based on diffusing optical element

    Science.gov (United States)

    Kim, Min Chang; Hwang, Yong Seok; Hong, Suk-Pyo; Kim, Eun Soo

    2012-03-01

    In this paper, to realize a glasses-free 3D display as the next step beyond current glasses-type 3D displays, a viewing-zone design for a 3D display using a DOE (Diffusing Optical Element) is suggested. The viewing zone of the proposed method is larger than that of current parallax barrier or lenticular methods. The proposed method is shown to enable expansion and adjustment of the viewing zone according to viewing distance.

  4. GOTHIC CHURCHES IN PARIS: ST GERVAIS ET ST PROTAIS. IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow:
    - theoretical study about the geometrical configuration of rib vault systems;
    - 3D model based on theoretical hypotheses about the geometric definition of the vaults' form;
    - 3D model based on image matching 3D reconstruction methods;
    - comparison between the 3D theoretical model and the 3D model based on image matching.

  5. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments like the operating room. Here, manifold medical disciplines cause a great variety of procedures and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk of causing a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves the system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, giving a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even higher-risk processes could be controlled with the proposed interface, as such interfaces become safer and more direct.
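
    The usability figure refers to Brooke's System Usability Scale; its standard scoring, which the study presumably used, can be sketched as:

```python
def sus_score(responses):
    """System Usability Scale (Brooke): 10 items rated 1-5. Odd-numbered
    items contribute (rating - 1), even-numbered items contribute
    (5 - rating); the sum is scaled by 2.5 onto a 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5
```

    A score of 79 thus sits well above the commonly cited average of 68, consistent with the study's conclusion.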

  6. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This is verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors (binocular combination and binocular frequency integration) are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency can be reached between the measured MOS and the proposed metrics, with a correlation coefficient of up to 0.88. Furthermore, we found that the proposed metrics also address the quality assessment of synthesized color-plus-depth 3D images well. It is therefore our belief that binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.
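
    The consistency figure quoted above is a correlation coefficient between the objective metric and the MOS; a minimal sketch of the Pearson form of that check (variable names are illustrative):

```python
import math

def pearson(objective_scores, mos_values):
    """Pearson correlation coefficient between objective quality scores
    and subjectively measured mean opinion scores (MOS)."""
    n = len(objective_scores)
    mx = sum(objective_scores) / n
    my = sum(mos_values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(objective_scores, mos_values))
    sx = math.sqrt(sum((x - mx) ** 2 for x in objective_scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in mos_values))
    return cov / (sx * sy)
```

    Benchmark practice often fits a nonlinear mapping between metric and MOS before correlating; that refinement is omitted here.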

  7. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    Science.gov (United States)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, has to the best of our knowledge not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).

  8. Radar Imaging of Spheres in 3D using MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
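
    The SVD-plus-MUSIC-functional recipe can be sketched in a simplified 2-D scalar (Born) setting; the 8-element array, single scatterer, and free-space Green's function below are illustrative assumptions and do not include the authors' ground-reflection or beam-pattern normalization steps.

```python
import numpy as np

def greens_vector(sensors, point, k):
    """Scalar free-space response from a point to each sensor (2-D)."""
    d = np.linalg.norm(sensors - point, axis=1)
    return np.exp(1j * k * d) / d

def music_pseudospectrum(K, sensors, grid, k, n_signal):
    """MUSIC: SVD of the multistatic response matrix K, projection of
    normalized steering vectors onto the noise subspace, and inversion
    of the residual norm."""
    U, _s, _Vh = np.linalg.svd(K)
    noise = U[:, n_signal:]          # columns spanning the noise subspace
    values = []
    for p in grid:
        g = greens_vector(sensors, p, k)
        g = g / np.linalg.norm(g)
        resid = np.linalg.norm(noise.conj().T @ g)
        values.append(1.0 / (resid ** 2 + 1e-12))
    return np.array(values)

# Hypothetical setup: 8-element linear array along y = 0, one scatterer.
sensors = np.stack([np.linspace(-2.0, 2.0, 8), np.zeros(8)], axis=1)
target = np.array([0.3, 3.0])
k = 2.0 * np.pi                      # wavenumber for unit wavelength
g_t = greens_vector(sensors, target, k)
K = np.outer(g_t, g_t)               # Born-approximation response matrix
grid = [np.array([x, 3.0]) for x in np.linspace(-1.0, 1.0, 41)]
spectrum = music_pseudospectrum(K, sensors, grid, k, n_signal=1)
```

    The pseudospectrum peaks sharply where the steering vector falls entirely inside the signal subspace, i.e. at the scatterer location.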

  9. 3D printing optical watermark algorithms based on the combination of DWT and Fresnel transformation

    Science.gov (United States)

    Hu, Qi; Duan, Jin; Zhai, Di; Wang, LiNing

    2016-10-01

    With the continuous development of industrialization, 3D printing technology is gradually stepping into individuals' lives; however, the consequent security issue has become an urgent problem. This paper proposes a 3D printing optical watermark algorithm based on the combination of the DWT and the Fresnel transformation, and utilizes an authorization key to restrict 3D model printing permissions. First, the algorithm applies an affine transform to the 3D model and takes the distances from the center of gravity to the vertices of the 3D object to generate a one-dimensional discrete signal; this signal is then wavelet transformed, and the transformed coefficients are put through the Fresnel transformation. A mathematical model is used to embed the watermark information, finally generating the watermarked 3D digital model. The scheme was developed and tested with VC++.NET and the DIRECTX 9.0 SDK, and the results show that, in a fixed affine space, it achieves robustness to translation, rotation and scaling of the 3D model, as well as good watermark invisibility. The security and authorization of 3D models are thereby protected effectively.
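
    The shape signature feeding the DWT is the centroid-to-vertex distance signal; a minimal sketch of that first step (vertex ordering and the subsequent DWT/Fresnel embedding model are omitted, and the function names are illustrative):

```python
import math

def centroid(vertices):
    """Center of gravity of a vertex cloud (uniform vertex weights assumed)."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def distance_signal(vertices):
    """One-dimensional discrete signal formed by the distance from the
    model's center of gravity to each vertex; this is the signal the
    algorithm then passes through the wavelet and Fresnel transforms."""
    c = centroid(vertices)
    return [math.dist(v, c) for v in vertices]
```

    Because centroid-to-vertex distances are invariant under translation and rotation, this signature is a natural carrier for a watermark that must survive such transforms.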

  10. Large core plastic planar optical splitter fabricated by 3D printing technology

    Science.gov (United States)

    Prajzler, Václav; Kulha, Pavel; Knietel, Marian; Enser, Herbert

    2017-10-01

    We report on the design, fabrication and optical properties of a large-core multimode polymer optical splitter fabricated by filling core polymer into a substrate made by 3D printing technology. The splitter was designed by the beam propagation method and is intended for coupling to large-core waveguide fibers with a 735 μm diameter. The waveguide core layers were made of an optically clear liquid adhesive, and VeroClear polymer was used for the substrate and cover layers. Measurement of optical losses proved that the insertion loss was lower than 6.8 dB in the visible spectrum.
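
    Insertion loss compares output to input optical power on a log scale; a small sketch of the conversion (by which the reported 6.8 dB bound corresponds to roughly 21% transmission):

```python
import math

def insertion_loss_db(p_in, p_out):
    """Insertion loss in dB from input and output optical power."""
    return -10.0 * math.log10(p_out / p_in)

def transmission(loss_db):
    """Fraction of power transmitted for a given insertion loss in dB."""
    return 10.0 ** (-loss_db / 10.0)
```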

  11. Real-time auto-stereoscopic visualization of 3D medical images

    Science.gov (United States)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  12. Image quality of a cone beam O-arm 3D imaging system

    Science.gov (United States)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned in 3D mode using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs). High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform, but with a noise pattern that cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery, where the locations of structures are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
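
    The two resolution figures can be related through line-pair geometry: one line pair spans a bar plus a gap, so a frequency f in lp/cm corresponds to a bar width of 10/(2f) mm. A small converter (the exact MTF-to-limiting-resolution relationship depends on the measurement method, so this is only a consistency check):

```python
def bar_width_mm(freq_lp_per_cm):
    """Smallest bar width (mm) at a given spatial frequency (lp/cm):
    one line pair = one bar plus one gap."""
    return 10.0 / (2.0 * freq_lp_per_cm)

def spatial_freq_lp_per_cm(bar_mm):
    """Spatial frequency (lp/cm) for a given bar width (mm)."""
    return 10.0 / (2.0 * bar_mm)
```

    By this convention the reported 9 lp/cm corresponds to ~0.56 mm bars, and 0.45 mm details to ~11 lp/cm, so the two figures are of consistent magnitude.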

  13. Air-touch interaction system for integral imaging 3D display

    Science.gov (United States)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. The system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, we used multi-layer B-spline surface approximation on the input hand image to detect fingertips and gestures easily at heights of less than 10 cm above the screen. The proposed system can be used as an effective human-computer interaction method for tabletop-type 3D displays.

  14. Oscillating optical tweezer-based 3-D confocal microrheometer for investigating the intracellular micromechanics and structures

    Science.gov (United States)

    Ou-Yang, H. D.; Rickter, E. A.; Pu, C.; Latinovic, O.; Kumar, A.; Mengistu, M.; Lowe-Krentz, L.; Chien, S.

    2005-08-01

    Mechanical properties of living biological cells are important for cells to maintain their shapes, support mechanical stresses, and move through tissue matrix. The use of optical tweezers to measure the micromechanical properties of cells has recently made significant progress. This paper presents a new approach, the oscillating optical tweezer cytorheometer (OOTC), which takes advantage of the coherent detection of harmonically modulated particle motions by a lock-in amplifier to increase sensitivity, temporal resolution, and simplicity. We demonstrate that the OOTC can measure the dynamic mechanical modulus in the frequency range of 0.1-6,000 Hz at a rate as fast as 1 data point per second with submicron spatial resolution. More importantly, the OOTC is capable of distinguishing intrinsic non-random temporal variations from the random fluctuations due to Brownian motion; this capability, not achievable by conventional approaches, is particularly useful because living systems are highly dynamic and often exhibit non-thermal, rhythmic behavior on a broad time scale, from a fraction of a second to hours or days. Although the OOTC is effective in measuring intracellular micromechanical properties, unless we can visualize the cytoskeleton in situ the mechanical property data would only be as informative as that of "Blind Men and the Elephant". To solve this problem, we take two steps: first, using fluorescent imaging to identify the granular structures trapped by the optical tweezers, and second, integrating the OOTC with 3-D confocal microscopy so we can take simultaneous, in situ measurements of the micromechanics and intracellular structure in living cells. In this paper, we discuss examples of applying the oscillating tweezer-based cytorheometer to investigate cultured bovine endothelial cells, the identification of caveolae as some of the granular structures in the cell, and our approach to integrating the optical tweezers with a spinning disk confocal microscope.
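
    The lock-in detection at the heart of the OOTC can be sketched digitally: mix the detector record with quadrature references at the modulation frequency and low-pass by averaging. This is a minimal illustration assuming a noiseless record containing a whole number of modulation periods, not the instrument's actual analog electronics.

```python
import math

def lock_in(signal, fs, f_ref):
    """Digital lock-in sketch: quadrature demodulation at f_ref followed
    by averaging over the whole record. Returns (amplitude, phase) of the
    component A*cos(2*pi*f_ref*t + phase)."""
    n = len(signal)
    X = sum(s * math.cos(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal)) / n
    Y = sum(s * math.sin(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal)) / n
    amplitude = 2.0 * math.hypot(X, Y)
    phase = math.atan2(-Y, X)
    return amplitude, phase
```

    The amplitude and phase of the driven bead motion relative to the trap oscillation are what yield the storage and loss moduli at the modulation frequency.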

  15. High definition 3D imaging lidar system using CCD

    Science.gov (United States)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel distance measurement technique for high definition three-dimensional imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties for flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted from the laser, it triggers the polarization modulator. The laser pulse is scattered by the target and reflected back to the LIDAR system while the polarization modulator is rotating, so its polarization state is a function of time. The laser-return pulse passes through the polarization modulator in a certain polarization state, and this state is calculated using the intensities of the laser pulses measured by the CCD. Because the relationship between time and polarization state is known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD, and measuring only the energy of the laser pulse to obtain range, a high resolution three-dimensional image can be acquired by the proposed three-dimensional imaging LIDAR system. Since the system measures only the energy of the laser pulse, a high bandwidth detector and a high resolution TDC are not required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
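
    The intensity-to-range mapping can be sketched under a simple assumed modulation law: if the analyzer angle rotates as theta = omega*t, a Malus-law intensity I = I_max*cos(theta)^2 can be inverted to time-of-flight and hence range. The cos^2 law and the unambiguous-range restriction below are illustrative assumptions, not necessarily the authors' modulator characteristic.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def time_of_flight(i_measured, i_max, omega):
    """Invert I = I_max * cos(omega * t)**2 for t, assuming the angle
    omega * t stays within the unambiguous range [0, pi/2]."""
    theta = math.acos(math.sqrt(i_measured / i_max))
    return theta / omega

def range_from_intensity(i_measured, i_max, omega):
    """Two-way propagation: range = c * t / 2."""
    return C * time_of_flight(i_measured, i_max, omega) / 2.0
```

    The modulation rate omega sets the trade-off between range precision and the unambiguous range window.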

  16. Intraoperative 3D Ultrasonography for Image-Guided Neurosurgery

    NARCIS (Netherlands)

    Letteboer, Marloes Maria Johanna

    2004-01-01

    Stereotactic neurosurgery has evolved dramatically in recent years from the original rigid frame-based systems to the current frameless image-guided systems, which allow greater flexibility while maintaining sufficient accuracy. As these systems continue to evolve, more applications are found, and i

  17. Hybrid Method for 3D Segmentation of Magnetic Resonance Images

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xiang; ZHANG Dazhi; TIAN Jinwen; LIU Jian

    2003-01-01

    Segmentation of some complex images, especially magnetic resonance brain images, often fails to produce satisfactory results using a single approach. An approach integrating several techniques seems to be the best solution. In this paper a new hybrid method for 3-dimensional segmentation of the whole brain is introduced, based on fuzzy region growing, edge detection, and mathematical morphology. The gray-level threshold controlling the process of region growing is determined by a fuzzy technique. The image gradient feature is obtained by the 3-dimensional Sobel operator considering a 3×3×3 data block with the voxel to be evaluated at the center, while the gradient magnitude threshold is defined by the gradient magnitude histogram of the brain magnetic resonance volume. By the combined methods of edge detection and region growing, the white matter volume of the human brain is segmented well. By post-processing using mathematical morphological techniques, the whole brain region is obtained. In order to investigate the validity of the hybrid method, two comparative experiments, the region growing method using only the gray-level feature and the thresholding method combining gray-level and gradient features, were carried out. Experimental results indicate that the proposed method provides much better results than traditional methods using a single technique for the 3-dimensional segmentation of human brain magnetic resonance data sets.
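
    The growing step can be sketched in 2-D with a simple finite-difference gradient standing in for the 3×3×3 Sobel operator; the thresholds here are fixed numbers rather than the fuzzy and histogram-derived values the paper uses.

```python
from collections import deque

def region_grow(image, seed, gray_thresh, grad_thresh):
    """Grow a region from `seed` across 4-connected neighbours whose gray
    level stays within `gray_thresh` of the seed value and whose local
    gradient magnitude stays below `grad_thresh`."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                gx = image[nr][min(nc + 1, cols - 1)] - image[nr][max(nc - 1, 0)]
                gy = image[min(nr + 1, rows - 1)][nc] - image[max(nr - 1, 0)][nc]
                grad = (gx * gx + gy * gy) ** 0.5
                if abs(image[nr][nc] - seed_val) <= gray_thresh and grad <= grad_thresh:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

    The gradient condition is what stops the region at tissue boundaries even where gray levels alone would allow leakage.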

  18. Registration and 3D visualization of large microscopy images

    Science.gov (United States)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb - specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb - specimens which are not obvious prior to registration.
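
    The similarity measure driving the registration is mutual information, computable from the joint gray-level histogram; a minimal sketch for two equally sized, flattened images:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information (bits) between the gray levels of two equally
    sized images, estimated from their joint histogram."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa = Counter(img_a)
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), count in joint.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        mi += (count / n) * math.log2(count * n / (pa[a] * pb[b]))
    return mi
```

    MI is maximised when one image is perfectly predictable from the other and drops toward zero for statistically independent gray levels, which is why it tolerates the staining gradients mentioned above better than direct intensity differences.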

  19. Space Radar Image of Kilauea, Hawaii in 3-D

    Science.gov (United States)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera), which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than 11 years. Field teams that were on the ground specifically to support these radar observations reported vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  20. Noninvasive metabolic imaging of engineered 3D human adipose tissue in a perfusion bioreactor.

    Directory of Open Access Journals (Sweden)

    Andrew Ward

    Full Text Available The efficacy and economy of most in vitro human models used in research is limited by the lack of a physiologically-relevant three-dimensional perfused environment and the inability to noninvasively quantify the structural and biochemical characteristics of the tissue. The goal of this project was to develop a perfusion bioreactor system compatible with two-photon imaging to noninvasively assess tissue engineered human adipose tissue structure and function in vitro. Three-dimensional (3D) vascularized human adipose tissues were engineered in vitro, before being introduced to a perfusion environment and tracked over time by automated quantification of endogenous markers of metabolism using two-photon excited fluorescence (TPEF). Depth-resolved image stacks were analyzed for redox ratio metabolic profiling and compared to prior analyses performed on 3D engineered adipose tissue in static culture. Traditional assessments with H&E staining were used to qualitatively measure extracellular matrix generation and cell density with respect to location within the tissue. The distribution of cells within the tissue and average cellular redox ratios were different between static and perfusion cultures, while the trends of decreased redox ratio and increased cellular proliferation with time in both static and perfusion cultures were similar. These results establish a basis for noninvasive optical tracking of tissue structure and function in vitro, which can be applied to future studies to assess tissue development or drug toxicity screening and disease progression.

  1. Noninvasive metabolic imaging of engineered 3D human adipose tissue in a perfusion bioreactor.

    Science.gov (United States)

    Ward, Andrew; Quinn, Kyle P; Bellas, Evangelia; Georgakoudi, Irene; Kaplan, David L

    2013-01-01

    The efficacy and economy of most in vitro human models used in research is limited by the lack of a physiologically-relevant three-dimensional perfused environment and the inability to noninvasively quantify the structural and biochemical characteristics of the tissue. The goal of this project was to develop a perfusion bioreactor system compatible with two-photon imaging to noninvasively assess tissue engineered human adipose tissue structure and function in vitro. Three-dimensional (3D) vascularized human adipose tissues were engineered in vitro, before being introduced to a perfusion environment and tracked over time by automated quantification of endogenous markers of metabolism using two-photon excited fluorescence (TPEF). Depth-resolved image stacks were analyzed for redox ratio metabolic profiling and compared to prior analyses performed on 3D engineered adipose tissue in static culture. Traditional assessments with H&E staining were used to qualitatively measure extracellular matrix generation and cell density with respect to location within the tissue. The distribution of cells within the tissue and average cellular redox ratios were different between static and perfusion cultures, while the trends of decreased redox ratio and increased cellular proliferation with time in both static and perfusion cultures were similar. These results establish a basis for noninvasive optical tracking of tissue structure and function in vitro, which can be applied to future studies to assess tissue development or drug toxicity screening and disease progression.

  2. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    Science.gov (United States)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First generation spaceborne altimetric approaches are not well-suited to generating the few meter level horizontal resolution and decimeter accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few meter transverse resolutions globally using conventional approaches and offers a feasible conceptual design which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction.

  3. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    Science.gov (United States)

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI.
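    The similarity criterion summarised above, projecting a 3D vasculature model and scoring it against the intensity gradients of the live 2D image, can be illustrated with a deliberately simplified sketch. The projection matrix, nearest-pixel sampling, and scoring function below are hypothetical stand-ins for the paper's actual formulation.

```python
# Toy sketch of 3D-2D model-to-gradient matching: project model points
# through a 3x4 projection matrix and average the 2-D gradient magnitude
# sampled at the projected pixels. All names here are illustrative.
import numpy as np

def project_points(P, pts3d):
    """Project Nx3 world points with a 3x4 matrix; returns Nx2 pixel coords."""
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def gradient_score(grad_mag, P, pts3d):
    """Similarity: mean gradient magnitude under the projected model
    (nearest-pixel sampling; points outside the image are ignored)."""
    uv = np.rint(project_points(P, pts3d)).astype(int)
    h, w = grad_mag.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    if not inside.any():
        return 0.0
    u, v = uv[inside, 0], uv[inside, 1]
    return float(grad_mag[v, u].mean())
```

    A registration loop would then search over rigid transforms of the 3D model for the pose maximising this score; the paper combines such local optimisation with a global search to extend the capture range.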

  4. Design, Simulation and Optimisation of a Fibre-optic 3D Accelerometer

    Science.gov (United States)

    Yang, Zhen; Fang, Xiao-Yong; Zhou, Yan; Li, Ya-lin; Yuan, Jie; Cao, Mao-Sheng

    2013-07-01

    Using an inertia pendulum comprised of two prisms, flexible beams and an elastic flake, we present a novel fibre-optic 3D accelerometer design. The total reverse reflection of the cube-corner prism and the spectroscopic property of an orthogonal holographic grating enable the measurement of the two transverse components of the 3D acceleration simultaneously, while the longitudinal component can be determined from the elastic deformation of the flake. Due to optical interferometry, this sensor may provide a wider range, higher sensitivity and better resolving power than other accelerometers. Moreover, we use finite element analysis to study the performance and to optimise the structural design of the sensor.

  5. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work are: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  6. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    Science.gov (United States)

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed similar results to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  7. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, making it available to public users and convenient for accessing narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To produce efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be made available for public access via websites, DVDs, or printed materials. The highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.

  8. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    Energy Technology Data Exchange (ETDEWEB)

    Cavalcanti, Marcelo de Gusmao Paraiso [Sao Paulo Univ., SP (Brazil). Faculdade de Odontologia. Dept. de Radiologia; Antunes, Jose Leopoldo Ferreira [Sao Paulo Univ., SP (Brazil). Faculdade de Odotologia. Dept. de Odontologia Social

    2002-09-01

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  9. Articular cartilage zonal differentiation via 3D Second-Harmonic Generation imaging microscopy.

    Science.gov (United States)

    Chaudhary, Rajeev; Campbell, Kirby R; Tilbury, Karissa B; Vanderby, Ray; Block, Walter F; Kijowski, Richard; Campagnola, Paul J

    2015-04-01

    The collagen structure throughout the patella has not been thoroughly investigated by 3D imaging, where the majority of the existing data come from histological cross sections. It is important to have a better understanding of the architecture in normal tissues, where this could then be applied to imaging of diseased states. To address this shortcoming, we investigated the combined use of collagen-specific Second-Harmonic Generation (SHG) imaging and measurement of bulk optical properties to characterize collagen fiber orientations of the histologically defined zones of bovine articular cartilage. Forward and backward SHG intensities of sections from superficial, middle and deep zones were collected as a function of depth and analyzed by Monte Carlo simulations to extract the SHG creation direction, which is related to the fibrillar assembly. Our results revealed differences in SHG forward-backward response between the three zones, where these are consistent with a previously developed model of SHG emission. Some of the findings are consistent with those from other modalities; however, SHG analysis showed the middle zone had the most organized fibril assembly. While not distinct, we also report bulk optical property values for these different zones within the patella. Collectively, these results provide quantitative measurements of structural changes at both the fiber and fibril assembly levels of the different cartilage zones and reveal structural information not possible with other microscope modalities. This can provide quantitative insight into the collagen fiber network in normal cartilage, which may ultimately be developed as a biomarker for osteoarthritis.

  10. Optical Measurement of Micromechanics and Structure in a 3D Fibrin Extracellular Matrix

    Science.gov (United States)

    Kotlarchyk, Maxwell Aaron

    2011-07-01

    In recent years, a significant number of studies have focused on linking substrate mechanics to cell function using standard methodologies to characterize the bulk properties of the hydrogel substrates. However, current understanding of the correlations between the microstructural mechanical properties of hydrogels and cell function in 3D is poor, in part because of a lack of appropriate techniques. Methods for tuning extracellular matrix (ECM) mechanics in 3D cell culture that rely on increasing the concentration of either protein or cross-linking molecules fail to control important parameters such as pore size, ligand density, and molecular diffusivity. Alternatively, ECM stiffness can be modulated independently from protein concentration by mechanically loading the ECM. We have developed an optical tweezers-based microrheology system to investigate the fundamental role of ECM mechanical properties in determining cellular behavior. Further, this thesis outlines the development of a novel device for generating stiffness gradients in naturally derived ECMs, where stiffness is tuned by inducing strain, while local structure and mechanical properties are directly determined by laser tweezers-based passive and active microrheology respectively. Hydrogel substrates polymerized within 35 mm diameter Petri dishes are strained non-uniformly by the precise rotation of an embedded cylindrical post, and exhibit a position-dependent stiffness with little to no modulation of local mesh geometry. Here we present microrheological studies in the context of fibrin hydrogels. Microrheology and confocal imaging were used to directly measure local changes in micromechanics and structure respectively in unstrained hydrogels of increasing fibrinogen concentration, as well as in our strain gradient device, in which the concentration of fibrinogen is held constant. Orbital particle tracking, and raster image correlation analysis are used to quantify changes in fibrin mechanics on the

  11. Extended gray level co-occurrence matrix computation for 3D image volume

    Science.gov (United States)

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis and has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires a specific texture analysis computation. In this paper, an extended 2D to 3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes) resulting in 13 independent directions, then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, and the 3D texture features were then calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting the 3D texture features from the extended GLCM approach. It is a simple and comprehensive technique that can contribute to 3D image analysis.
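    The 13-direction averaging scheme described above can be sketched as follows. The offset enumeration and normalisation are assumptions inferred from the abstract, not the authors' exact implementation.

```python
# Illustrative sketch of a 13-direction 3D GLCM: one symmetric co-occurrence
# matrix per unique displacement vector, averaged and normalised.
# Offset list and normalisation are assumptions based on the abstract.
from itertools import product

# 13 unique displacement vectors covering the 26-neighbourhood up to symmetry
# (those whose first nonzero component is positive).
OFFSETS = [(dz, dy, dx)
           for dz, dy, dx in product((-1, 0, 1), repeat=3)
           if (dz, dy, dx) > (0, 0, 0)]

def glcm_3d(volume, levels, offset):
    """Symmetric co-occurrence matrix for one displacement vector."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    dz, dy, dx = offset
    m = [[0] * levels for _ in range(levels)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                z2, y2, x2 = z + dz, y + dy, x + dx
                if 0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx:
                    a, b = volume[z][y][x], volume[z2][y2][x2]
                    m[a][b] += 1
                    m[b][a] += 1          # symmetric counting
    return m

def averaged_glcm(volume, levels):
    """Sum the 13 directional GLCMs, then normalise to probabilities."""
    acc = [[0.0] * levels for _ in range(levels)]
    for off in OFFSETS:
        g = glcm_3d(volume, levels, off)
        for i in range(levels):
            for j in range(levels):
                acc[i][j] += g[i][j]
    total = sum(sum(row) for row in acc)
    return [[v / total for v in row] for row in acc]
```

    Haralick-style texture features (contrast, energy, homogeneity, etc.) would then be computed from the normalised matrix exactly as in the 2D case.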

  12. Synthesis of image sequences for Korean sign language using 3D shape model

    Science.gov (United States)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information to, and realizing communication with, the deaf-mute. The deaf-mute communicate with other people by means of sign language, but most people are unfamiliar with it. The proposed method converts text data into the corresponding image sequences for Korean Sign Language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; it is necessary to construct this general model considering the anatomical structure of the human body. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized so that the model can be deformed easily. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  13. Internal Strain Measurement in 3D Braided Composites Using Co-braided Optical Fiber Sensors

    Institute of Scientific and Technical Information of China (English)

    Shenfang YUAN; Rui HUANG; Yunjiang RAO

    2004-01-01

    3D braided composite technology has stimulated a great deal of interest worldwide. But owing to the three-dimensional nature of these composites, coupled with the shortcomings of currently adopted experimental test methods, it is difficult to measure the internal parameters of these materials, which in turn makes it difficult to understand their performance. A new method is introduced herein to measure the internal strain of braided composite materials using co-braided fiber optic sensors. Two kinds of fiber optic sensors are co-braided into 3D braided composites to measure internal strain: the Fabry-Perot (F-P) fiber optic sensor and the polarimetric fiber optic sensor. Experiments are conducted to measure internal strain under tension, bending and thermal loading in 3D carbon fiber braided composite specimens, both locally and globally. Experimental results show that multiple fiber optic sensors can be braided into 3D braided composites to measure the internal parameters, providing a more accurate measurement method and leading to a better understanding of these materials.

  14. Close-range optical measurement of aircraft's 3D attitude and accuracy evaluation

    Institute of Scientific and Technical Information of China (English)

    Zhe Li; Zhenliang Ding; Feng Yuan

    2008-01-01

    A new screen-spot imaging method based on optical measurement is proposed, applicable to the close-range measurement of an aircraft's three-dimensional (3D) attitude parameters. A laser tracker is used to perform the global calibration of the high-speed cameras and the fixed screens on the test site, as well as to establish intermediate coordinate frames among the various coordinate systems. The laser cooperation object mounted on the aircraft surface projects laser beams onto the screens, and the high-speed cameras synchronously record the light spots' positions changing with aircraft attitude. The recorded image sequences are used to compute the aircraft attitude parameters. Based on matrix analysis, the error sources affecting measurement accuracy are analyzed, and the maximum relative error of the mathematical model is estimated. The experimental results show that this method effectively makes changes of aircraft position distinguishable, and that its error is no more than 3' while the rotation angles about the three axes are within a certain range.

  15. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    Science.gov (United States)

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.
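    The spherical-wave phase compensation step in the frequency-wavenumber domain can be sketched as below. The monostatic round-trip dispersion relation k_z = sqrt(4k² − kx² − ky²) is the convention commonly used for this planar-aperture geometry and is an assumption here; the paper's exact discretisation may differ.

```python
# Sketch of the spherical-wave compensation step in the frequency-wavenumber
# domain. The round-trip dispersion relation kz = sqrt(4k^2 - kx^2 - ky^2)
# is assumed; names and discretisation are illustrative.
import numpy as np

def dispersion_kz(kx, ky, k):
    """Longitudinal wavenumber for round-trip propagation; evanescent
    components (negative argument) are clamped to zero and masked out."""
    arg = 4.0 * k**2 - kx**2 - ky**2
    return np.sqrt(np.maximum(arg, 0.0)), arg >= 0.0

def compensate_spherical_phase(spectrum, kz, z0):
    """Multiply the 2-D angular spectrum by the free-space propagator
    exp(i*kz*z0), compensating the spherical wavefront at range z0."""
    return spectrum * np.exp(1j * kz * z0)
```

    Because k_z depends nonlinearly on (kx, ky, k), the resulting 3-D spectrum is sampled nonuniformly in k_z; this is exactly the resampling problem that Stolt interpolation solves in the conventional method and that FGG-NUFFT handles here.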

  16. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Directory of Open Access Journals (Sweden)

    Yingzhi Kan

    2016-09-01

    Full Text Available In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  17. Clinical Application of 3D-FIESTA Image in Patients with Unilateral Inner Ear Symptom.

    Science.gov (United States)

    Oh, Jae Ho; Chung, Jae Ho; Min, Hyun Jung; Cho, Seok Hyun; Park, Chul Won; Lee, Seung Hwan

    2013-12-01

    Unilateral auditory dysfunction such as tinnitus and hearing loss could be a warning sign of a retrocochlear lesion. Auditory brainstem response (ABR) and internal auditory canal magnetic resonance image (MRI) are suggested as novel diagnostic tools for retrocochlear lesions. However, the high cost of MRI and the low sensitivity of the ABR test could be an obstacle when assessing patients with unilateral ear symptoms. The purpose of this study was to introduce the clinical usefulness of three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) MRI in patients with unilateral ear symptoms. Two hundred and fifty-three patients with unilateral tinnitus or unilateral hearing loss who underwent 3D-FIESTA temporal bone MRI as a screening test were enrolled. We reviewed the abnormal findings in the 3D-FIESTA images and ear symptoms using the medical records. In patients with unilateral ear symptoms, 51.0% of the patients had tinnitus and 32.8% patients were assessed to have sudden sensory neural hearing loss. With 3D-FIESTA imaging, twelve patients were diagnosed with acoustic neuroma, four with enlarged vestibular aqueduct syndrome, and two with posterior inferior cerebellar artery aneurysm. Inner ear anomalies and vestibulocochlear nerve aplasia could be diagnosed with 3D-FIESTA imaging. 3D-FIESTA imaging is a highly sensitive method for the diagnosis of cochlear or retrocochlear lesions. 3D-FIESTA imaging is a useful screening tool for patients with unilateral ear symptoms.

  18. Characterization of 3D printing output using an optical sensing system

    Science.gov (United States)

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system are presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.
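    The comparison of captured images against an ideal model can be illustrated with a toy per-layer metric. This is a hypothetical simplification (binary images, pixel-wise agreement), not the system's actual scoring method.

```python
# Hypothetical per-layer comparison: score a captured layer image against
# the ideal model slice as the fraction of pixels that agree.
# Binary nested-list images are assumed; this is not the authors' code.
def layer_match(ideal, observed):
    """Fraction of pixels where the observed layer matches the ideal slice."""
    total = 0
    agree = 0
    for row_i, row_o in zip(ideal, observed):
        for a, b in zip(row_i, row_o):
            total += 1
            agree += (a == b)
    return agree / total if total else 0.0
```

    A monitoring system could track this score layer by layer and flag a print when it falls below a threshold.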

  19. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  20. Anesthesiology training using 3D imaging and virtual reality

    Science.gov (United States)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  1. Nanoimprint of a 3D structure on an optical fiber for light wavefront manipulation

    CERN Document Server

    Calafiore, Giuseppe; Allen, Frances I; Dhuey, Scott; Sassolini, Simone; Wong, Edward; Lum, Paul; Munechika, Keiko; Cabrini, Stefano

    2016-01-01

    Integration of complex photonic structures onto optical fiber facets enables powerful platforms with unprecedented optical functionalities. Conventional nanofabrication technologies, however, do not permit viable integration of complex photonic devices onto optical fibers owing to their low throughput and high cost. In this paper we report the fabrication of a three-dimensional structure achieved by direct Nanoimprint Lithography on the facet of an optical fiber. Nanoimprint processes and tools were specifically developed to enable high lithographic accuracy and coaxial alignment of the optical device with respect to the fiber core. To demonstrate the capability of this new approach, a 3D beam splitter has been designed, imprinted, and optically characterized. Scanning electron microscopy and optical measurements confirmed the excellent lithographic capabilities of the proposed approach as well as the desired optical performance of the imprinted structure. The inexpensive solution presented here should enabl...

  2. How accurate are the fusion of cone-beam CT and 3-D stereophotographic images?

    Directory of Open Access Journals (Sweden)

    Yasas S N Jayaratne

    BACKGROUND: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were (1) to evaluate the feasibility of integrating 3-D photos and CBCT images, (2) to assess the degree of error that may occur during the above processes, and (3) to identify facial regions that would be most appropriate for 3-D image registration. METHODOLOGY: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. PRINCIPAL FINDINGS: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm, respectively. Most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose, and zygoma. CONCLUSIONS: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to the RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning.
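
    The record's observation that the signed average under-represents registration error (opposite-sign distances cancel, while RMS accumulates magnitudes) can be illustrated with a short numpy sketch; the distance values below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical per-vertex signed distances (mm) between a registered 3-D photo
# and the CBCT skin surface; positive = photo lies outside the CBCT surface.
distances = np.array([0.6, -0.7, 0.9, -0.8, 0.5, -0.4, 1.1, -1.0])

signed_average = distances.mean()             # opposite signs largely cancel
rms_error = np.sqrt(np.mean(distances ** 2))  # magnitudes accumulate

# The signed average looks deceptively small compared to the RMS error.
print(f"signed average: {signed_average:+.3f} mm, RMS: {rms_error:.3f} mm")
```

    Here the signed average is only 0.025 mm while the RMS error is about 0.78 mm, mirroring the paper's -0.018 mm versus 0.739 mm contrast.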

  3. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing steadily for various engineering and non-engineering applications. Three main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. A literature study shows that, to date, no complete solution is available to create a complete 3D city model from images alone, and these image-based methods have limitations; for example, aerial photography is restricted in many countries. This paper presents a new approach towards image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were performed, and after texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city.
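
    The "scaling and alignment" step when merging 3D model pieces is commonly solved by estimating a similarity transform from corresponding points. Below is a minimal sketch of the standard Umeyama/Kabsch closed-form estimator, offered as an illustration (not the authors' implementation); all numbers are synthetic:

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~= s * R @ src + t
    (Umeyama's closed-form least-squares method) from paired 3-D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: recover a known scale, rotation, and translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_align(src, dst)
```

    With exact correspondences the estimator recovers the transform to machine precision; in practice the correspondences would come from matched tie points between model pieces.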

  4. Segmentation of the ovine lung in 3D CT Images

    Science.gov (United States)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected-components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line-filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
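
    The first step, lung extraction by optimal thresholding plus connected-component analysis, can be sketched as follows. This is a simplified stand-in for the paper's method, run on a synthetic two-lung volume rather than real CT data:

```python
import numpy as np
from scipy import ndimage

def optimal_threshold(volume, tol=0.5):
    """Iterative optimal thresholding: repeatedly set the threshold to the
    midpoint of the foreground and background class means until it settles."""
    t = volume.mean()
    while True:
        fg, bg = volume[volume > t], volume[volume <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Synthetic CT-like volume: bright "body" (~200) containing two dark "lungs" (~20).
vol = np.full((40, 40, 40), 200.0)
vol[10:30, 10:30, 5:17] = 20.0    # left lung
vol[10:30, 10:30, 23:35] = 20.0   # right lung
vol += np.random.default_rng(1).normal(0.0, 5.0, vol.shape)

t = optimal_threshold(vol)
labels, n = ndimage.label(vol < t)   # low-attenuation voxels = lung candidates
sizes = ndimage.sum(np.ones_like(labels), labels, range(1, n + 1))
lungs = np.argsort(sizes)[-2:] + 1   # keep the two largest components
```

    The subsequent fissure-based left/right separation (dynamic programming plus a line filter) is substantially more involved and is not reproduced here.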

  5. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in 3D display environments is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are analyzed in detail through geometrical-optics calculations of the satisfaction level of monocular accommodation. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experiment results consistently show a relatively high level of satisfaction of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (the 3D effect) with a monocular MF display is discussed.
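
    The monocular defocus produced when accommodation and stimulus depth disagree can be approximated with elementary geometrical optics: the retinal blur-circle angle is roughly the pupil diameter times the dioptric focus error. The function and numbers below are our own illustration, not taken from the paper:

```python
import numpy as np

def blur_angle_arcmin(pupil_mm, focus_dist_m, stimulus_dist_m):
    """Approximate retinal blur-circle angle (arcmin) for an eye focused at
    focus_dist_m while viewing a point at stimulus_dist_m (thin-lens model,
    small-angle approximation; aberrations and diffraction neglected)."""
    defocus_diopters = abs(1.0 / focus_dist_m - 1.0 / stimulus_dist_m)
    blur_rad = (pupil_mm / 1000.0) * defocus_diopters
    return np.degrees(blur_rad) * 60.0

# Eye accommodated on a display at 0.5 m while a virtual object is rendered
# at 0.4 m: a 0.5 D focus error with a 4 mm pupil.
blur = blur_angle_arcmin(pupil_mm=4.0, focus_dist_m=0.5, stimulus_dist_m=0.4)
```

    Multi-focus displays reduce this error by presenting content on several focal planes, so the dioptric mismatch to the nearest plane stays small.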

  6. Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope

    Science.gov (United States)

    Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji

    2017-02-01

    Because the view angle of the endoscope is narrow, it is difficult to capture the whole image of the digestive tract at once. If there are two or more lesions in the digestive tract, it is hard to understand the 3D positional relationship among them. Virtual endoscopy using CT is the present standard method for obtaining a whole view of the digestive tract. Because virtual endoscopy is designed to detect surface irregularity, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, 4) reconstitute the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal distances of 20 mm. We captured sequential images, and the reconstituted image of the tube revealed that the distance between each circle was 20.2 +/- 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help us understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.
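
    Step 2, reconstructing the 3D surface from stereo images, ultimately rests on disparity-to-depth triangulation. A minimal pinhole-model sketch; the focal length and baseline are hypothetical values for illustration, not the device's specifications:

```python
import numpy as np

def depth_mm(f_px, baseline_mm, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d, with focal length f
    in pixels, baseline B in mm, and disparity d in pixels."""
    return f_px * baseline_mm / disparity_px

# Hypothetical stereo-endoscope parameters.
f_px, baseline_mm = 500.0, 4.0
disparities = np.array([40.0, 50.0, 80.0])   # matched-feature disparities (px)
depths = depth_mm(f_px, baseline_mm, disparities)
```

    Repeating this for every matched feature in each frame yields the per-frame surface patches that steps 3 and 4 then stitch into the full tract.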

  7. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    DEFF Research Database (Denmark)

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two-dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so as to traverse a field of view, and receive circuitry (306) configured to receive a two-dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells, etc. A beamformer (312) is configured to beamform the echoes, and a velocity processor (314) is configured to separately determine a depth velocity component, a transverse velocity component and an elevation velocity component, wherein the velocity components are determined based on the same transmitted ultrasound signal...

  8. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography.

    Science.gov (United States)

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-08-01

    A bone tissue phantom prototype that allows testing, in general, of optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed using a 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be produced. The absorption coefficient, reduced scattering coefficient, and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, employing a laser-Doppler flowmeter, is also presented.

  9. Comparison of 3D Synthetic Aperture Imaging and Explososcan using Phantom Measurements

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Férin, Guillaume; Dufait, Rémi

    2012-01-01

    In this paper, initial 3D ultrasound measurements from a 1024-channel system are presented. Measurements of 3D synthetic aperture imaging (SAI) and Explososcan are presented and compared. Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. SAI is compared to Explososcan by using tissue and wire phantom measurements. The measurements are carried out using a 1024-element 2D transducer and the 1024-channel experimental ultrasound scanner SARUS. To make a fair comparison, the two imaging techniques use the same number of active channels and the same number of emissions per frame...

  10. Combining Street View and Aerial Images to Create Photo-Realistic 3D City Models

    OpenAIRE

    Ivarsson, Caroline

    2014-01-01

    This thesis evaluates two different approaches to using panoramic street view images for creating more photo-realistic 3D city models compared to 3D city models based on aerial images alone. The thesis work has been carried out at Blom Sweden AB using their software and data. The main purpose of this thesis work has been to investigate whether street view images can aid in creating more photo-realistic 3D city models at street level through an automatic or semi-automatic approach. Two di...

  11. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    OpenAIRE

    N. Soontranon; Srestasathiern, P.; Lawawirojwong, S.

    2015-01-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera, which costs around $1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architec...

  12. Simulating receptive fields of human visual cortex for 3D image quality prediction.

    Science.gov (United States)

    Shao, Feng; Chen, Wanting; Lin, Wenchong; Jiang, Qiuping; Jiang, Gangyi

    2016-07-20

    Quality assessment of 3D images presents many challenges when attempting to gain a better understanding of the human visual system. In this paper, we propose a new 3D image quality prediction approach that simulates receptive fields (RFs) of the human visual cortex. More specifically, we extract the RFs from a complete visual pathway and calculate their similarity indices between the reference and distorted 3D images. The final quality score is obtained by determining their connections via support vector regression. Experimental results on three 3D image quality assessment databases demonstrate that, in comparison with the most relevant existing methods, the devised algorithm aligns closely with subjective assessment, especially for asymmetrically distorted stereoscopic images.

  13. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael. H. F.; Veldhuizen, Albert G.; van Ooijen, Peter. M. A.; Lubbers, Jaap; Burgerhof, Johannes G. M.; Sardjono, Tri A.; Verkerke, Gijbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  14. Particle image velocimetry on simulated 3D ultrafast ultrasound from pediatric matrix TEE transducers

    Science.gov (United States)

    Voorneveld, J. D.; Bera, D.; van der Steen, A. F. W.; de Jong, N.; Bosch, J. G.

    2017-03-01

    Ultrafast 3D transesophageal echocardiographic (TEE) imaging, combined with 3D echo particle image velocimetry (ePIV), would be ideal for tracking the complex blood flow patterns in the heart. We are developing a miniature pediatric matrix TEE transducer that employs micro-beamforming (μBF) and allows a high frame rate in 3D. In this paper, we assess the feasibility of 3D ePIV with a high-frame-rate, small-aperture transducer and the influence of the micro-beamforming technique. We compare the results of 3D ePIV on simulated images using the μBF transducer and an idealized, fully sampled (FS) matrix transducer. For the two transducers, we have simulated high-frame-rate imaging of an 8.4 mm diameter artery having a known 4D velocity field. The simulations were performed in Field II. 1000 3D volumes, at a rate of 1000 volumes/s, were created using a single diverging transmission per volume. The error in the 3D velocity estimation was measured by comparing the ePIV results of both transducers to the ground truth. The results on the simulated volumes show that ePIV can estimate the 4D velocity field of the arterial phantom using these small-aperture transducers suitable for pediatric 3D TEE. The μBF transducer (RMSE 44.0%) achieved comparable ePIV accuracy to that of the FS transducer (RMSE 42.6%).
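
    The reported RMSE figures compare estimated and ground-truth velocity fields. One common way to compute such a percentage RMSE is shown below on a toy parabolic (Poiseuille-like) profile; the normalization by peak true speed is our assumption, as the paper's exact definition may differ:

```python
import numpy as np

def velocity_rmse_percent(v_est, v_true):
    """RMS of the 3-D velocity error magnitude, as a percentage of the peak
    true speed (one common normalization; not necessarily the paper's)."""
    err = np.linalg.norm(v_est - v_true, axis=-1)
    peak = np.linalg.norm(v_true, axis=-1).max()
    return 100.0 * np.sqrt(np.mean(err ** 2)) / peak

# Toy axial flow profile across an artery diameter, plus measurement noise.
r = np.linspace(-1.0, 1.0, 51)
v_true = np.stack([np.zeros_like(r), np.zeros_like(r), 1.0 - r ** 2], axis=-1)
rng = np.random.default_rng(2)
v_est = v_true + rng.normal(0.0, 0.05, v_true.shape)
rmse = velocity_rmse_percent(v_est, v_true)
```
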

  15. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
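
    The core of DIBR is 3D warping: each pixel is shifted by a disparity proportional to f·B/Z derived from its depth value. A deliberately minimal forward-warping sketch follows; real DIBR systems must additionally handle occlusion ordering and hole filling, and all parameters here are illustrative:

```python
import numpy as np

def dibr_horizontal_shift(image, depth_m, f_px, baseline_m):
    """Minimal depth-image-based rendering: shift each pixel horizontally by
    its disparity f*B/Z to synthesize a virtual view (forward warping only)."""
    h, w = image.shape
    out = np.zeros_like(image)
    disparity = np.round(f_px * baseline_m / depth_m).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]      # shift toward the virtual viewpoint
            if 0 <= nx < w:
                out[y, nx] = image[y, x]  # unfilled pixels remain as holes (0)
    return out

img = np.arange(25.0).reshape(5, 5)
depth = np.full((5, 5), 2.0)              # flat scene 2 m away
warped = dibr_horizontal_shift(img, depth, f_px=100.0, baseline_m=0.02)
```

    With a flat depth map every pixel shifts by one column; with a real depth map, nearer pixels shift more, which is what produces the parallax of the synthesized view.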

  17. Determining optimum red filter slide distance on creating 3D electron microscope images using anaglyph method

    Science.gov (United States)

    Tresna, W. P.; Isnaeni

    2017-04-01

    Scanning Electron Microscopy (SEM) is a proven instrument for analyzing materials, in which a 2D image of an object is produced. However, producing a 3D image in an SEM system is usually difficult and costly. There is a simple method to produce a 3D image by using two light sources with a red and a blue filter combined at a certain angle. In this experiment, the authors conducted a simulation of 3D image formation using the anaglyph method by finding the optimum shifts of the red and blue filters in an SEM image. The method used in this experiment was image processing that applied a digital manipulation at a certain deviation distance from the central point of the main object. The simulation of an SEM image at a magnification of 5000 times showed that an optimal 3D effect was achieved when the red filter was shifted by 1 µm to the right and the blue filter by 1 µm to the left of the central position. The result of this simulation helps in better understanding the viewing angle and the optimal positions of the two light sources, i.e. the red and blue filter pair. The produced 3D image can be clearly seen using 3D glasses.
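
    The anaglyph construction itself, shifting a red copy one way and a cyan copy the other and packing them into RGB channels, can be sketched in a few lines. This is a simplified stand-in for the paper's filter-shifting simulation, with pixel shifts replacing the physical 1 µm filter displacement:

```python
import numpy as np

def anaglyph_from_sem(gray, shift_px):
    """Build a red/cyan anaglyph from a single grayscale SEM image by shifting
    the red channel right and the cyan (green + blue) channel left."""
    red = np.roll(gray, shift_px, axis=1)
    cyan = np.roll(gray, -shift_px, axis=1)
    return np.stack([red, cyan, cyan], axis=-1)   # channel order: R, G, B

# Toy grayscale image with a horizontal intensity ramp.
gray = np.tile(np.arange(8, dtype=float), (8, 1))
ana = anaglyph_from_sem(gray, shift_px=1)
```

    Viewed through red/cyan glasses, the opposite shifts reach the two eyes separately, which is what produces the depth impression.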

  18. Tomographic imaging of reacting flows in 3D by laser absorption spectroscopy

    Science.gov (United States)

    Foo, J.; Martin, P. A.

    2017-05-01

    This paper describes the development of an infrared laser absorption tomography system for the 3D volumetric imaging of chemical species and temperature in reacting flows. The system is based on high-resolution near-infrared tunable diode laser absorption spectroscopy (TDLAS) for the measurement of water vapour above twin, mixed fuel gas burners arranged with an asymmetrical output. Four parallel laser beams pass through the sample region and are rotated rapidly in a plane to produce a wide range of projection angles. A rotation of 180° with 0.5° sampling was achieved in 3.6 s. The effects of changes to the burner fuel flow were monitored in real time for the 2D distributions. The monitoring plane was then moved vertically relative to the burners, enabling a stack of 2D images to be produced, which were then interpolated to form a 3D volumetric image of the temperature and water concentrations above the burners. The optical transmission of each beam was rapidly scanned around 1392 nm, and the spectrum was fitted to find the integrated absorbance of the water transitions; although several are probed in each scan, two of these transitions possess opposite temperature dependencies. The projections of the integrated absorbances at each angle form the sinogram from which the 2D image of integrated absorbance of each line can be reconstructed by direct Fourier reconstruction based on the Fourier slice theorem. The ratio of the integrated absorbances of the two lines can then be related to temperature alone in a method termed two-line thermometry. The 2D temperature distribution obtained was validated for pattern and accuracy by thermocouple measurements. With the reconstructed temperature distribution, the temperature-dependent line strengths could be determined and subsequently the concentration distribution of water across the 2D plane whilst variations in burner condition were carried out. These results show that the measurement system based on TDLAS can be
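
    Two-line thermometry rests on the Boltzmann scaling of the ratio of the two integrated absorbances with temperature. A simplified sketch, neglecting partition-function and stimulated-emission corrections; the lower-state energy gap and reference ratio below are illustrative numbers, not the paper's line pair:

```python
import numpy as np

C2 = 1.4388  # second radiation constant hc/k, in cm*K

def ratio_at(T, R0, T0, dE_cm):
    """Integrated-absorbance ratio at temperature T, given the ratio R0 at a
    reference temperature T0 and the lower-state energy gap dE_cm (cm^-1)."""
    return R0 * np.exp(-C2 * dE_cm * (1.0 / T - 1.0 / T0))

def temperature_from_ratio(R, R0, T0, dE_cm):
    """Invert the Boltzmann scaling to recover temperature from the ratio."""
    return 1.0 / (1.0 / T0 - np.log(R / R0) / (C2 * dE_cm))

# Round trip with illustrative numbers: a flame region at 1200 K.
T0, R0, dE = 296.0, 0.8, 700.0
R = ratio_at(1200.0, R0, T0, dE)
T = temperature_from_ratio(R, R0, T0, dE)
```

    Because the two lines have opposite temperature dependencies, the ratio varies strongly and monotonically with T, which is what makes the inversion well conditioned.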

  19. Fiber Optic 3-D Space Piezoelectric Accelerometer and its Antinoise Technology

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The mechanical structure of the piezoelectric accelerometer is designed, and the operating equations for the X-, Y-, and Z-axes are deduced. Test results for the 3-D frequency response are given. Noise disturbances are effectively eliminated by using fiber-optic transmission and synchronous detection.

  20. 3-D printed sensing patches with embedded polymer optical fibre Bragg gratings

    DEFF Research Database (Denmark)

    Zubel, Michal G.; Sugden, Kate; Saez-Rodriguez, D.

    2016-01-01

    The first demonstration of a polymer optical fibre Bragg grating (POFBG) embedded in a 3-D printed structure is reported. Its cyclic strain performance and temperature characteristics are examined and discussed. The sensing patch has a repeatable strain sensitivity of 0.38 pm/µε. Its...

  1. Rewritable 3D bit optical data storage in a PMMA-based photorefractive polymer

    Energy Technology Data Exchange (ETDEWEB)

    Day, D.; Gu, M. [Swinburne Univ. of Tech., Hawthorn, Vic. (Australia). Centre for Micro-Photonics; Smallridge, A. [Victoria Univ., Melbourne (Australia). School of Life Sciences and Technology

    2001-07-04

    A cheap, compact, and rewritable high-density optical data storage system for CD and DVD applications is presented by the authors. Continuous-wave illumination under two-photon excitation in a new poly(methylmethacrylate) (PMMA) based photorefractive polymer allows 3D bit storage of sub-Tbyte data. (orig.)

  2. GEOMETRIC OPTICS FOR 3D-HARTREE-TYPE EQUATION WITH COULOMB POTENTIAL

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This article considers a family of 3D Hartree-type equations with Coulomb potential |x|-1, whose initial data oscillate so that a caustic appears. In the linear geometric-optics case, using Lagrangian integrals, uniform descriptions of the solution outside the caustic and near the caustic are obtained.
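
    For reference, the semiclassical Hartree equation with Coulomb potential studied in this setting is commonly written as follows; this is a standard form under the usual geometric-optics (WKB) scaling, and the article's exact normalization may differ:

```latex
i\varepsilon\,\partial_t u^{\varepsilon}
  + \frac{\varepsilon^{2}}{2}\,\Delta u^{\varepsilon}
  = \left(\frac{1}{|x|} * |u^{\varepsilon}|^{2}\right) u^{\varepsilon},
\qquad
u^{\varepsilon}(0,x) = a_0(x)\,\mathrm{e}^{\,i\phi_0(x)/\varepsilon},
\qquad x \in \mathbb{R}^{3},
```

    where * denotes spatial convolution and the ε-oscillatory initial phase φ₀ drives the formation of the caustic as ε → 0.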

  3. Coordinates calibration in precision detection of 3D optical deformation measurement system

    Science.gov (United States)

    Lu, Honggang; Hu, Chunsheng; Wang, Xingshu; Gao, Yang; Wu, Wei

    2012-11-01

    To validate the detection precision of a three-dimensional optical deformation measurement system (3D-OMS), a theodolite-based method for calibrating the auxiliary coordinate frame against the optical coordinate frame has been proposed. Specifically, after the auxiliary mirrors are installed, the installation accuracy is measured, and the influence of the axis error of the theodolite under the practical conditions of our experiment is analyzed. Furthermore, the influence on the validation precision of the 3D-OMS caused by misalignment between the auxiliary and optical coordinate frames is analyzed. According to our theoretical analysis and experimental results, the validation precision of the 3D-OMS can reach an accuracy of 1″ provided that the coordinate alignment accuracy is no worse than 10' and the measuring range of the 3D-OMS is within ±3'. The proposed method therefore meets our high-accuracy requirement while remaining insensitive to the installation error of the auxiliary mirrors, and it is also applicable to other similar work.
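
    The quoted tolerances can be checked with a quick rotation-matrix experiment: expressing a 3' deformation in a frame misaligned by 10' leaks roughly half an arcsecond into the wrong axis, consistent with the stated 1″ validation precision. This numerical sanity check is our own illustration, not from the paper:

```python
import numpy as np

ARCMIN = np.pi / (180 * 60)
ARCSEC = np.pi / (180 * 3600)

def rot(axis, a):
    """Rotation by angle a about a unit axis (Rodrigues' formula)."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

theta = 3 * ARCMIN                    # deformation at the edge of the +/-3' range
delta = 10 * ARCMIN                   # frame misalignment at the stated limit
deform = rot((1.0, 0.0, 0.0), theta)  # true rotation about x
M = rot((0.0, 0.0, 1.0), delta)       # auxiliary frame vs. optical frame

seen = M.T @ deform @ M               # deformation expressed in the wrong frame
measured_x = np.arctan2(seen[2, 1], seen[2, 2])          # small-angle readout
scale_err_arcsec = abs(measured_x - theta) / ARCSEC      # error on the true axis
crosstalk_arcsec = abs(np.arcsin(seen[2, 0])) / ARCSEC   # leaked spurious rotation
```

    The on-axis error is second order in the misalignment (sub-milliarcsecond here), while the dominant effect is the first-order crosstalk of about θ·δ ≈ 0.5″, safely under the 1″ budget.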

  5. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  6. 3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph.

    Science.gov (United States)

    Xu, Wei-Hai; Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie; Wu, Jian-Huang

    2014-08-01

    Three-dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' segments of intracranial arteries based on magnetic resonance imaging. Three-dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results compared with the MRA findings (4 grades; grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). If a 3D-printed vessel segment ideally matched the MRA findings (grade 1 or 2), the 3D printing was defined as successful. Seven responders marked "grade 1" for the 3D printing results, while one marked "grade 4". Therefore, 87.5% of the clinicians considered the 3D printing successful. Our pilot study confirms the feasibility of using 3D printing techniques in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice.

  7. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3 1Department of Radiology, Stanford University, Palo Alto, CA, 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA, 3Department of Radiology, Fort Benning, Columbus, GA, USA Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data were used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, a 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  8. Resolution doubling in 3D-STORM imaging through improved buffers.

    Directory of Open Access Journals (Sweden)

    Nicolas Olivier

    Super-resolution imaging methods have revolutionized fluorescence microscopy by revealing the nanoscale organization of labeled proteins. In particular, single-molecule methods such as Stochastic Optical Reconstruction Microscopy (STORM) provide resolutions down to a few tens of nanometers by exploiting the cycling of dyes between fluorescent and non-fluorescent states to obtain a sparse population of emitters and precisely localizing them individually. This cycling of dyes is commonly induced by adding different chemicals, which are combined to create a STORM buffer. Despite their importance, the composition of these buffers has scarcely evolved since they were first introduced, fundamentally limiting what can be resolved with STORM. By identifying a new chemical suitable for STORM and optimizing the buffer composition for Alexa-647, we significantly increased the number of photons emitted per cycle by each dye, providing a simple means to enhance the resolution of STORM independently of the optical setup used. Using this buffer to perform 3D-STORM on biological samples, we obtained images with better than 10-nanometer lateral and 30-nanometer axial resolution.
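
    The link between photon yield and resolution follows from the leading-order localization-precision formula σ_loc ≈ σ_PSF/√N (a Thompson-style approximation that neglects background and pixelation): quadrupling the photons emitted per switching cycle halves the localization error. The numbers below are illustrative, not the paper's measurements:

```python
import numpy as np

def localization_precision_nm(psf_sigma_nm, photons):
    """Leading-order single-molecule localization precision: sigma_PSF / sqrt(N)
    (background and finite pixel size neglected)."""
    return psf_sigma_nm / np.sqrt(photons)

# A brighter buffer that yields 4x the photons per cycle halves the
# localization error, improving resolution without changing the optics.
base = localization_precision_nm(psf_sigma_nm=130.0, photons=2000)
bright = localization_precision_nm(psf_sigma_nm=130.0, photons=8000)
```
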

  9. A high-level 3D visualization API for Java and ImageJ

    Directory of Open Access Journals (Sweden)

    Longair Mark

    2010-05-01

    Full Text Available Abstract Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal Microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables software development efforts to concentrate on the algorithm implementation. Conclusions Our framework enables biomedical imaging software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  10. Software for browsing sectioned images of a dog body and generating a 3D model.

    Science.gov (United States)

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software, in which structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, they were exported into a PDF file, in which the 3D models can be manipulated freely. The browsing software and PDF file are available to students for study, to teachers for lectures, and to clinicians for training; they will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who work with two-dimensional images and 3D models.

  11. The role of extra-foveal processing in 3D imaging

    Science.gov (United States)

    Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

    2017-03-01

    The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. The rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, and must instead detect signals with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more detectable than a larger mass-like signal in 2D search, but its detectability decreases markedly (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity, together with observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; and 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).

  12. An adaptive 3-D discrete cosine transform coder for medical image compression.

    Science.gov (United States)

    Tai, S C; Wu, Y G; Lin, C W

    2000-09-01

    In this communication, a new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on the local energy magnitude is used to segment subblocks of the image into different energy levels. Subblocks with the same energy level are then gathered to form a 3-D cuboid. Finally, the 3-D DCT is employed to compress each 3-D cuboid individually. Simulation results show that the reconstructed images achieve a bit rate lower than 0.25 bit per pixel even when the compression ratios are higher than 35. Compared with JPEG and other strategies, the proposed method achieves better decoded-image quality.
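The pipeline the abstract describes, classifying fixed-size subblocks by local energy, stacking same-level blocks into a cuboid, then applying a separable 3-D DCT, can be sketched as follows. The block size, the quantile-based energy thresholds, and the orthonormal DCT construction are illustrative assumptions, not the authors' exact parameters; quantization and entropy coding are omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct3(cuboid):
    """Separable 3-D DCT-II: apply the 1-D transform along each axis."""
    out = np.asarray(cuboid, dtype=float)
    for axis in range(3):
        d = dct_matrix(out.shape[axis])
        out = np.tensordot(d, out, axes=([1], [axis]))
        out = np.moveaxis(out, 0, axis)
    return out

def energy_grouped_cuboids(image, block=8, n_levels=2):
    """Tile the image into block x block subblocks, classify each tile by
    local energy (mean squared intensity), and stack the tiles of each
    energy level into one 3-D cuboid."""
    h, w = image.shape
    tiles, energies = [], []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            t = image[r:r + block, c:c + block]
            tiles.append(t)
            energies.append(np.mean(np.asarray(t, dtype=float) ** 2))
    # quantile edges split the tiles into n_levels energy classes
    edges = np.quantile(energies, np.linspace(0.0, 1.0, n_levels + 1))
    labels = np.digitize(energies, edges[1:-1])
    cuboids = []
    for lvl in range(n_levels):
        group = [t for t, l in zip(tiles, labels) if l == lvl]
        if group:
            cuboids.append(np.stack(group))
    return cuboids

# Usage: group tiles of a synthetic image, then transform each cuboid.
img = (np.arange(32 * 32).reshape(32, 32) % 255).astype(float)
cuboids = energy_grouped_cuboids(img, block=8, n_levels=2)
coeffs = [dct3(cb) for cb in cuboids]
```

Grouping similar-energy tiles is what lets the third transform dimension pay off: statistically alike blocks are highly correlated along the stacking axis, so the 3-D DCT concentrates their energy into few coefficients.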

  13. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    Science.gov (United States)

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, a Gm-APD offers significant advantages in reducing the size, mass, power, and complexity of the system. However, the inevitable noise, including background noise and dark counts, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during the acquisition of the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image was obtained with the experimental system despite the background noise of a sunny day.
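The gist of filtering at the raw-TOF acquisition stage can be sketched with a per-pixel histogram over repeated laser pulses: a true surface return piles its photon events into one range bin, while background and dark counts spread roughly uniformly across the range gate, so a simple threshold on the peak bin rejects false alarms. This is a generic sketch of the idea, not the paper's algorithm; the bin count, pulse number, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_range(tof_events, n_bins, min_counts):
    """Histogram normalized TOF events from repeated pulses; accept the
    peak bin as the target return only if it clears a noise threshold."""
    hist, edges = np.histogram(tof_events, bins=n_bins, range=(0.0, 1.0))
    peak = int(np.argmax(hist))
    if hist[peak] < min_counts:
        return None  # no confident return: leave the pixel empty
    return 0.5 * (edges[peak] + edges[peak + 1])

# Simulate one pixel over 200 pulses: signal photons cluster at a
# normalized TOF of ~0.37, background photons arrive uniformly.
signal = rng.normal(0.37, 0.004, size=60)
noise = rng.uniform(0.0, 1.0, size=140)
est_signal = estimate_range(np.concatenate([signal, noise]),
                            n_bins=100, min_counts=15)

# A noise-only pixel (e.g. sky) should be rejected, not ranged.
est_noise = estimate_range(rng.uniform(0.0, 1.0, size=200),
                           n_bins=100, min_counts=15)
```

Because each pulse contributes at most a few events per pixel, the histogram can be accumulated incrementally during acquisition, which is what makes this kind of rejection feasible in real time.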

  14. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  15. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    Science.gov (United States)

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin;