WorldWideScience

Sample records for 3D image guided

  1. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo

    Brian Long; Lu Li; Ulf Knoblich; Hongkui Zeng; Hanchuan Peng

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments.

  2. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). (paper)
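
    The 'hand-eye' relation underlying this calibration can be written compactly. As a hedged sketch (my notation, not the paper's): let A_i be the relative motion of the tracking sensor between two sweeps and B_i the relative motion of the corresponding US volumes recovered by intramodality registration; the unknown image-to-sensor calibration X then satisfies

        \[ A_i X = X B_i, \qquad A_i = T_{\mathrm{sensor},\,i+1}^{-1}\, T_{\mathrm{sensor},\,i}, \qquad B_i = T_{\mathrm{reg},\,i \to i+1}, \]

    and X is estimated from many pose pairs, e.g. by minimizing \(\sum_i \lVert A_i X - X B_i \rVert_F^2\) over rigid transforms.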

  3. Hands-on guide for 3D image creation for geological purposes

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    ... A key advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand-specimen photographs in a much more engaging 3D way for future publications or conference posters.
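
    As a companion to the guide described above, the anaglyph assembly step itself reduces to channel mixing. A minimal sketch in Python (a generic red-cyan recipe, not the Stereophotomaker implementation; file names are placeholders and the two photographs are assumed to be the same size and already horizontally aligned):

        # Red-cyan anaglyph: red channel from the left image,
        # green and blue channels from the right image.
        import numpy as np
        from PIL import Image

        left = np.asarray(Image.open("left.jpg").convert("RGB"))
        right = np.asarray(Image.open("right.jpg").convert("RGB"))

        anaglyph = np.zeros_like(left)
        anaglyph[..., 0] = left[..., 0]      # red from the left eye
        anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right eye

        Image.fromarray(anaglyph).save("anaglyph.png")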

  4. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    Ren, Yu-Feng; Cao, Xin-Ping; Xu, Jia; Ye, Wei-Jun; Gao, Yuan-Hong; Teh, Bin S.; Wen, Bi-Xiu

    2013-01-01

    Background: To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as salvage treatment after intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Methods: Thirty-two patients with locally persistent NPC after a full dose of IMRT were evaluated retrospectively. The 3D-image-guided HDR-BT treatment plan was performed on a 3D treatment planning system (PLATO BPS 14.2). The...

  5. A small animal image guided irradiation system study using 3D dosimeters

    Qian, Xin; Adamovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) system is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both imaging and irradiation components. The conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour intensive film preparation and scanning. In addition, due to the novel design of this platform the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify coincidence of the imaging and the irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying coincidence of irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.
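
    The star-shot analysis mentioned above reduces, once each spoke has been extracted, to finding the point closest to all beam central axes. A minimal sketch (my own, not the authors' code), assuming each spoke is given as a 2D point p_i and direction d_i in the dosimeter plane:

        import numpy as np

        def starshot_isocenter(points, directions):
            """Least-squares point minimizing squared distance to a set of 2D lines."""
            A = np.zeros((2, 2))
            b = np.zeros(2)
            for p, d in zip(points, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal space
                A += P
                b += P @ p
            return np.linalg.solve(A, b)

        # Three spokes that all pass through (10, 10) mm:
        pts = np.array([[0.0, 10.0], [10.0, 0.0], [0.0, 0.0]])
        dirs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        print(starshot_isocenter(pts, dirs))     # -> approximately [10. 10.]

    The spread of the individual line-to-isocenter distances gives the isocentricity figure, and comparing the isocenters recovered from the imaging and irradiation star shots quantifies their coincidence.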

  6. A small animal image guided irradiation system study using 3D dosimeters

    In a high resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) system is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both imaging and irradiation components. The conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour intensive film preparation and scanning. In addition, due to the novel design of this platform the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify coincidence of the imaging and the irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying coincidence of irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  7. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
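
    The central fitting step can be sketched in a few lines of linear algebra (a toy illustration under my own simplifying assumptions: DVFs are stored as flattened vectors, the 2D cine slice observes a known subset of DVF entries, and the mode weights are fit by ordinary least squares rather than the authors' exact procedure):

        import numpy as np

        # Pre-beam phase: PCA motion model from the respiratory-correlated 4D-MRI DVFs.
        # dvfs has shape (n_phases, n_voxels * 3), one flattened DVF per respiratory phase.
        def build_pca_model(dvfs, n_components=2):
            mean = dvfs.mean(axis=0)
            _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
            return mean, vt[:n_components]            # mean DVF and principal motion modes

        # Beam-on phase: fit the mode weights to the displacement seen on a fast 2D slice.
        # slice_idx selects the DVF entries that the 2D cine image can actually measure.
        def fit_3d_dvf(mean, modes, slice_idx, slice_disp):
            A = modes[:, slice_idx].T                 # (n_measured, n_components)
            w, *_ = np.linalg.lstsq(A, slice_disp - mean[slice_idx], rcond=None)
            return mean + w @ modes                   # full field-of-view 3D DVF estimate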

  8. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as salvage treatment after intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Thirty-two patients with locally persistent NPC after a full dose of IMRT were evaluated retrospectively. The 3D-image-guided HDR-BT treatment plan was performed on a 3D treatment planning system (PLATO BPS 14.2). A median dose of 16 Gy was delivered to the 100% isodose line of the Gross Tumor Volume. The whole procedure was well tolerated under local anesthesia. The actuarial 5-y local control rate for 3D-image-guided HDR-BT was 93.8%; patients with early-T-stage disease at initial diagnosis had a 100% local control rate. The 5-y actuarial progression-free survival and distant metastasis-free survival rates were 78.1% and 87.5%, respectively. One patient developed and died of lung metastases. The 5-y actuarial overall survival rate was 96.9%. Our results showed that 3D-image-guided HDR-BT provides excellent local control as a salvage therapeutic modality after IMRT for patients with locally persistent disease and early-T-stage NPC at initial diagnosis.

  9. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
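
    The 'projection masking' idea, restricting the similarity metric to weighted pixels, can be illustrated with a small weighted normalized cross-correlation (my own toy implementation; the study's actual metric and weighting scheme may differ):

        import numpy as np

        def weighted_ncc(drr, radiograph, weight):
            """Normalized cross-correlation computed only over weighted pixels."""
            w = weight / (weight.sum() + 1e-12)
            a = drr - (w * drr).sum()                # weighted de-meaning
            b = radiograph - (w * radiograph).sum()
            num = (w * a * b).sum()
            den = np.sqrt((w * a * a).sum() * (w * b * b).sum()) + 1e-12
            return num / den

        # weight ~ 1 over reliable anatomy (e.g., the planned vertebral levels) and
        # ~ 0 over regions that are highly deformable or occluded by surgical tools.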

  10. Optimizing nonrigid registration performance between volumetric true 3D ultrasound images in image-guided neurosurgery

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2011-03-01

    Compensating for brain shift as surgery progresses is important to ensure sufficient accuracy in patient-to-image registration in the operating room (OR) for reliable neuronavigation. Ultrasound has emerged as an important and practical imaging technique for brain shift compensation either by itself or through computational modeling that estimates whole-brain deformation. Using volumetric true 3D ultrasound (3DUS), it is possible to nonrigidly (e.g., based on B-splines) register two temporally different 3DUS images directly to generate feature displacement maps for data assimilation in the biomechanical model. Because of the large amount of data and number of degrees-of-freedom (DOFs) involved, however, a significant computational cost may be required that can adversely influence the clinical feasibility of the technique for efficiently generating model-updated MR (uMR) in the OR. This paper parametrically investigates three B-splines registration parameters and their influence on the computational cost and registration accuracy: number of grid nodes along each direction, floating image volume down-sampling rate, and number of iterations. A simulated rigid body displacement field was employed as a ground-truth against which the accuracy of displacements generated from the B-splines nonrigid registration was compared. A set of optimal parameters was then determined empirically that results in a registration computational cost of less than 1 min and a sub-millimetric accuracy in displacement measurement. These resulting parameters were further applied to a clinical surgery case to demonstrate their practical use. Our results indicate that the optimal set of parameters results in sufficient accuracy and computational efficiency in model computation, which is important for future application of the overall biomechanical modeling to generate uMR for image-guidance in the OR.
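
    As a rough illustration of the kind of parameter sweep described above, a sketch built on SimpleITK's generic B-spline registration (assumed to be available; the authors' own implementation, metric, and optimizer are not specified here, so grid size, down-sampling and iteration count are simply exposed as the three knobs):

        import SimpleITK as sitk

        def register_bspline(fixed, moving, grid_nodes=8, shrink=2, iterations=50):
            fixed = sitk.Cast(fixed, sitk.sitkFloat32)
            moving = sitk.Cast(moving, sitk.sitkFloat32)
            tx = sitk.BSplineTransformInitializer(fixed, [grid_nodes] * 3)
            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
            reg.SetMetricSamplingStrategy(reg.RANDOM)
            reg.SetMetricSamplingPercentage(0.1)
            reg.SetInterpolator(sitk.sitkLinear)
            reg.SetOptimizerAsLBFGSB(numberOfIterations=iterations)
            reg.SetShrinkFactorsPerLevel([shrink])       # floating-image down-sampling
            reg.SetSmoothingSigmasPerLevel([1])
            reg.SetInitialTransform(tx, inPlace=True)
            return reg.Execute(fixed, moving)

        # Sweep grid_nodes, shrink and iterations, and score each setting against a
        # simulated rigid displacement to trade accuracy against run time.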

  11. A simulation technique for 3D MR-guided acoustic radiation force imaging

    Payne, Allison, E-mail: apayne@ucair.med.utah.edu [Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, Utah 84112 (United States); Bever, Josh de [Department of Computer Science, University of Utah, Salt Lake City, Utah 84112 (United States); Farrer, Alexis [Department of Bioengineering, University of Utah, Salt Lake City, Utah 84112 (United States); Coats, Brittany [Department of Mechanical Engineering, University of Utah, Salt Lake City, Utah 84112 (United States); Parker, Dennis L. [Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, Utah 84108 (United States); Christensen, Douglas A. [Department of Bioengineering, University of Utah, Salt Lake City, Utah 84112 and Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, Utah 84112 (United States)

    2015-02-15

    Purpose: In magnetic resonance-guided focused ultrasound (MRgFUS) therapies, the in situ characterization of the focal spot location and quality is critical. MR acoustic radiation force imaging (MR-ARFI) is a technique that measures the tissue displacement caused by the radiation force exerted by the ultrasound beam. This work presents a new technique to model the displacements caused by the radiation force of an ultrasound beam in a homogeneous tissue model. Methods: When a steady-state point-source force acts internally in an infinite homogeneous medium, the displacement of the material in all directions is given by the Somigliana elastostatic tensor. The radiation force field, which is caused by absorption and reflection of the incident ultrasound intensity pattern, will be spatially distributed, and the tensor formulation takes the form of a convolution of a 3D Green’s function with the force field. The dynamic accumulation of MR phase during the ultrasound pulse can be theoretically accounted for through a time-of-arrival weighting of the Green’s function. This theoretical model was evaluated experimentally in gelatin phantoms of varied stiffness (125-, 175-, and 250-bloom). The acoustic and mechanical properties of the phantoms used as parameters of the model were measured using independent techniques. Displacements at focal depths of 30 and 45 mm in the phantoms were measured by a 3D spin echo MR-ARFI segmented-EPI sequence. Results: The simulated displacements agreed with the MR-ARFI measured displacements for all bloom values and focal depths with a normalized RMS difference of 0.055 (range 0.028–0.12). The displacement magnitude decreased and the displacement pattern broadened with increased bloom value for both focal depths, as predicted by the theory. Conclusions: A new technique that models the displacements caused by the radiation force of an ultrasound beam in a homogeneous tissue model has been rigorously validated through comparison
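
    In symbols (my paraphrase of the model described above, not the paper's exact notation), the static displacement field is the convolution of the elastostatic Green's function with the distributed radiation-force density, which for a predominantly absorbing medium is proportional to the local acoustic intensity:

        \[ \mathbf{u}(\mathbf{r}) = \int G(\mathbf{r}-\mathbf{r}')\,\mathbf{f}(\mathbf{r}')\,\mathrm{d}^3 r' = (G * \mathbf{f})(\mathbf{r}), \qquad \mathbf{f}(\mathbf{r}) \approx \frac{2\,\alpha\, I(\mathbf{r})}{c}\,\hat{\mathbf{k}}, \]

    where \(G\) is the Somigliana tensor, \(\alpha\) the absorption coefficient, \(I\) the intensity pattern, \(c\) the speed of sound and \(\hat{\mathbf{k}}\) the beam direction; the time-of-arrival weighting mentioned above enters as a per-voxel weight on \(G\) over the MR encoding window.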

  12. A novel 3D volumetric voxel registration technique for volume-view-guided image registration of multiple imaging modalities

    Purpose: To provide more clinically useful image registration with improved accuracy and reduced time, a novel technique of three-dimensional (3D) volumetric voxel registration of multimodality images is developed. Methods and Materials: This technique can register up to four concurrent images from multimodalities with volume view guidance. Various visualization effects can be applied, facilitating global and internal voxel registration. Fourteen computed tomography/magnetic resonance (CT/MR) image sets and two computed tomography/positron emission tomography (CT/PET) image sets are used. For comparison, an automatic registration technique using maximization of mutual information (MMI) and a three-orthogonal-planar (3P) registration technique are used. Results: Visually sensitive registration criteria for CT/MR and CT/PET have been established, including the homogeneity of color distribution. Based on the registration results of 14 CT/MR images, the 3D voxel technique is in excellent agreement with the automatic MMI technique and reveals a global positioning error (defined as the means and standard deviations of the error distribution) for the 3P pixel technique: 1.8 deg ± 1.2 deg in rotation and 2.0 ± 1.3 voxels in translation. To the best of our knowledge, this is the first time that such positioning error has been addressed. Conclusion: This novel 3D voxel technique establishes volume-view-guided image registration of up to four modalities. It improves registration accuracy with reduced time, compared with the 3P pixel technique. This article suggests that any interactive and automatic registration should be safeguarded using the 3D voxel technique.

  13. Technical Note: Rapid prototyping of 3D grid arrays for image guided therapy quality assurance

    Kittle, David; Holshouser, Barbara; Slater, James M.; Guenther, Bob D.; Pitsianis, Nikos P.; Pearlstein, Robert D. [Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 (United States); Department of Radiology, Loma Linda University Medical Center, Loma Linda, California 92354 (United States); Department of Radiation Medicine, Loma Linda University, Loma Linda, California 92354 (United States); Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Department of Electrical and Computer Engineering and Department of Computer Science, Duke University, Durham, North Carolina 27708 (United States); Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 and Department of Surgery-Neurosurgery, Duke University and Medical Center, Durham, North Carolina 27710 (United States)

    2008-12-15

    Three-dimensional grid phantoms offer a number of advantages for measuring imaging related spatial inaccuracies for image guided surgery and radiotherapy. The authors examined the use of rapid prototyping technology for directly fabricating 3D grid phantoms from CAD drawings. We tested three different fabrication process materials, photopolymer jet with acrylic resin (PJ/AR), selective laser sintering with polyamide (SLS/P), and fused deposition modeling with acrylonitrile butadiene styrene (FDM/ABS). The test objects consisted of rectangular arrays of control points formed by the intersections of posts and struts (2 mm rectangular cross section) and spaced 8 mm apart in the x, y, and z directions. The PJ/AR phantom expanded after immersion in water which resulted in permanent warping of the structure. The surface of the FDM/ABS grid exhibited a regular pattern of depressions and ridges from the extrusion process. SLS/P showed the best combination of build accuracy, surface finish, and stability. Based on these findings, a grid phantom for assessing machine-dependent and frame-induced MR spatial distortions was fabricated to be used for quality assurance in stereotactic neurosurgical and radiotherapy procedures. The spatial uniformity of the SLS/P grid control point array was determined by CT imaging (0.6 × 0.6 × 0.625 mm³ resolution) and found suitable for the application, with over 97.5% of the control points located within 0.3 mm of the position specified in the CAD drawing and none of the points off by more than 0.4 mm. Rapid prototyping is a flexible and cost effective alternative for development of customized grid phantoms for medical physics quality assurance.

  14. A fast, accurate, and automatic 2D-3D image registration for image-guided cranial radiosurgery

    The authors developed a fast and accurate two-dimensional (2D)-three-dimensional (3D) image registration method to perform precise initial patient setup and frequent detection and correction for patient movement during image-guided cranial radiosurgery treatment. In this method, an approximate geometric relationship is first established to decompose a 3D rigid transformation in the 3D patient coordinate into in-plane transformations and out-of-plane rotations in two orthogonal 2D projections. Digitally reconstructed radiographs are generated offline from a preoperative computed tomography volume prior to treatment and used as the reference for patient position. A multiphase framework is designed to register the digitally reconstructed radiographs with the x-ray images periodically acquired during patient setup and treatment. The registration in each projection is performed independently; the results in the two projections are then combined and converted to a 3D rigid transformation by 2D-3D geometric backprojection. The in-plane transformation and the out-of-plane rotation are estimated using different search methods, including multiresolution matching, steepest descent minimization, and one-dimensional search. Two similarity measures, optimized pattern intensity and sum of squared difference, are applied at different registration phases to optimize accuracy and computation speed. Various experiments on an anthropomorphic head-and-neck phantom showed that, using fiducial registration as a gold standard, the registration errors were 0.33±0.16 mm (s.d.) in overall translation and 0.29 deg. ±0.11 deg. (s.d.) in overall rotation. The total targeting errors were 0.34±0.16 mm (s.d.), 0.40±0.2 mm (s.d.), and 0.51±0.26 mm (s.d.) for the targets at the distances of 2, 6, and 10 cm from the rotation center, respectively. The computation time was less than 3 s on a computer with an Intel Pentium 3.0 GHz dual processor
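
    For reference, common textbook formulations of the two similarity measures named above (standard definitions from the 2D-3D registration literature; the 'optimized' variants used by the authors may differ in detail). With the difference image \(I_{\mathrm{diff}} = I_{\mathrm{xray}} - s\,I_{\mathrm{DRR}}\):

        \[ \mathrm{SSD} = \sum_{x,y}\bigl(I_{\mathrm{xray}}(x,y) - I_{\mathrm{DRR}}(x,y)\bigr)^2, \qquad P_{r,\sigma} = \sum_{x,y}\;\sum_{(v,w):\,(x-v)^2+(y-w)^2\le r^2} \frac{\sigma^2}{\sigma^2 + \bigl(I_{\mathrm{diff}}(x,y) - I_{\mathrm{diff}}(v,w)\bigr)^2}. \]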

  15. Metabolic approach for tumor delineation in glioma surgery: 3D MR spectroscopy image-guided resection.

    Zhang, Jie; Zhuang, Dong-Xiao; Yao, Cheng-Jun; Lin, Ching-Po; Wang, Tian-Liang; Qin, Zhi-Yong; Wu, Jin-Song

    2016-06-01

    OBJECT The extent of resection is one of the most essential factors that influence the outcomes of glioma resection. However, conventional structural imaging has failed to accurately delineate glioma margins because of tumor cell infiltration. Three-dimensional proton MR spectroscopy (¹H-MRS) can provide metabolic information and has been used in preoperative tumor differentiation, grading, and radiotherapy planning. Resection based on glioma metabolism information may provide for a more extensive resection and yield better outcomes for glioma patients. In this study, the authors attempt to integrate 3D ¹H-MRS into neuronavigation and assess the feasibility and validity of metabolically based glioma resection. METHODS Choline (Cho)-N-acetylaspartate (NAA) index (CNI) maps were calculated and integrated into neuronavigation. The CNI thresholds were quantitatively analyzed and compared with structural MRI studies. Glioma resections were performed under 3D ¹H-MRS guidance. Volumetric analyses were performed for metabolic and structural images from a low-grade glioma (LGG) group and high-grade glioma (HGG) group. Magnetic resonance imaging and neurological assessments were performed immediately after surgery and 1 year after tumor resection. RESULTS Fifteen eligible patients with primary cerebral gliomas were included in this study. Three-dimensional ¹H-MRS maps were successfully coregistered with structural images and integrated into the navigational system. Volumetric analyses showed that the differences between the metabolic volumes with different CNI thresholds were statistically significant (p ...). The 3D ¹H-MRS maps and intraoperative navigation were combined for glioma margin delineation. Optimum CNI thresholds were applied for both LGGs and HGGs to achieve resection. The results indicated that 3D ¹H-MRS can be integrated with structural imaging to provide better outcomes for glioma resection. PMID:26636387
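
    A toy sketch of producing a metabolic mask of the kind used above, with a plain Cho/NAA ratio standing in for the regression-based CNI (the threshold, voxel grids and resampling step are hypothetical and would be tuned per protocol):

        import numpy as np

        def metabolic_mask(cho, naa, threshold=2.0, eps=1e-6):
            """Binary tumor mask from co-registered Cho and NAA metabolite maps."""
            ratio = cho / (naa + eps)        # simple stand-in for the Cho-to-NAA index
            return ratio >= threshold

        # The mask is then resampled onto the structural MRI grid and exported to the
        # navigation system as an additional overlay volume.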

  16. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan. Advances and obstacles

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. (author)

  17. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan: advances and obstacles.

    Ohno, Tatsuya; Toita, Takafumi; Tsujino, Kayoko; Uchida, Nobue; Hatano, Kazuo; Nishimura, Tetsuo; Ishikura, Satoshi

    2015-11-01

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. PMID:26265660

  18. Treatment Planning for Image-Guided Neuro-Vascular Interventions Using Patient-Specific 3D Printed Phantoms

    Russ, M.; O'Hara, R.; Setlur Nagesh, S. V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S.; Ionita, C.

    2015-01-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized ...

  19. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    A 3D roadmap provided by pre-operative volumetric data that is aligned with fluoroscopy helps visualization and navigation in Interventional Cardiology (IC), especially when contrast-agent injection used to highlight coronary vessels cannot be applied systematically during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for the specific vessel(s) of interest during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guide wire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that the distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses against a ground truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for difficult cases of occluded vessels without injection of contrast agent.
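
    The distance-transform trick described above can be sketched as follows (a simplified stand-in: only translations are optimized, a toy pinhole projection is assumed, and the ICP correspondences and ECG gating of the real method are omitted):

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from scipy.optimize import minimize

        def make_distance_map(centerline_mask):
            """Distance (pixels) to the 2D vessel centerline; zero on the centerline."""
            return distance_transform_edt(~centerline_mask.astype(bool))

        def project(points_3d, params, K):
            """Toy pinhole projection of translated 3D centerline points."""
            p = points_3d + np.asarray(params)       # params = (tx, ty, tz)
            uv = (K @ p.T).T
            return uv[:, :2] / uv[:, 2:3]

        def cost(params, points_3d, dist_map, K):
            uv = np.round(project(points_3d, params, K)).astype(int)
            h, w = dist_map.shape
            uv = np.clip(uv, [0, 0], [w - 1, h - 1])
            return dist_map[uv[:, 1], uv[:, 0]].sum()

        # res = minimize(cost, x0=np.zeros(3), args=(centerline_3d, dmap, K),
        #                method="Powell")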

  20. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro-Irradiation Platform

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotation for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and an optical CT scanner were used in this study. In the platform, both the mouse stage and the gantry can rotate 360° with their rotation axes perpendicular to each other. Isocentricity and coincidence of mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on the mouse stage with its center approximately at the platform isocenter. For CBCT isocentricity, with the gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, the dosimeters were scanned using the optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and the mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful for precise verification of targeting in small animal irradiation research. In addition, a single 3-D dosimeter can provide information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  21. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro-Irradiation Platform

    Qian, X; Wuu, C [Columbia University, NY, NY (United States); Adamovics, J [Rider University, Lawrenceville, NJ (United States)

    2014-06-01

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotation for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and an optical CT scanner were used in this study. In the platform, both the mouse stage and the gantry can rotate 360° with their rotation axes perpendicular to each other. Isocentricity and coincidence of mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on the mouse stage with its center approximately at the platform isocenter. For CBCT isocentricity, with the gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, the dosimeters were scanned using the optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and the mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful for precise verification of targeting in small animal irradiation research. In addition, a single 3-D dosimeter can provide information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  22. Treatment planning for image-guided neuro-vascular interventions using patient-specific 3D printed phantoms

    Russ, M.; O'Hara, R.; Setlur Nagesh, S. V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S.; Ionita, C.

    2015-03-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized with the patient vessel anatomy by first performing the planned treatment on a phantom under standard operating protocols. In this study the optimal workflow to obtain such phantoms from 3D data, for interventionists to practice on prior to an actual procedure, was investigated. Patient-specific phantoms and phantoms presenting a wide range of challenging geometries were created. Computed Tomographic Angiography (CTA) data was uploaded into a Vitrea 3D station which allows segmentation and resulting stereo-lithographic files to be exported. The files were uploaded using processing software where preloaded vessel structures were included to create a closed-flow vasculature having structural support. The final file was printed, cleaned, connected to a flow loop and placed in an angiographic room for EIGI practice. Various Circle of Willis and cardiac arterial geometries were used. The phantoms were tested for ischemic stroke treatment, distal catheter navigation, aneurysm stenting and cardiac imaging under angiographic guidance. This method should allow for adjustments to treatment plans to be made before the patient is actually in the procedure room, reducing the risk of peri-operative complications or delays.
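
    A minimal open-source sketch of the segmentation-to-STL step in that workflow, as an alternative to the commercial Vitrea chain mentioned above (scikit-image and numpy-stl are assumed to be installed; the iso-surface level and voxel spacing are placeholders):

        import numpy as np
        from skimage import measure
        from stl import mesh                       # pip install numpy-stl

        def segmentation_to_stl(volume, level, spacing, out_path="vessel.stl"):
            """Extract an iso-surface from a CTA volume and save it as an STL mesh."""
            verts, faces, _, _ = measure.marching_cubes(volume, level=level,
                                                        spacing=spacing)
            m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
            m.vectors[:] = verts[faces]            # (n_faces, 3, 3) triangle vertices
            m.save(out_path)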

  23. Involved-Site Image-Guided Intensity Modulated Versus 3D Conformal Radiation Therapy in Early Stage Supradiaphragmatic Hodgkin Lymphoma

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows for margin reduction and highly conformal dose distribution, with consistent advantages in sparing of normal tissues. The purpose of this retrospective study was to compare involved-site IG-IMRT with involved-site 3D conformal RT (3D-CRT) in the treatment of early stage Hodgkin lymphoma (HL) involving the mediastinum, with efficacy and toxicity as primary clinical endpoints. Methods and Materials: We analyzed 90 stage IIA HL patients treated with either involved-site 3D-CRT or IG-IMRT between 2005 and 2012 in 2 different institutions. Inclusion criteria were favorable or unfavorable disease (according to European Organization for Research and Treatment of Cancer criteria), complete response after 3 to 4 cycles of an adriamycin-bleomycin-vinblastine-dacarbazine (ABVD) regimen plus 30 Gy as total radiation dose. Exclusion criteria were chemotherapy other than ABVD, partial response after ABVD, total radiation dose other than 30 Gy. Clinical endpoints were relapse-free survival (RFS) and acute toxicity. Results: Forty-nine patients were treated with 3D-CRT (54.4%) and 41 with IG-IMRT (45.6%). Median follow-up time was 54.2 months for 3D-CRT and 24.1 months for IG-IMRT. No differences in RFS were observed between the 2 groups, with 1 relapse each. Three-year RFS was 98.7% for 3D-CRT and 100% for IG-IMRT. Grade 2 toxicity events, mainly mucositis, were recorded in 32.7% of 3D-CRT patients (16 of 49) and in 9.8% of IG-IMRT patients (4 of 41). IG-IMRT was significantly associated with a lower incidence of grade 2 acute toxicity (P=.043). Conclusions: RFS rates at 3 years were extremely high in both groups, albeit the median follow-up time is different. Acute tolerance profiles were better for IG-IMRT than for 3D-CRT. Our preliminary results support the clinical safety and efficacy of advanced RT planning and delivery techniques in patients affected with early stage HL, achieving complete

  24. Involved-Site Image-Guided Intensity Modulated Versus 3D Conformal Radiation Therapy in Early Stage Supradiaphragmatic Hodgkin Lymphoma

    Filippi, Andrea Riccardo, E-mail: andreariccardo.filippi@unito.it [Department of Oncology, University of Torino, Torino (Italy); Ciammella, Patrizia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Piva, Cristina; Ragona, Riccardo [Department of Oncology, University of Torino, Torino (Italy); Botto, Barbara [Hematology, Città della Salute e della Scienza, Torino (Italy); Gavarotti, Paolo [Hematology, University of Torino and Città della Salute e della Scienza, Torino (Italy); Merli, Francesco [Hematology Unit, ASMN Hospital IRCCS, Reggio Emilia (Italy); Vitolo, Umberto [Hematology, Città della Salute e della Scienza, Torino (Italy); Iotti, Cinzia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Ricardi, Umberto [Department of Oncology, University of Torino, Torino (Italy)

    2014-06-01

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows for margin reduction and highly conformal dose distribution, with consistent advantages in sparing of normal tissues. The purpose of this retrospective study was to compare involved-site IG-IMRT with involved-site 3D conformal RT (3D-CRT) in the treatment of early stage Hodgkin lymphoma (HL) involving the mediastinum, with efficacy and toxicity as primary clinical endpoints. Methods and Materials: We analyzed 90 stage IIA HL patients treated with either involved-site 3D-CRT or IG-IMRT between 2005 and 2012 in 2 different institutions. Inclusion criteria were favorable or unfavorable disease (according to European Organization for Research and Treatment of Cancer criteria), complete response after 3 to 4 cycles of an adriamycin-bleomycin-vinblastine-dacarbazine (ABVD) regimen plus 30 Gy as total radiation dose. Exclusion criteria were chemotherapy other than ABVD, partial response after ABVD, total radiation dose other than 30 Gy. Clinical endpoints were relapse-free survival (RFS) and acute toxicity. Results: Forty-nine patients were treated with 3D-CRT (54.4%) and 41 with IG-IMRT (45.6%). Median follow-up time was 54.2 months for 3D-CRT and 24.1 months for IG-IMRT. No differences in RFS were observed between the 2 groups, with 1 relapse each. Three-year RFS was 98.7% for 3D-CRT and 100% for IG-IMRT. Grade 2 toxicity events, mainly mucositis, were recorded in 32.7% of 3D-CRT patients (16 of 49) and in 9.8% of IG-IMRT patients (4 of 41). IG-IMRT was significantly associated with a lower incidence of grade 2 acute toxicity (P=.043). Conclusions: RFS rates at 3 years were extremely high in both groups, albeit the median follow-up time is different. Acute tolerance profiles were better for IG-IMRT than for 3D-CRT. Our preliminary results support the clinical safety and efficacy of advanced RT planning and delivery techniques in patients affected with early stage HL, achieving complete

  25. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.
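
    A toy version of the external-internal correspondence idea above: fit a linear mapping between a surface amplitude signal and the internal target position from the pre-treatment 4DMI phases, then predict the target from the live camera signal (a deliberately simple stand-in for the mechanical model the authors propose):

        import numpy as np

        # surface_amp: (K,) surface amplitude per 4DMI phase; target_pos: (K, 3) in mm.
        def fit_linear_model(surface_amp, target_pos):
            A = np.column_stack([surface_amp, np.ones_like(surface_amp)])
            coeffs, *_ = np.linalg.lstsq(A, target_pos, rcond=None)   # shape (2, 3)
            return coeffs

        def predict_target(coeffs, live_surface_amp):
            return np.array([live_surface_amp, 1.0]) @ coeffs         # predicted (x, y, z)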

  26. Heterodyne 3D ghost imaging

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three dimensional (3D) ghost imaging measures the range of a target based on pulse time-of-flight measurement. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.

  27. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  28. Quantitative Assessment of Variational Surface Reconstruction from Sparse Point Clouds in Freehand 3D Ultrasound Imaging during Image-Guided Tumor Ablation

    Shuangcheng Deng

    2016-04-01

    Surface reconstruction for freehand 3D ultrasound is used to provide 3D visualization of a VOI (volume of interest) during image-guided tumor ablation surgery. This is a challenge because the recorded 2D B-scans are not only sparse but also non-parallel. To solve this issue, we established a framework to reconstruct the surface of freehand 3D ultrasound imaging in 2011. The key technique for surface reconstruction in that framework is based on variational interpolation presented by Greg Turk for shape transformation and is named Variational Surface Reconstruction (VSR). The main goal of this paper is to evaluate the quality of surface reconstructions, especially when the input data are extremely sparse point clouds from freehand 3D ultrasound imaging, using four methods: Ball Pivoting, Power Crust, Poisson, and VSR. Four experiments are conducted, and quantitative metrics, such as the Hausdorff distance, are introduced for quantitative assessment. The experiment results show that the performance of the proposed VSR method is the best of the four methods at reconstructing surfaces from sparse data. The VSR method can produce a close approximation to the original surface from as few as two contours, whereas the other three methods fail to do so. The experiment results also illustrate that the reproducibility of the VSR method is the best of the four methods.
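
    For the quantitative assessment step, a minimal sketch of the symmetric Hausdorff distance between a reconstructed surface and a reference surface, both sampled as point clouds (SciPy assumed; this is the generic metric only, not the paper's full evaluation pipeline):

        import numpy as np
        from scipy.spatial import cKDTree

        def hausdorff(points_a, points_b):
            """Symmetric Hausdorff distance between two (N, 3) point clouds."""
            d_ab, _ = cKDTree(points_b).query(points_a)   # nearest B point for each A point
            d_ba, _ = cKDTree(points_a).query(points_b)
            return max(d_ab.max(), d_ba.max())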

  29. 3D Imager and Method for 3D imaging

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    A 3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the reference clock.

  30. SU-E-J-55: End-To-End Effectiveness Analysis of 3D Surface Image Guided Voluntary Breath-Holding Radiotherapy for Left Breast

    Purpose: To evaluate the effectiveness of using 3D surface imaging to guide breath-holding (BH) left-side breast treatment. Methods: Two 3D surface image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 Normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercialized 3D-surface-tracking system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to the CT scan. Tangential 3D/IMRT plans were conducted. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process based on the information provided by the 3D-surface-tracking system for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results: 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to the CT scan. For >90% of fractions, based on the setup deltas from the 3D-surface-tracking system, adjustments of patient setup were needed after the initial laser-based setup. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (Normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (Normal-BH)/46% (DIBH). The necessity of re-setup was highly patient-specific for Normal-BH but highly random among patients for DIBH. Overall, a −0.8 ± 2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion: 3D surface image technology provides effective intervention to the treatment process and ensures

  31. SU-E-J-55: End-To-End Effectiveness Analysis of 3D Surface Image Guided Voluntary Breath-Holding Radiotherapy for Left Breast

    Lin, M; Feigenberg, S [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To evaluate the effectiveness of using 3D surface imaging to guide breath-holding (BH) left-side breast treatment. Methods: Two 3D surface image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 Normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercialized 3D-surface-tracking system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to the CT scan. Tangential 3D/IMRT plans were conducted. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process based on the information provided by the 3D-surface-tracking system for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results: 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to the CT scan. For >90% of fractions, based on the setup deltas from the 3D-surface-tracking system, adjustments of patient setup were needed after the initial laser-based setup. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (Normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (Normal-BH)/46% (DIBH). The necessity of re-setup was highly patient-specific for Normal-BH but highly random among patients for DIBH. Overall, a −0.8 ± 2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion: 3D surface image technology provides effective intervention to the treatment process and ensures

  32. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse Oscillation (TO) method. ... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5 ...

  33. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation
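
    A simplified stand-in for the needle-axis localization described above (a principal-component line fit on the thresholded voxels instead of the paper's 3D Hough transform, with the endpoint taken as the extreme projection along the fitted axis):

        import numpy as np

        def needle_axis_and_tip(binary_volume, voxel_size=(1.0, 1.0, 1.0)):
            """Fit a 3D line to needle voxels; return (centroid, direction, tip) in mm."""
            pts = np.argwhere(binary_volume) * np.asarray(voxel_size)
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
            direction = vt[0]                          # dominant axis of the voxel cloud
            proj = (pts - centroid) @ direction
            tip = centroid + proj.max() * direction    # deepest point along the axis
            return centroid, direction, tip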

  14. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... and removes the need to integrate custom-made electronics into the probe. A downside of row-column addressed 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays...... was investigated. An analysis of how the ghost echoes can be attenuated was presented. Attenuation of the ghost echoes was shown to be achieved by minimizing the first derivative of the apodization function. In the literature, a circularly symmetric apodization function was proposed. A new apodization layout......

  15. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  16. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  17. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Qiu Wu [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Yuchi Ming; Ding Mingyue [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Tessier, David; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8 (Canada)

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions

  18. Improvement in toxicity in high risk prostate cancer patients treated with image-guided intensity-modulated radiotherapy compared to 3D conformal radiotherapy without daily image guidance

    Image-guided radiotherapy (IGRT) facilitates the delivery of a very precise radiation dose. In this study we compare the toxicity and biochemical progression-free survival between patients treated with daily image-guided intensity-modulated radiotherapy (IG-IMRT) and 3D conformal radiotherapy (3DCRT) without daily image guidance for high risk prostate cancer (PCa). A total of 503 high risk PCa patients treated with radiotherapy (RT) and endocrine treatment between 2000 and 2010 were retrospectively reviewed. 115 patients were treated with 3DCRT, and 388 patients were treated with IG-IMRT. 3DCRT patients were treated to 76 Gy without daily image guidance, with 1–2 cm PTV margins. IG-IMRT patients were treated to 78 Gy based on daily image guidance of fiducial markers, and the PTV margins were 5–7 mm. Furthermore, the dose-volume constraints to both the rectum and bladder were changed with the introduction of IG-IMRT. The 2-year actuarial likelihood of developing grade ≥ 2 GI toxicity following RT was 57.3% in 3DCRT patients and 5.8% in IG-IMRT patients (p < 0.001). For GU toxicity the numbers were 41.8% and 29.7%, respectively (p = 0.011). On multivariate analysis, 3DCRT was associated with a significantly increased risk of developing grade ≥ 2 GI toxicity compared to IG-IMRT (p < 0.001, HR = 11.59 [CI: 6.67-20.14]). 3DCRT was also associated with an increased risk of developing GU toxicity compared to IG-IMRT. The 3-year actuarial biochemical progression-free survival probability was 86.0% for 3DCRT and 90.3% for IG-IMRT (p = 0.386). On multivariate analysis there was no difference in biochemical progression-free survival between 3DCRT and IG-IMRT. The difference in toxicity can be attributed to the combination of the IMRT technique with reduced dose to organs-at-risk, daily image guidance and margin reduction

  19. Projector-Based Augmented Reality for Intuitive Intraoperative Guidance in Image-Guided 3D Interstitial Brachytherapy

    Purpose: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. Methods and Materials: The developed system consists of a common video projector, two high-resolution charge coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. Results: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). Conclusions: The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation

  20. 3D Chaotic Functions for Image Encryption

    Pawan N. Khade

    2012-05-01

    Full Text Available This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold's cat maps for color image encryption. Here the 2D Arnold's cat map is used for image pixel scrambling and the 3D Arnold's cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security by applying both shuffling and substitution to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
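
    A minimal sketch of the 2D Arnold's cat map scrambling stage mentioned above follows. The parameters p, q and the iteration count stand in for key material and are assumptions for illustration; the key-generation and substitution stages are not shown.

      # Illustrative 2D Arnold's cat map scrambling; p, q and the iteration count
      # stand in for key material and are assumptions, not the paper's key schedule.
      import numpy as np

      def arnold_cat_scramble(img, iterations=10, p=1, q=1):
          """Scramble a square image by iterating the Arnold cat map on pixel positions."""
          n = img.shape[0]
          assert img.shape[0] == img.shape[1], "the map is defined on square images"
          out = img.copy()
          for _ in range(iterations):
              scrambled = np.empty_like(out)
              for x in range(n):
                  for y in range(n):
                      scrambled[(x + p * y) % n, (q * x + (p * q + 1) * y) % n] = out[x, y]
              out = scrambled
          return out

      # The map is area-preserving and periodic, so decryption applies the inverse
      # map (or completes the period) to restore the original pixel arrangement.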

  1. The impact of 3D image guided prostate brachytherapy on therapeutic ratio: the Quebec University Hospital experience

    Purpose: to evaluate the impact of adaptive image-guided brachytherapy on therapeutic outcome and toxicity in prostate cancer. Materials and methods: the first 1110 patients treated at the C.H.U.Q.-l'Hotel-Dieu de Quebec were divided into five groups depending on the technique used for the implantation, the latest being intra-operative treatment planning. Biochemical disease-free survival (5-b.D.F.S.), toxicities and dosimetric parameters were compared between the groups. Results: 5-b.D.F.S. (A.S.T.R.O. + Houston) was 88.5% and 90.5% for the whole cohort. The use of intra-operative treatment planning resulted in better dosimetric parameters. Clinically, this resulted in a decreased use of urethral catheterization, from 18.8% in group 1 to 5.2% in group 5, and in a reduction in severe acute urinary side effects (21.3 vs 33.3%, P = 0.01) when compared with pre-planning. There were also fewer late gastrointestinal side effects (group 5 vs 1: 26.6 vs 43.2%, P < 0.05). Finally, when compared with pre-planning, intra-operative treatment planning was associated with a smaller reduction between the planned D90 and the dose calculated at the CT scan 1 month after the implant (38 vs 66 Gy). Conclusion: the evolution of the prostate brachytherapy technique toward intra-operative treatment planning allowed dosimetric gains which resulted in significant clinical benefits by increasing the therapeutic ratio, mainly through decreased urinary toxicity. A longer follow-up will answer the question whether there is an impact on 5-b.D.F.S. (authors)

  2. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was
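
    The core quantity of the study, the probability P that a biopsy core with Gaussian needle-delivery error samples a given tumor, can be illustrated by Monte Carlo integration over a binary tumor mask, as in the sketch below. The isotropic split of the RMSE into per-axis standard deviations and the voxel-spacing handling are assumptions for illustration, not the authors' exact procedure.

      # Monte Carlo illustration of P: the chance that a needle aimed at a target
      # voxel, with Gaussian delivery error, lands inside a binary tumor mask.
      # The isotropic RMSE-to-sigma split and voxel-spacing handling are assumptions.
      import numpy as np

      def sampling_probability(tumor_mask, target_vox, rmse_mm, voxel_mm,
                               n_samples=200_000, seed=1):
          """Estimate P(needle tip lands in tumor_mask) around an aimed target voxel."""
          rng = np.random.default_rng(seed)
          sigma_mm = rmse_mm / np.sqrt(3.0)               # per-axis sigma from 3D RMSE
          offsets_vox = rng.normal(0.0, sigma_mm, size=(n_samples, 3)) / np.asarray(voxel_mm)
          samples = np.rint(np.asarray(target_vox) + offsets_vox).astype(int)
          in_bounds = np.all((samples >= 0) & (samples < tumor_mask.shape), axis=1)
          hits = tumor_mask[tuple(samples[in_bounds].T)]
          return hits.sum() / n_samples                   # out-of-bounds samples count as misses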

  3. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    Martin, Peter R., E-mail: pmarti46@uwo.ca [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Cool, Derek W. [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada and Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Romagnoli, Cesare [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Fenster, Aaron [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Ward, Aaron D. [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7 (Canada)

    2014-07-15

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each

  4. 3D Reconstruction of NMR Images

    Peter Izak; Milan Smetana; Libor Hargas; Miroslav Hrianka; Pavol Spanik

    2007-01-01

    This paper introduces an experiment in the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant program, which is a part of LabVIEW, was chosen.
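
    A minimal sketch of the marching cubes idea referred to above follows, using scikit-image as a stand-in for the LabVIEW Vision Assistant toolchain; the iso-level default is an assumption.

      # Minimal marching-cubes surface extraction from a 3D NMR volume, using
      # scikit-image as a stand-in for the LabVIEW Vision Assistant toolchain;
      # the iso-level default is an assumption.
      import numpy as np
      from skimage import measure

      def reconstruct_surface(nmr_volume, iso_level=None):
          """Extract a triangulated iso-surface (vertices, faces) from a 3D volume."""
          if iso_level is None:
              iso_level = float(nmr_volume.mean())        # crude default threshold
          verts, faces, normals, values = measure.marching_cubes(nmr_volume, level=iso_level)
          return verts, faces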

  5. 3D ultrafast ultrasound imaging in vivo

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  6. 3D-printed guiding templates for improved osteosarcoma resection

    Ma, Limin; Zhou, Ye; Zhu, Ye; Lin, Zefeng; Wang, Yingjun; Zhang, Yu; Xia, Hong; Mao, Chuanbin

    2016-03-01

    Osteosarcoma resection is challenging due to the variable location of tumors and their proximity to surrounding tissues. It also carries a high risk of postoperative complications. To overcome the challenge of precise osteosarcoma resection, computer-aided design (CAD) was used to design patient-specific guiding templates for osteosarcoma resection on the basis of computed tomography (CT) scans and magnetic resonance imaging (MRI) of the osteosarcoma of human patients. A 3D printing technique was then used to fabricate the guiding templates. The guiding templates were used to guide the osteosarcoma surgery, leading to more precise resection of the tumorous bone and implantation of the bone implants, less blood loss, shorter operation time and reduced radiation exposure during the operation. Follow-up studies show that the patients recovered well, reaching a mean Musculoskeletal Tumor Society score of 27.125.

  7. Dosimetric analysis of 3D image-guided HDR brachytherapy planning for the treatment of cervical cancer: is point A-based dose prescription still valid in image-guided brachytherapy?

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful

    2011-01-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and compare dose coverage of high-risk clinical target volume (HRCTV) to traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). Brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p IGBT in HDR cervical cancer treatment needs advanced concept of evaluation in dosimetry with clinical outcome data about whether this approach improves local control and/or decreases toxicities. PMID:20488690
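
    The EQD2 normalization mentioned above follows the linear-quadratic model; a minimal sketch is given below. The example alpha/beta values are the commonly used defaults and are assumptions here, not prescriptions from the study.

      # EQD2 conversion under the linear-quadratic model; the example alpha/beta
      # values are common defaults and are assumptions here, not the study's data.
      def eqd2(dose_per_fraction, n_fractions, alpha_beta):
          """Equivalent dose in 2 Gy fractions for a given fractionation scheme."""
          total_dose = dose_per_fraction * n_fractions
          return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

      # e.g. five HDR fractions of 6 Gy with alpha/beta = 10 Gy for tumor:
      # eqd2(6.0, 5, 10.0) -> 40.0 Gy EQD2, which is then summed with the
      # external-beam component for the HRCTV D90.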

  8. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro with focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal apoptosis being necessary and sufficient for initiating lumen formation, but cell polarization being the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicates that these acini grew faster than the cells comprising it. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  9. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
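
    The global registration stage above is driven by mutual information; a minimal histogram-based sketch of that similarity measure is given below. The bin count is an assumed parameter and the surrounding transform optimization loop is not shown.

      # Histogram-based mutual information between two equally shaped images, the
      # similarity driving the global registration stage; the bin count is an
      # assumption and the transform optimization loop is not shown.
      import numpy as np

      def mutual_information(fixed, moving, bins=64):
          """MI (in nats) computed from the joint intensity histogram."""
          joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)             # marginal of the fixed image
          py = pxy.sum(axis=0, keepdims=True)             # marginal of the moving image
          nonzero = pxy > 0
          return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))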

  10. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J. [Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Bioinformatics, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Computer Science, Rutgers, State University of New Jersey, Piscataway, New Jersey 08854 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States)

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  11. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant program, which is a part of LabVIEW, was chosen.

  12. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
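
    The SOFI principle described above can be illustrated with its simplest case: the second-order cumulant of a blinking-fluorophore image sequence is the per-pixel temporal variance, which sharpens the effective point spread function. The sketch below shows only this toy version; the paper's multiplane 3D cross-cumulants are not reproduced.

      # Toy second-order SOFI: the order-2 cumulant of a blinking-fluorophore
      # sequence is the per-pixel temporal variance, which sharpens the effective
      # PSF; the paper's multiplane 3D cross-cumulants are not reproduced here.
      import numpy as np

      def sofi2(image_stack):
          """Second-order SOFI image from a (frames, y, x) stack."""
          stack = np.asarray(image_stack, dtype=float)
          return np.var(stack, axis=0)                    # cumulant of order 2 = variance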

  13. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  14. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K. A.; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation while relatively little attention has concerned the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-ß2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties. PMID:27022420

  15. Miniaturized 3D microscope imaging system

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the biological specimen's image can be captured in a single shot for ease of use. With the light field raw data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm to precisely distinguish depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance the pixel utilization efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different color fluorescence particles separated by a cover glass within a 600 µm range, and show its focal stacks and 3-D positions.

  16. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small-cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  17. 3D Stereo Visualization for Mobile Robot Tele-Guide

    Livatino, Salvatore

    2006-01-01

    learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work...... intends to contribute to this aspect by investigating stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The purpose of this work is also to investigate how user performance may vary when employing different display......The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system...

  18. ICER-3D Hyperspectral Image Compression Software

    Xie, Hua; Kiely, Aaron; Klimesh, matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
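
    The 3D wavelet decomposition underlying ICER-3D-style compression can be illustrated with PyWavelets, as in the sketch below; it is a stand-in only and does not reproduce ICER-3D's context modeling, entropy coding or error-containment partitions. The wavelet, level and kept-coefficient fraction are assumptions.

      # 3D wavelet decomposition of a hyperspectral cube with PyWavelets, a
      # stand-in illustration only: ICER-3D's context modeling, entropy coding and
      # error-containment partitions are not reproduced. Wavelet, level and the
      # kept-coefficient fraction are assumptions.
      import numpy as np
      import pywt

      def wavelet_3d_approximation(cube, wavelet="db2", level=2, keep_fraction=0.05):
          """Zero all but the largest 3D wavelet coefficients and reconstruct."""
          coeffs = pywt.wavedecn(cube, wavelet=wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
          arr[np.abs(arr) < threshold] = 0.0
          kept = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
          return pywt.waverecn(kept, wavelet=wavelet)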

  19. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  20. Acquisition and applications of 3D images

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringes method and its analysis, through to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  1. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and reconstruction of objects' surfaces. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation toward the force's direction. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a Forward Difference Algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and computational complexity of the algorithm are determined. The results obtained, along with the advantages and disadvantages of the method, are discussed at the end of this paper.
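
    As a toy illustration of the heat-equation evolution used above, the sketch below performs one explicit diffusion step on a 3D grid; a full implementation would evolve the enclosing surface with the centripetal force and penalty term, which are not shown. The time step is chosen to respect the usual explicit stability bound.

      # One explicit heat-equation (diffusion) step on a 3D grid, the kind of
      # evolution applied to the enclosing surfaces above; the centripetal force
      # and penalty term of the full method are not shown. With unit spacing the
      # explicit scheme is stable for dt <= 1/6.
      import numpy as np

      def heat_step(u, dt=0.1):
          """Advance u_t = laplacian(u) by one explicit step (periodic boundaries)."""
          lap = (-6.0 * u
                 + np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
                 + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)
                 + np.roll(u, 1, axis=2) + np.roll(u, -1, axis=2))
          return u + dt * lap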

  2. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of 3D camera and multiple epipolar constraints. We assume that baselines between lenses in 3D camera and intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of 3D camera, are robustly tracked via Kanade-Lukas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from Fundamental matrix calculated using normalized 8-point algorithm with RANSAC scheme. Then, we determine scale factor of translation matrix by d-motion. This is required because the camera motion obtained from Essential matrix is up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D camera, and fine surveillance systems which not only need depth information, but also camera motion parameters in real-time.
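
    A hedged sketch of the relative-pose step described above, using OpenCV as a stand-in for the authors' pipeline (KLT tracking, RANSAC Essential-matrix estimation, pose recovery), is given below. The camera intrinsics K and the RANSAC threshold are assumed inputs, and the disparity-based scale recovery is only indicated in a comment.

      # Relative camera pose between two frames with OpenCV, as a stand-in for the
      # pipeline above: KLT tracking, RANSAC Essential-matrix estimation and pose
      # recovery. The intrinsics K and RANSAC threshold are assumed inputs; the
      # translation is up to scale, to be fixed later from disparity-derived depth.
      import cv2
      import numpy as np

      def relative_pose(prev_gray, curr_gray, prev_pts, K):
          """prev_pts: float32 corners of shape (N, 1, 2); returns rotation R and unit translation t."""
          curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
          good = status.ravel() == 1
          p0, p1 = prev_pts[good], curr_pts[good]
          E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
          return R, t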

  3. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    Cambridge : The Electromagnetics Academy, 2010, s. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  4. Feasibility of 3D harmonic contrast imaging

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suit

  5. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc, offering a complete view of the transport paths in the membrane.
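
    The layer-wise porosity measurement described above reduces, once the reconstruction is segmented into pore and solid voxels, to a per-slice mean of the binary pore mask; a minimal sketch follows. Segmentation itself and the average pore-size estimation are not shown.

      # Layer-wise porosity from a segmented 3D reconstruction: with True marking
      # pore voxels, the porosity of each slice is simply the slice mean of the
      # binary mask. Segmentation and pore-size estimation are not shown.
      import numpy as np

      def porosity_profile(pore_mask, axis=0):
          """Fraction of pore voxels in each slice taken orthogonal to `axis`."""
          other_axes = tuple(i for i in range(pore_mask.ndim) if i != axis)
          return pore_mask.mean(axis=other_axes)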

  6. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system, that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of......, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and finally, different presentation forms are discussed....... treated individually. A detailed treatment of various lens distortions is required, in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  7. Backhoe 3D "gold standard" image

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges[1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future more relevant military targets and their image development. The seedling team produced a public release data which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  8. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90ms (11Hz). This is suitable to track tumors moving due to respiration (~0.3Hz) or heartbeat (~1Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  9. 3D in Photoshop The Ultimate Guide for Creative Professionals

    Gee, Zorana

    2010-01-01

    This is the first book of its kind that shows you everything you need to know to create or integrate 3D into your designs using Photoshop CS5 Extended. If you are completely new to 3D, you'll find the great tips and tricks in 3D in Photoshop invaluable as you get started. There is also a wealth of detailed technical insight for those who want more. Written by the true experts - Adobe's own 3D team - and with contributions from some of the best and brightest digital artists working today, this reference guide will help you to create a comprehensive workflow that suits your specific needs. Along

  10. Metrological characterization of 3D imaging devices

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways for the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason several national and international organizations in the last ten years have been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art about the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of spheres rigidly connected, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.

  11. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Peter Cloetens

    2011-05-01

    Full Text Available Three-dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic-beam micro-tomography provides several new features. Along with an enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  12. 3D Model Assisted Image Segmentation

    Jayawardena, Srimal; Hutter, Marcus

    2012-01-01

    The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation...

  13. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D...... ultrasound imaging result in expensive systems, which limits the more wide-spread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost...... capable of producing 62+62-element row-column addressed CMUT arrays with negligible charging issues. The arrays include an integrated apodization, which reduces the ghost echoes produced by the edge waves in such arrays by 15.8 dB. The acoustical cross-talk is measured on fabricated arrays, showing a 24 d...

  14. Compact multi-projection 3D display system with light-guide projection.

    Lee, Chang-Kun; Park, Soon-gi; Moon, Seokil; Hong, Jong-Young; Lee, Byoungho

    2015-11-01

    We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the characteristics of the image for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of such a light-guide to simplify the analysis of the image distortion in the light-guide. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and the material. The distortion-compensated view images are represented on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified and a ten-view 3D display system with a minimized structure is implemented. PMID:26561163

  15. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Purpose: This study evaluated a new probabilistic non-rigid registration method, coherent point drift (CPD), for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The 4DCT image datasets from Dir-lab (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs at respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional element with three vertices, and each vertex has three degrees of freedom. One of the main features of lung motion is velocity coherence, so the vertices forming the lung mesh should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we applied coherent point drift to calculate the nonlinear displacement of the vertices between different expiratory phases. Results: The method was applied to the images of the 10 patients in the Dir-lab dataset. The normal distributions of the vertices relative to the origin were calculated for each expiratory stage. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). The method reliably recovers the displacement vectors and the degrees of freedom of the lung structure for radiotherapy. Conclusions: We evaluated a new 3D registration method for the set of vertices of the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.
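
    A CPD registration of this kind can be prototyped with the third-party pycpd package (an assumption: it is not the authors' implementation). The sketch below registers synthetic stand-ins for surface vertices at two expiratory phases and reports the mean residual; the array names and the alpha/beta smoothness parameters are illustrative.

```python
import numpy as np
# pip install pycpd  -- third-party coherent point drift implementation, assumed available.
from pycpd import DeformableRegistration

# Illustrative stand-ins for lung-surface vertices at phases T0 and T50 (mm).
rng = np.random.default_rng(1)
vertices_t0 = rng.uniform(0, 100, size=(400, 3))
smooth_shift = 3.0 * np.sin(vertices_t0[:, :1] / 30.0)      # coherent, smooth motion
vertices_t50 = vertices_t0 + smooth_shift + rng.normal(scale=0.2, size=(400, 3))

# CPD regularizes the displacement field so nearby vertices move coherently
# (alpha ~ smoothness weight, beta ~ width of the Gaussian smoothing kernel).
reg = DeformableRegistration(X=vertices_t50, Y=vertices_t0, alpha=2.0, beta=10.0)
moved_t0, _ = reg.register()

displacement = moved_t0 - vertices_t0            # per-vertex displacement vectors
residual = np.linalg.norm(moved_t0 - vertices_t50, axis=1)
print(f"mean registration residual: {residual.mean():.2f} mm")
```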

  16. Image-guided installation of 3D-printed patient-specific implant and its application in pelvic tumor resection and reconstruction surgery.

    Chen, Xiaojun; Xu, Lu; Wang, Yiping; Hao, Yongqiang; Wang, Liao

    2016-03-01

    Nowadays, the diagnosis and treatment of pelvic sarcoma pose a major surgical challenge for reconstruction in orthopedics. With the development of manufacturing technology, metal 3D-printed customized implants have revolutionized limb-salvage resection and reconstruction surgery. However, tumor resection is not without risk, and precise implant placement is very difficult owing to the anatomic intricacies of the pelvis. In this study, a surgical navigation system including an implant calibration algorithm has been developed, so that the surgical instruments and the 3D-printed customized implant can be tracked and rendered on the computer screen in real time, minimizing the risks and improving the precision of the surgery. Both the phantom experiment and the pilot clinical case study demonstrated the feasibility of our computer-aided surgical navigation system. According to the accuracy evaluation experiment, the precision of customized implant installation can be improved three to five times (TRE: 0.75±0.18 mm) compared with non-navigated implant installation after the guided osteotomy (TRE: 3.13±1.28 mm), which is sufficient to meet the clinical requirements of pelvic reconstruction. More clinical trials will be conducted in future work to validate the reliability and efficiency of our navigation system. PMID:26652978
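
    To make the reported TRE figures concrete, here is a minimal, hedged sketch of how a rigid fiducial-based registration and a target registration error could be computed with the Kabsch/SVD method; it is a generic illustration, not the authors' navigation system, and all point sets are synthetic.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(2)
# Fiducials known in implant/CT space and the same fiducials localized intraoperatively.
fiducials_ct = rng.uniform(0, 80, size=(6, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([10.0, -5.0, 2.0])
fiducials_op = fiducials_ct @ R_true.T + t_true + rng.normal(scale=0.2, size=(6, 3))

R, t = rigid_fit(fiducials_ct, fiducials_op)

# Target registration error at a point *not* used for the fit (e.g. an osteotomy target).
target_ct = np.array([40.0, 40.0, 40.0])
target_true = R_true @ target_ct + t_true
tre = np.linalg.norm((R @ target_ct + t) - target_true)
print(f"TRE at target: {tre:.2f} mm")
```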

  17. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.
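
    As a hedged, simplified illustration of forming a 3-D image from sparse-aperture phase-history data, the sketch below uses plain far-field backprojection (phase-compensated summation onto a voxel grid); it is not the 3-D target-grid matched-filter method described above, and the geometry, frequencies and scatterer positions are synthetic.

```python
import numpy as np

c = 3e8
freqs = np.linspace(1.5e9, 2.5e9, 64)                  # sampled frequencies (Hz)
az = np.deg2rad(np.linspace(-20, 20, 24))              # sparse azimuth samples
el = np.deg2rad(np.linspace(5, 25, 10))                # sparse elevation samples
k = 4 * np.pi * freqs / c                              # two-way wavenumbers

# Unit look directions over the sparse 2-D aperture.
AZ, EL = np.meshgrid(az, el, indexing="ij")
looks = np.stack([np.cos(EL) * np.cos(AZ),
                  np.cos(EL) * np.sin(AZ),
                  np.sin(EL)], axis=-1).reshape(-1, 3)  # (n_looks, 3)

# Synthetic far-field phase history of two point scatterers.
scatterers = np.array([[0.5, 0.3, 0.0], [-0.6, -0.2, 0.4]])
proj = looks @ scatterers.T                             # (n_looks, n_scat)
data = np.exp(-1j * k[:, None, None] * proj[None, :, :]).sum(-1)   # (n_freq, n_looks)

# Image formation: backproject each look by removing the expected phase per voxel.
axis = np.linspace(-1.0, 1.0, 21)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
voxels = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
ranges = voxels @ looks.T                               # (n_vox, n_looks)
image = np.zeros(len(voxels), dtype=complex)
for a in range(looks.shape[0]):
    image += (data[:, a, None] * np.exp(1j * np.outer(k, ranges[:, a]))).sum(axis=0)
print("brightest voxel:", voxels[np.abs(image).argmax()])   # on or near a scatterer
```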

  18. 3D Buildings Extraction from Aerial Images

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach is that each building can be processed individually, so the parameters of the feature extraction can be tuned more precisely for each area. In the early stage, the presented technique extracts line segments only inside areas specified manually. A rooftop hypothesis is then used to determine, from the set of extracted lines and corners, a subset of quadrangles that could form building roofs. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This allows an accurate selection of building roofs, removing false-positive ones, and identifies their global 3D coordinates given the camera internal parameters and coordinates. The last step of the image matching is based on geometrical constraints rather than traditional correlation; correlation is applied only in some highly restricted areas in order to refine coordinates, significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of Milan's aerial images and shows highly accurate results.
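
    The line-segment extraction inside a manually specified area can be prototyped with standard OpenCV primitives. The following hedged sketch runs Canny edge detection and a probabilistic Hough transform inside a user-given rectangle; the ROI coordinates, thresholds and file name are illustrative, and this is not the authors' pipeline.

```python
import cv2
import numpy as np

# Illustrative inputs: one aerial frame and a manually drawn rectangle around a building.
image = cv2.imread("aerial_frame.tif", cv2.IMREAD_GRAYSCALE)
x, y, w, h = 1200, 800, 300, 250          # manually specified building area (pixels)
roi = image[y:y + h, x:x + w]

edges = cv2.Canny(roi, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=30, maxLineGap=5)

# Shift segment endpoints back to full-image coordinates for later roof hypotheses.
roof_candidates = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        roof_candidates.append(((x1 + x, y1 + y), (x2 + x, y2 + y)))
print(f"{len(roof_candidates)} candidate roof edges in the selected area")
```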

  19. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    P. Faltin

    2010-01-01

    The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed with a rigid endoscope, which is usually guided close to the bladder wall. This results in a very limited field of view; the difficulty of navigation is aggravated by the use of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. These points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
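
    A minimal two-view version of such a pipeline (feature matching, robust epipolar estimation, triangulation) can be sketched with OpenCV. ORB is used here in place of SURF because SURF is non-free in stock OpenCV builds, the intrinsics are assumed known for the metric step (the paper itself stays projective), and the file names and camera matrix are illustrative.

```python
import cv2
import numpy as np

img1 = cv2.imread("cysto_frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cysto_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Feature extraction and matching (ORB here; the paper uses SURF).
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Robust epipolar geometry (RANSAC) rejects mismatched point pairs.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

# With assumed intrinsics K, recover the relative pose and triangulate scene points.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # illustrative values
E = K.T @ F @ K
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_4d[:3] / points_4d[3]).T
print(points_3d.shape, "reconstructed scene points")
```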

  20. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  1. Helical CT scanner - 3D imaging and CT fluoroscopy

    It has been over twenty years since the introduction of X-ray CT. In recent years, the topic of helical scanning has dominated the area of technical development. With helical scanning now being used routinely, the traditional concept of X-ray CT as a device for obtaining axial images of the body in slices has given way to that of a device for obtaining images in volumes. For instance, the ability of helical scanning to acquire sequential images along the body axis makes it ideal for creating three-dimensional (3-D) images, and has in fact led to the use of 3-D images in clinical practice. In addition, with helical scanning, imaging of organs such as the liver or lung can be performed in several tens of seconds, as opposed to the few minutes it used to take. This has resulted not only in less time spent under constraint by the patient during imaging but also in changes in diagnostic methods. The question 'Would it be possible to perform reconstruction while scanning and to see the resulting images in real time?' is another issue which has been taken up, and it has been answered by CT fluoroscopy. This makes it possible to see CT images in real time during sequential scanning, and from this development, applications such as CT-guided biopsy and CT-navigated surgery have been investigated and realized. Other possibilities exist to create a whole new series of diagnostic methods and results. (author)

  2. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  3. 3D Guided Wave Motion Analysis on Laminated Composites

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed in the end.
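
    For a one-dimensional line of sensing points, the frequency-wavenumber analysis mentioned here amounts to a 2-D Fourier transform of the space-time wavefield. The sketch below shows that step on synthetic data; the propagating mode and sampling parameters are illustrative stand-ins, not EFIT output.

```python
import numpy as np

# Synthetic wavefield u(x, t): a single guided mode with phase velocity cp.
dx, dt = 1e-3, 1e-7                 # 1 mm spatial, 0.1 us temporal sampling
x = np.arange(0, 0.2, dx)           # 200 mm scan line
t = np.arange(0, 200e-6, dt)
f0, cp = 200e3, 2000.0              # 200 kHz tone, 2000 m/s phase velocity
u = np.sin(2 * np.pi * f0 * (t[None, :] - x[:, None] / cp))

# 2-D FFT over (space, time) gives the frequency-wavenumber spectrum.
U = np.fft.fftshift(np.fft.fft2(u))
k_axis = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))     # cycles per metre
f_axis = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))     # Hz

# The spectral peak should sit near k = f0 / cp, i.e. on the mode's dispersion curve.
ki, fi = np.unravel_index(np.abs(U).argmax(), U.shape)
print(f"peak at f = {abs(f_axis[fi]) / 1e3:.0f} kHz, "
      f"k = {abs(k_axis[ki]):.0f} 1/m (expected {f0 / cp:.0f} 1/m)")
```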

  4. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development worldwide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image-plane dynamics and robust image-based robot systems capable of manipulating moving objects still need further research. Researc...

  5. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  6. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  7. Perception of detail in 3D images

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  8. Performance assessment of 3D surface imaging technique for medical imaging applications

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitalize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin color, tone, texture and shape properties, and across ambient lighting, are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture and color.
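
    One common ingredient of such an assessment is scanning a flat reference object repeatedly and reporting its flatness error and the scan-to-scan repeatability. The sketch below shows that computation (plane fit by SVD, RMS residual per scan, point-wise spread across scans) on synthetic data; it is a generic illustration, not the assessment protocol proposed in the paper.

```python
import numpy as np

def plane_rms(points):
    """RMS distance of points from their best-fit plane (normal via SVD)."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]
    return np.sqrt(((centered @ normal) ** 2).mean())

rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), -1).reshape(-1, 2)

# Ten repeated scans of a flat plate: per-point noise plus a small per-scan offset (mm).
scans = np.array([np.c_[grid, 0.05 * rng.normal(size=len(grid))
                        + 0.02 * rng.normal()] for _ in range(10)])

accuracy = [plane_rms(s) for s in scans]                  # flatness / form error
repeatability = scans[:, :, 2].std(axis=0).mean()         # point-wise spread over scans
print(f"mean RMS flatness error: {np.mean(accuracy):.3f} mm")
print(f"mean repeatability     : {repeatability:.3f} mm")
```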

  9. 3D Image Synthesis for B-Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). Definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm is presented, based on the AFD-matrix concept, for converting an object in 3D space to a 3D image in 3D discrete space.

  10. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables the spectral information and the 3D spatial information of an incoherently illuminated or self-luminous object to be obtained simultaneously. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  11. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D

  12. iClone 431 3D Animation Beginner's Guide

    McCallum, MD

    2011-01-01

    This book is a part of the Beginner's guide series, wherein you will quickly start doing tasks with precise instructions. Then the tasks will be followed by explanation and then a challenging task or a multiple choice question about the topic just covered. Do you have a story to tell or an idea to illustrate? This book is aimed at film makers, video producers/compositors, vxf artists or 3D artists/designers like you who have no previous experience with iClone. If you have that drive inside you to entertain people via the internet on sites like YouTube or Vimeo, create a superb presentation vid

  13. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  14. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    Beier, J

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with a particular emphasis on pulmonary themes. For a multitude of purposes, the developed methods and procedures can be transferred directly to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the softw...

  15. A 3D image analysis tool for SPECT imaging

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed with both the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
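
    The intensity-thresholding branch of such a tool can be illustrated in a few lines: threshold the SPECT volume, keep the largest connected component as the gastric region, and convert the voxel count to a volume. The threshold fraction, voxel size and synthetic volume below are illustrative; this is not the authors' software.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
# Illustrative stand-in for a reconstructed SPECT volume (counts per voxel).
volume = rng.poisson(2.0, size=(64, 64, 64)).astype(float)
volume[20:44, 18:40, 22:46] += 30.0          # synthetic "stomach" region

# Interactive/semi-automated step: threshold at a fraction of the maximum intensity.
mask = volume > 0.4 * volume.max()

# Keep the largest connected component as the delineated organ boundary.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
organ = labels == (1 + int(np.argmax(sizes)))

voxel_volume_ml = 0.4 ** 3                   # assumed 4 mm isotropic voxels -> 0.064 ml
print(f"segmented gastric volume: {organ.sum() * voxel_volume_ml:.1f} ml")
```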

  16. Light field display and 3D image reconstruction

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has a finer resolution than the density of the arrayed lenses, and it is not necessary to adjust the lens array plate to the flat display on which the light field data are displayed.
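
    For a grid of sub-aperture views, the refocusing operation described here (computed in the real/spatial domain rather than the Fourier domain) reduces to shifting each view in proportion to its lens offset and averaging. The sketch below shows that shift-and-add step on an illustrative 5x5 array of synthetic views; it is a generic demonstration, not the author's processing method.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(views, alpha):
    """Shift-and-add refocusing of an (U, V, H, W) grid of sub-aperture views.

    alpha scales the per-view shift, i.e. it selects the synthetic focal plane;
    alpha = 0 reproduces the plain average (focus at the reference plane)."""
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += shift(views[u, v], (alpha * (u - cu), alpha * (v - cv)), order=1)
    return out / (U * V)

# Illustrative light field: 5x5 views of a bright square with 1 px of parallax per view.
views = np.zeros((5, 5, 64, 64))
for u in range(5):
    for v in range(5):
        views[u, v, 20 + u:40 + u, 20 + v:40 + v] = 1.0

sharp = refocus(views, alpha=-1.0)   # shift opposite to the parallax -> square in focus
blurred = refocus(views, alpha=0.0)
print("in-focus edge sharpness :", float(np.abs(np.diff(sharp, axis=0)).max()))
print("defocused edge sharpness:", float(np.abs(np.diff(blurred, axis=0)).max()))
```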

  17. 3D Imaging with Structured Illumination for Advanced Security Applications

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  18. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas; Bai, Li

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized...

  19. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  20. 3D Interpolation Method for CT Images of the Lung

    Noriaki Asada

    2003-06-01

    A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-sectional images, which are collected during the pulsation of the heart. The motion of the heart is therefore a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized with the beating of the heart. If no special techniques are used when taking the CT images, there are discontinuities among neighboring CT images due to the heartbeat. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. The CT images are thus geometrically transformed into optimal CT images that best fit the standard heart. Since a correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for interpolating the transformed images. An attempt to reconstruct a 3-D lung image without discontinuities by a series of such operations is shown. Additionally, the same geometrical transformation method applied to the original projection images is proposed as a more advanced method.

  1. Prospective comparison of T2w-MRI and dynamic-contrast-enhanced MRI, 3D-MR spectroscopic imaging or diffusion-weighted MRI in repeat TRUS-guided biopsies

    Portalez, Daniel [Clinique Pasteur, 45, Department of Radiology, Toulouse (France); Rollin, Gautier; Mouly, Patrick; Jonca, Frederic; Malavaud, Bernard [Hopital de Rangueil, Department of Urology, Toulouse Cedex 9 (France); Leandri, Pierre [Clinique Saint Jean, 20, Department of Urology, Toulouse (France); Elman, Benjamin [Clinique Pasteur, 45, Department of Urology, Toulouse (France)

    2010-12-15

    To compare T2-weighted MRI and functional MRI techniques in guiding repeat prostate biopsies. Sixty-eight patients with a history of negative biopsies, negative digital rectal examination and elevated PSA were imaged before repeat biopsies. Dichotomous criteria were used with visual validation of T2-weighted MRI, dynamic contrast-enhanced MRI and literature-derived cut-offs for 3D-spectroscopy MRI (choline-creatine-to-citrate ratio >0.86) and diffusion-weighted imaging (ADC × 10³ mm²/s < 1.24). For each segment and MRI technique, results were rendered as being suspicious/non-suspicious for malignancy. Sextant biopsies, transition zone biopsies and at least two additional biopsies of suspicious areas were taken. In the peripheral zones, 105/408 segments and in the transition zones 19/136 segments were suspicious according to at least one MRI technique. A total of 28/68 (41.2%) patients were found to have cancer. Diffusion-weighted imaging exhibited the highest positive predictive value (0.52) compared with T2-weighted MRI (0.29), dynamic contrast-enhanced MRI (0.33) and 3D-spectroscopy MRI (0.25). Logistic regression showed the probability of cancer in a segment increasing 12-fold when T2-weighted and diffusion-weighted imaging MRI were both suspicious (63.4%) compared with both being non-suspicious (5.2%). The proposed system of analysis and reporting could prove clinically relevant in the decision whether to repeat targeted biopsies. (orig.)

  3. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume; Dufait, Remi; Jensen, Jørgen Arendt

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. ...

  4. Preliminary examples of 3D vector flow imaging

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev;

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental...... ultrasound scanner SARUS on a flow rig system with steady flow. The vessel of the flow-rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal...... acquisition as opposed to magnetic resonance imaging (MRI). The results demonstrate that the 3D TO method is capable of performing 3D vector flow imaging....

  5. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  6. Diffractive optical element for creating visual 3D images.

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature: the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D-to-3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, well protected against counterfeiting, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  7. 3-D capacitance density imaging system

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  8. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  9. 3D-LSI technology for image sensor

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  10. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    2010-01-01

    Vol. 6, No. 7 (2010), pp. 617-620. ISSN 1931-7360. R&D Projects: GA ČR GA102/09/0314. Institutional research plan: CEZ:AV0Z20650511. Keywords: reconstruction methods; magnetic resonance imaging. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  11. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  12. Acoustic 3D imaging of dental structures

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  13. Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation

    Current transperineal prostate brachytherapy uses transrectal ultrasound (TRUS) guidance and a template at a fixed position to guide needles along parallel trajectories. However, pubic arch interference (PAI) with the implant path obstructs part of the prostate from being targeted by the brachytherapy needles along parallel trajectories. To solve the PAI problem, some investigators have explored other insertion trajectories than parallel, i.e., oblique. However, parallel trajectory constraints in the current brachytherapy procedure do not allow oblique insertion. In this paper, we describe a robot-assisted, three-dimensional (3D) TRUS guided approach to solve this problem. Our prototype consists of a commercial robot, and a 3D TRUS imaging system including an ultrasound machine, image acquisition apparatus and 3D TRUS image reconstruction and display software. In our approach, we use the robot as a movable needle guide, i.e., the robot positions the needle before insertion, but the physician inserts the needle into the patient's prostate. In a later phase of our work, we will include robot insertion. By unifying the robot, ultrasound transducer, and the 3D TRUS image coordinate systems, the position of the template hole can be accurately related to the 3D TRUS image coordinate system, allowing accurate and consistent insertion of the needle via the template hole into the targeted position in the prostate. The unification of the various coordinate systems includes two steps, i.e., 3D image calibration and robot calibration. Our testing of the system showed that the needle placement accuracy of the robot system at the 'patient's' skin position was 0.15 mm±0.06 mm, and the mean needle angulation error was 0.07 deg. The fiducial localization error (FLE) in localizing the intersections of the nylon strings for image calibration was 0.13 mm, and the FLE in localizing the divots for robot calibration was 0.37 mm. The fiducial registration error for image calibration was 0
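
    The coordinate-system unification described here can be pictured as composing homogeneous transforms so that a template hole, known in robot coordinates, can be expressed in 3D TRUS image coordinates. The sketch below shows that composition with purely illustrative calibration matrices; it is not the authors' calibration procedure.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t (mm)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

# Illustrative calibration results (these would come from image and robot calibration).
T_image_from_probe = homogeneous(rot_z(2.0), [0.5, -0.3, 40.0])       # 3D TRUS calibration
T_probe_from_robot = homogeneous(rot_z(-90.0), [120.0, 15.0, -30.0])  # robot calibration
T_image_from_robot = T_image_from_probe @ T_probe_from_robot

# A template hole position (and an oblique needle direction) in robot coordinates.
hole_robot = np.array([10.0, 25.0, 0.0, 1.0])
needle_dir_robot = np.array([0.0, np.sin(np.deg2rad(10)), np.cos(np.deg2rad(10)), 0.0])

hole_image = T_image_from_robot @ hole_robot
needle_dir_image = T_image_from_robot @ needle_dir_robot
print("hole in image coords (mm):", np.round(hole_image[:3], 2))
print("needle direction in image coords:", np.round(needle_dir_image[:3], 3))
```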

  14. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling the 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain objects, such as human faces, molecular structures, etc., is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  15. A 3D Model Reconstruction Method Using Slice Images

    LI Hong-an; KANG Bao-sheng

    2013-01-01

    Aiming at obtaining a high-accuracy 3D model from slice images, a new model reconstruction method using slice images is proposed. To extract the outermost contours from the slice images, an improved GVF-Snake model with an optimized force field, combined with a ray method, is employed. The 3D model is then reconstructed by contour connection using an improved shortest-diagonal method and a judgment function for contour fracture. The results show that the accuracy of the reconstructed 3D model is improved.

  16. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Exactly capturing three dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondences and resolving the motion parameters. Finally, the experimental results of acquiring the motion parameters of the objects with uniform velocity and acceleration in the straight line based on the real binocular sequence images by the mentioned method are presented.

  17. The Essential Guide to 3D in Flash

    Olsson, Ronald A

    2010-01-01

    If you are an ActionScript developer or designer and you would like to work with 3D in Flash, this book is for you. You will learn the core Flash 3D concepts, using the open source Away3D engine as a primary tool. Once you have mastered these skills, you will be able to realize the possibilities that the available Flash 3D engines, languages, and technologies have to offer you with Flash and 3D.* Describes 3D concepts in theory and their implementation using Away3D* Dives right in to show readers how to quickly create an interactive, animated 3D scene, and builds on that experience throughout

  18. Morphometrics, 3D Imaging, and Craniofacial Development.

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  19. Software for 3D diagnostic image reconstruction and analysis

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software prepared in our laboratories devoted to 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. The results of the anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at

  20. BM3D Frames and Variational Image Deblurring

    Danielyan, Aram; Egiazarian, Karen

    2011-01-01

    A family of the Block Matching 3-D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of the single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming a valuable potential of BM3D-frames as an advanced image modeling tool.
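
    As a rough illustration of the decoupled idea only (not of the BM3D-frames algorithm itself), the sketch below alternates a Tikhonov-regularized Fourier-domain deblurring step with a plug-in denoising step; a Gaussian filter stands in for the BM3D denoiser, and the parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decoupled_deblur(y, psf, n_iter=30, reg=1e-2, sigma_denoise=1.0):
    """Toy decoupled deblurring: regularized deblur step + plug-in denoiser.

    y   : blurred, noisy image (2D float array)
    psf : blur kernel zero-padded to the shape of y (FFT convention)
    """
    H = np.fft.fft2(psf)
    Y = np.fft.fft2(y)
    x = y.astype(float).copy()
    for _ in range(n_iter):
        # Deblurring step: data term |H X - Y|^2 plus a proximity term
        # reg * |X - FFT(x)|^2, solved in closed form per frequency.
        X = (np.conj(H) * Y + reg * np.fft.fft2(x)) / (np.abs(H) ** 2 + reg)
        z = np.real(np.fft.ifft2(X))
        # Denoising step: BM3D in the paper, Gaussian smoothing here.
        x = gaussian_filter(z, sigma_denoise)
    return x
```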

  1. Image based 3D city modeling : Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches respectively, each with different methods suitable for image-based 3D city modeling. The literature shows that, to date, no comprehensive comparative study is available on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, mainly in terms of data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of these four image-based techniques, and comments on what can and cannot be done with each package. It concludes that every package has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  2. 3D imaging of aortic aneurysma using spiral CT

    The use of 3D reconstructions (3D display technique and maximum intensity projection) in spiral CT for the diagnostic evaluation of aortic aneurysms is explained. Data from 12 aneurysms of the abdominal and thoracic aorta (10 cases of aneurysma verum, 2 cases of aneurysma dissecans) were selected to assess the value of 3D images in comparison to transversal CT displays. The 3D reconstructions from spiral CT, unlike projection angiography, give insight into the vessel from various points of view. Such information is helpful for quickly gathering a picture of the volume and contours of a pathological process in the vessel. 3D post-processing of data is advisable if the comparison of tomograms and projection images produces findings of unclear definition which need clarification prior to surgery. (orig.)

  3. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla based on an adaptive triangulation that manages contours with complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model: automatic correction of the occlusion of the mandible and maxilla, and tooth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of enabling telediagnosis and treatment.

  4. Optical 3D watermark based digital image watermarking for telemedicine

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The algorithm therefore applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed using the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weakness of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
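
    The embedding mechanism itself is not specified in detail in the record above. Purely as an illustration of hiding elemental-image-array data in the non-ROI part of a host image, the sketch below uses simple least-significant-bit substitution, which is a stand-in rather than the scheme used in the paper; all names and the bit-packing convention are assumptions.

```python
import numpy as np

def embed_lsb_outside_roi(host, eia_bits, roi_mask):
    """Embed a bit stream into the least-significant bits of non-ROI pixels.

    host     : 2D uint8 host image
    eia_bits : 1D array of 0/1 bits derived from the elemental image array
    roi_mask : boolean mask, True inside the diagnostically important ROI
    """
    eia_bits = np.asarray(eia_bits, dtype=np.uint8)
    out = host.copy()
    embeddable = np.flatnonzero(~roi_mask.ravel())     # non-ROI pixel indices
    if eia_bits.size > embeddable.size:
        raise ValueError("watermark larger than non-ROI capacity")
    target = embeddable[:eia_bits.size]
    flat = out.ravel()                                 # view into out
    flat[target] = (flat[target] & 0xFE) | eia_bits    # overwrite LSBs
    return out

def extract_lsb_outside_roi(marked, n_bits, roi_mask):
    """Recover the first n_bits embedded by embed_lsb_outside_roi."""
    target = np.flatnonzero(~roi_mask.ravel())[:n_bits]
    return marked.ravel()[target] & 1
```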

  5. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
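
    The best-reference-slice idea above can be illustrated with a small sketch. The version below scores slices by gray-level histogram entropy only; the iterative refinement with registration mean-square error described in the paper is omitted, and the function names are illustrative.

```python
import numpy as np

def slice_entropy(img, bins=256):
    """Shannon entropy of a 2D slice's gray-level histogram (in bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pick_reference_slice(stack):
    """Choose the slice with maximal entropy as the initial reference.

    stack: 3D array (n_slices, H, W) of intensity-standardized slices.
    """
    entropies = [slice_entropy(s) for s in stack]
    return int(np.argmax(entropies))
```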

  6. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure from motion software, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an augmented reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  7. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume;

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...... phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. This results for both techniques in a frame rate of 18 Hz. The implemented synthetic aperture technique...... cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels...

  8. C-arm CT-guided 3D navigation of percutaneous interventions

    So far, C-arm CT images have predominantly been used for precise guidance of endovascular or intra-arterial therapy. A novel combined 3D-navigation C-arm system now also allows cross-sectional and fluoroscopy-controlled interventions. Studies have reported successful CT-image-guided navigation with C-arm systems in vertebroplasty. Insertion of a radiofrequency ablation probe is also conceivable for lung and liver tumors that have been labelled with lipiodol. In the future, C-arm CT-based navigation systems will probably allow simplified and safer complex interventions while simultaneously reducing radiation exposure. (orig.)

  9. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    Due to lack of imaging modalities to identify prostate cancer in vivo, current TRUS guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure, the segmentation of the prostate gland from a patient's TRUS image followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphical processing unit (GPU) to meet the critical processing speed requirements for atlas guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols to maximize the overall detection rate. Using a GPU, patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87% respectively when validated on the same set of data. Whereas the sextant biopsy approach without the utility of 3D cancer atlas detected only 70.5% of the cancers using the same histology data. We estimate 10-20% increase in prostate cancer detection rates

  10. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    Narayanan, R; Suri, J S [Eigen Inc, Grass Valley, CA (United States); Werahera, P N; Barqawi, A; Crawford, E D [University of Colorado, Denver, CO (United States); Shinohara, K [University of California, San Francisco, CA (United States); Simoneau, A R [University of California, Irvine, CA (United States)], E-mail: jas.suri@eigen.com

    2008-10-21

    Due to lack of imaging modalities to identify prostate cancer in vivo, current TRUS guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure, the segmentation of the prostate gland from a patient's TRUS image followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphical processing unit (GPU) to meet the critical processing speed requirements for atlas guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols to maximize the overall detection rate. Using a GPU, patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87% respectively when validated on the same set of data. Whereas the sextant biopsy approach without the utility of 3D cancer atlas detected only 70.5% of the cancers using the same histology data. We estimate 10-20% increase in prostate cancer

  11. Advanced 3-D Ultrasound Imaging.:3-D Synthetic Aperture Imaging and Row-column Addressing of 2-D Transducer Arrays

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinic...

  12. Development of a 3D ultrasound-guided prostate biopsy system

    Cool, Derek; Sherebrin, Shi; Izawa, Jonathan; Fenster, Aaron

    2007-03-01

    Biopsy of the prostate using ultrasound guidance is the clinical gold standard for diagnosis of prostate adenocarcinoma. However, because early-stage tumors are rarely visible under US, the procedure carries high false-negative rates and often patients require multiple biopsies before cancer is detected. To improve cancer detection, it is imperative that throughout the biopsy procedure, physicians know where they are within the prostate and where they have sampled during prior biopsies. The current biopsy procedure is limited to using only 2D ultrasound images to find and record target biopsy core sample sites. This leaves ambiguity as the physician tries to interpret the 2D information and apply it to the 3D workspace. We have developed a 3D ultrasound-guided prostate biopsy system that provides 3D intra-biopsy information to physicians for needle guidance and biopsy location recording. The system is designed to conform to the workflow of the current prostate biopsy procedure, making it easier for clinical integration. In this paper, we describe the system design and validate its accuracy by performing an in vitro biopsy procedure on US/CT multi-modal patient-specific prostate phantoms. A clinical sextant biopsy was performed by a urologist on the phantoms and the 3D models of the prostates were generated with volume errors less than 4% and mean boundary errors of less than 1 mm. Using the 3D biopsy system, needles were guided to within 1.36 ± 0.83 mm of 3D targets and the positions of the biopsy sites were accurately localized to 1.06 ± 0.89 mm for the two prostates.

  13. Recovering 3D human pose from monocular images

    Agarwal, Ankur; Triggs, Bill

    2006-01-01

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We eva...

  14. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate an ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple sources of knowledge, we refine the ROI as the intersection of all of the shapes expected from each single source of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
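
    One way to realize the three-region split described above, when the expert knowledge is available as several candidate ROI masks, is the elementary combination below. It is a simplified reading of the rough-set lower and upper approximations for illustration, not the authors' implementation.

```python
import numpy as np

def rough_regions(masks):
    """Combine several expert/knowledge masks into rough-set style regions.

    masks : list of boolean arrays, each a candidate ROI from one source
            of knowledge.
    Returns (positive, negative, boundary):
      positive - voxels inside every candidate ROI (lower approximation)
      negative - voxels outside all candidate ROIs
      boundary - voxels claimed by some but not all candidates
    """
    stack = np.stack(masks)
    positive = stack.all(axis=0)
    union = stack.any(axis=0)
    negative = ~union
    boundary = union & ~positive
    return positive, negative, boundary
```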

  15. 3D Image Display Courses for Information Media Students.

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  16. A near field 3D radar imaging technique

    Broquetas Ibars, Antoni

    1993-01-01

    The paper presents an algorithm which recovers a 3D reflectivity image of a target from near-field scattering measurements. Spherical wave nearfield illumination is used, in order to avoid a costly compact range installation to produce a plane wave illumination. The system is described and some simulated 3D reconstructions are included. The paper also presents a first experimental validation of this technique.

  17. Investigation of the feasibility for 3D synthetic aperture imaging

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    This paper investigates the feasibility of implementing real-time synthetic aperture 3D imaging on the experimental system developed at the Center for Fast Ultrasound Imaging using a 2D transducer array. The target array is a fully populated 32 × 32 3 MHz array with a half wavelength pitch. The...

  18. Hybrid segmentation framework for 3D medical image analysis

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. Then we create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  19. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: A practical and technical review and guide

    The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statements with evidence-based and pragmatic guidelines on topics of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It provides an overview and the current standing of 3DCT-IGRT systems, addressing the rationale, objectives, principles, applications, and process pathways, both clinical and technical, for treatment delivery and quality assurance. These are reviewed for four categories of solutions: kV CT and kV CBCT (cone-beam CT) as well as MV CT and MV CBCT. It also provides a framework and checklist for considering the capability and functionality of these systems as well as the resources needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these clinical centres demonstrates a wide variability based on local practices. This report, whilst comprehensive, is not exhaustive, as this area of development remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the field, focussed on clinicians, physicists and radiation therapy technologists interested in IGRT.

  20. Impulse Turbine with 3D Guide Vanes for Wave Energy Conversion

    Manabu TAKAO; Toshiaki SETOGUCHI; Kenji KANEKO; Shuichi NAGATA

    2006-01-01

    In this study, in order to achieve further improvement of the performance of an impulse turbine with fixed guide vanes for wave energy conversion, the effect of guide vane shape on the performance was investigated by experiment. The investigation was performed by model testing under steady flow conditions. As a result, it was found that the efficiency of the turbine with 3D guide vanes is slightly superior to that of the turbine with 2D guide vanes because of the increase in torque provided by the 3D guide vanes, though the pressure drop across the turbine for the 3D case is slightly higher than that for the 2D case.

  1. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with appropriate software the data can be processed into digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing the GPS data, calculating the positions of the laser sample points, producing geo-referenced images, producing the DSM and mosaicing strips. The principle of 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.

  2. 3D Tongue Motion from Tagged and Cine MR Images

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z.; Lee, Junghoon; Stone, Maureen; Prince, Jerry L.

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information...

  3. A compact mechatronic system for 3D ultrasound guided prostate interventions

    Purpose: Ultrasound imaging has improved the treatment of prostate cancer by producing increasingly higher quality images and influencing sophisticated targeting procedures for the insertion of radioactive seeds during brachytherapy. However, it is critical that the needles be placed accurately within the prostate to deliver the therapy to the planned location and avoid complications from damaging surrounding tissues. Methods: The authors have developed a compact mechatronic system, as well as an effective method for guiding and controlling the insertion of transperineal needles into the prostate. This system has been designed to allow guidance of a needle obliquely in 3D space into the prostate, thereby reducing pubic arch interference. The choice of needle trajectory and location in the prostate can be adjusted manually or with computer control. Results: To validate the system, a series of experiments were performed on phantoms. The 3D scan of the string phantom produced minimal geometric error, which was less than 0.4 mm. Needle guidance accuracy tests in agar prostate phantoms showed that the mean error of bead placement was less than 1.6 mm along parallel needle paths that were within 1.2 mm of the intended target and 1 deg. from the preplanned trajectory. At oblique angles of up to 15 deg. relative to the probe axis, beads were placed to within 3.0 mm along trajectories that were within 2.0 mm of the target with an angular error less than 2 deg. Conclusions: By combining a 3D TRUS imaging system with a needle tracking linkage, this system should improve the physician's ability to target and accurately guide a needle to selected targets without the need for the computer to directly manipulate and insert the needle. This would be beneficial as the physician has complete control of the system and can safely maneuver the needle guide around obstacles such as previously placed needles.

  4. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, architectural and archaeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step by step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points are detected in each pair of views; here an efficient SIFT method is used for image matching over large baselines. After that, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved as parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained, so multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results we use a more general approach, namely bundle adjustment. At the end, two real cases have been reconstructed (an excavation and a tower).
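
    As a compact illustration of the matching and reconstruction steps above, the sketch below performs a calibrated two-view reconstruction with OpenCV. Note the simplifying assumptions: the intrinsic matrix K is taken as known (the paper works projectively and then upgrades to a metric reconstruction), only two views are handled, and the final bundle adjustment is not shown.

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Sparse 3D points from two overlapping grayscale images (uint8).

    K : 3x3 camera intrinsic matrix, assumed known for this sketch.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test on nearest-neighbour SIFT matches.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])

    # Relative pose from the essential matrix; RANSAC rejects outliers.
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    keep = inliers.ravel().astype(bool)
    p1, p2 = p1[keep], p2[keep]
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)

    # Triangulate the inlier correspondences; a bundle adjustment over all
    # views would normally refine these points and the camera poses.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (X[:3] / X[3]).T                     # N x 3 Euclidean points
```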

  5. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  6. 3D interfractional patient position verification using 2D-3D registration of orthogonal images

    Reproducible positioning of the patient during fractionated external beam radiation therapy is imperative to ensure that the delivered dose distribution matches the planned one. In this paper, we expand on a 2D-3D image registration method to verify a patient's setup in three dimensions (rotations and translations) using orthogonal portal images and megavoltage digitally reconstructed radiographs (MDRRs) derived from CT data. The accuracy of 2D-3D registration was improved by employing additional image preprocessing steps and a parabolic fit to interpolate the parameter space of the cost function utilized for registration. Using a humanoid phantom, precision for registration of three-dimensional translations was found to be better than 0.5 mm (1 s.d.) for any axis when no rotations were present. Three-dimensional rotations about any axis were registered with a precision of better than 0.2 deg. (1 s.d.) when no translations were present. Combined rotations and translations of up to 4 deg. and 15 mm were registered with 0.4 deg. and 0.7 mm accuracy for each axis. The influence of setup translations on registration of rotations and vice versa was also investigated and mostly agrees with a simple geometric model. Additionally, the dependence of registration accuracy on three cost functions, angular spacing between MDRRs, pixel size, and field-of-view, was examined. Best results were achieved by mutual information using 0.5 deg. angular spacing and a 10x10 cm2 field-of-view with 140x140 pixels. Approximating patient motion as rigid transformation, the registration method is applied to two treatment plans and the patients' setup errors are determined. Their magnitude was found to be ≤6.1 mm and ≤2.7 deg. for any axis in all of the six fractions measured for each treatment plan
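
    Mutual information, the best-performing cost function reported above, can be computed from the joint gray-level histogram of the portal image and a candidate MDRR. The sketch below is a generic implementation for illustration; the bin count and the use of plain (rather than normalized) mutual information are assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized images (in bits).

    Intended use here: compare a portal image with a candidate megavoltage
    DRR while searching over setup rotations and translations.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz]))
```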

  7. Automated curved planar reformation of 3D spine images

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
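
    The core resampling idea behind such a curved planar reformation can be illustrated as follows: a polynomial curve models the course of the spine through the slices, and the volume is re-sampled in a sheet that follows that curve. The vertebral rotation model and the optimization of the polynomial parameters described above are omitted, and the axis conventions are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, coeffs_x, coeffs_y, width=64):
    """Very simplified CPR: resample a volume along a polynomial spine curve.

    volume            : 3D array indexed as (z, y, x)
    coeffs_x, coeffs_y: polynomial coefficients giving the curve position
                        x(z), y(z) (np.polyval convention)
    width             : number of samples taken left/right of the curve in x
    """
    nz = volume.shape[0]
    z = np.arange(nz, dtype=float)
    cx = np.polyval(coeffs_x, z)               # curve x-position per slice
    cy = np.polyval(coeffs_y, z)               # curve y-position per slice
    offsets = np.arange(width, dtype=float) - width / 2.0

    zz = np.repeat(z, width)
    yy = np.repeat(cy, width)
    xx = (cx[:, None] + offsets[None, :]).ravel()
    samples = map_coordinates(volume, [zz, yy, xx], order=1, mode='nearest')
    return samples.reshape(nz, width)          # curved reformatted image
```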

  8. DICOM for quantitative imaging research in 3D Slicer

    Fedorov, Andrey; Kikinis, Ron

    2014-01-01

    These are the slides presented by Andrey Fedorov at the 3D Slicer workshop and meeting of the Quantitative Image Informatics for Cancer Research (QIICR) project that took place November 18-19, 2014, at the University of Iowa.

  9. Practical pseudo-3D registration for large tomographic images

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
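
    The per-view building block of the pseudo-3D scheme, a 2D rigid registration driven by the sum of squared differences and Powell's method, might look like the sketch below (an illustrative stand-in, not the vendor implementation; the image resampling choices are assumptions).

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def register_2d_rigid(fixed, moving):
    """2D rigid registration (dx, dy, angle) minimizing SSD with Powell.

    In the pseudo-3D scheme this would be applied to the transaxial,
    sagittal and coronal views (or their MIPs) in turn and iterated.
    """
    def ssd(params):
        dx, dy, angle = params
        warped = rotate(moving, angle, reshape=False, order=1)
        warped = shift(warped, (dy, dx), order=1)
        return float(np.sum((fixed - warped) ** 2))

    res = minimize(ssd, x0=np.zeros(3), method='Powell')
    return res.x                                # dx, dy (pixels), angle (deg)
```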

  10. 3D wavefront image formation for NIITEK GPR

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  11. Extracting 3D Layout From a Single Image Using Global Image Structures

    Z. Lou; T. Gevers; N. Hu

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  12. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  13. Efficient reconfigurable architectures for 3D medical image compression

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) have generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  14. Irrlicht 1.7 Realtime 3D Engine Beginner's Guide

    Stein, Johannes

    2011-01-01

    A beginner's guide with plenty of screenshots and explained code. If you have C++ skills and are interested in learning Irrlicht, this book is for you. Absolutely no knowledge of Irrlicht is necessary for you to follow this book!

  15. An automated 3D reconstruction method of UAV images

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
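
    The image topology idea above, using flight-control data to limit which image pairs are matched, can be sketched as a simple distance test on per-image positions; the input format and the threshold are assumptions made for illustration.

```python
import numpy as np
from itertools import combinations

def candidate_pairs(positions, max_dist):
    """Restrict feature matching to image pairs acquired close together.

    positions : (N, 2) or (N, 3) array of per-image UAV positions taken
                from the flight-control log (assumed input format)
    max_dist  : maximum separation (same units as positions) for a pair
                to be considered potentially overlapping
    """
    positions = np.asarray(positions, dtype=float)
    pairs = []
    for i, j in combinations(range(len(positions)), 2):
        if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
            pairs.append((i, j))
    return pairs
```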

  16. 1024 pixels single photon imaging array for 3D ranging

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase delay measurement of a sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s over a distance range between 10 cm and 7.5 m.
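
    The phase-delay principle mentioned above is commonly implemented with four phase-stepped correlation samples per pixel. The sketch below shows that standard estimator and the resulting range formula; it is a generic iTOF illustration, not necessarily the exact scheme of the camera described.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod):
    """Indirect time-of-flight range from four phase-stepped samples.

    a0..a3 : correlation samples taken at 0, 90, 180 and 270 degrees of
             the sinusoidal modulation (per pixel, arrays or scalars)
    f_mod  : modulation frequency in Hz
    The range is unambiguous only up to C / (2 * f_mod).
    """
    phase = np.arctan2(a3 - a1, a0 - a2)       # wrapped phase, [-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)         # map to [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)   # distance in metres
```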

  17. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    In the process of display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a complete framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we also give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we end with a recommendation for 3D medical image interpolation under different situations.
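
    For reference, the parametric cubic convolution kernel (Keys form) with sharpness parameter a, and a 1D interpolation using it, are sketched below; the 3D case applies the same kernel separably along each axis. The code is illustrative and not taken from the paper.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Parametric cubic convolution kernel (Keys form).

    a is the sharpness control parameter; a = -0.5 is the common choice,
    and varying it gives a family of interpolants.
    """
    s = np.abs(np.atleast_1d(s).astype(float))
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * s[m2] ** 3 - 5 * a * s[m2] ** 2 + 8 * a * s[m2] - 4 * a
    return out

def interpolate_1d(samples, x, a=-0.5):
    """Cubic-convolution interpolation of uniformly spaced samples at x.

    samples : 1D numpy array of sample values on an integer grid
    x       : positions (array-like) at which to interpolate
    """
    x = np.atleast_1d(x).astype(float)
    idx = np.floor(x).astype(int)
    result = np.zeros_like(x)
    for k in range(-1, 3):                      # four neighbouring samples
        j = np.clip(idx + k, 0, len(samples) - 1)
        result += samples[j] * cubic_kernel(x - (idx + k), a)
    return result
```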

  18. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  19. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  20. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems at millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
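
    The statement that the IF (beat) frequency yields the range at each pixel follows from the standard linear-chirp relation sketched below; the numerical values in the example are arbitrary and not taken from the system described.

```python
C = 299_792_458.0          # speed of light, m/s

def chirp_range(f_if, bandwidth, sweep_time):
    """Target range from the measured intermediate (beat) frequency.

    For a linear chirp of the given bandwidth (Hz) swept over sweep_time
    (s), the round-trip delay maps to the beat frequency as
        f_if = (bandwidth / sweep_time) * 2 * R / C
    so R = f_if * sweep_time * C / (2 * bandwidth).
    """
    return f_if * sweep_time * C / (2.0 * bandwidth)

# e.g. a 10 GHz sweep over 1 ms and a 667 kHz beat give roughly 10 m range.
print(chirp_range(667e3, 10e9, 1e-3))          # ~10.0 m
```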

  1. 3D Image Reconstruction from Compton camera data

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on $\mathbb{R}^3$ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  2. Subduction zone guided waves: 3D modelling and attenuation effects

    Garth, T.; Rietbrock, A.

    2013-12-01

    Waveform modelling is an important tool for understanding complex seismic structures such as subduction zone waveguides. These structures are often simplified to 2D structures for modelling purposes to reduce computational costs. In the case of subduction zone waveguide effects, 2D models have shown that dispersed arrivals are caused by a low velocity waveguide, inferred to be subducted oceanic crust and/or hydrated outer rise normal faults. However, due to the 2D modelling limitations the inferred seismic properties such as velocity contrast and waveguide thickness are still debated. Here we test these limitations with full 3D waveform modelling. For waveguide effects to be observable the waveform must be accurately modelled to relatively high frequencies (> 2 Hz). This requires a small grid spacing due to the high seismic velocities present in subduction zones. A large area must be modelled as well due to the long propagation distances (400 - 600 km) of waves interacting with subduction zone waveguides. The combination of the large model area and small grid spacing required means that these simulations require a large amount of computational resources, only available at high performance computational centres like the UK national supercomputer HECToR (used in this study). To minimize the cost of modelling for such a large area, the width of the model area perpendicular to the subduction trench (the y-direction) is made as small as possible. This reduces the overall volume of the 3D model domain. Therefore the wave field is simulated in a model 'corridor' of the subduction zone velocity structure. This introduces new potential sources of error, particularly from grazing wave side reflections in the y-direction. Various dampening methods are explored to reduce these grazing side reflections, including perfectly matched layers (PML) and more traditional exponential dampening layers. Defining a corridor model allows waveguide effects to be modelled up to at least 2

  3. 3D transrectal ultrasound prostate biopsy using a mechanical imaging and needle-guidance system

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Gil, Elena; Bluvol, Jeremy; Knight, Kerry; Smith, David; Romagnoli, Cesare; Fenster, Aaron

    2008-03-01

    Prostate biopsy procedures are generally limited to 2D transrectal ultrasound (TRUS) imaging for biopsy needle guidance. This limitation results in needle position ambiguity and an insufficient record of biopsy core locations in cases of prostate re-biopsy. We have developed a multi-jointed mechanical device that supports a commercially available TRUS probe with an integrated needle guide for precision prostate biopsy. The device is fixed at the base, allowing the joints to be manually manipulated while fully supporting its weight throughout its full range of motion. Means are provided to track the needle trajectory and display this trajectory on a corresponding TRUS image. This allows the physician to aim the needle-guide at predefined targets within the prostate, providing true 3D navigation. The tracker has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to generate 3D images. The tracker reduces the variability associated with conventional hand-held probes, while preserving user familiarity and procedural workflow. In a prostate phantom, biopsy needles were guided to within 2 mm of their targets, and the 3D location of the biopsy core was accurate to within 3 mm. The 3D navigation system is validated in the presence of prostate motion in a preliminary patient study.

  4. Combining different modalities for 3D imaging of biological objects

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper

  5. 3D CT Imaging Method for Measuring Temporal Bone Aeration

    Objective: 3D volume reconstruction of CT images can be used to measure temporal bone aeration. This study evaluates the technique with respect to reproducibility and acquisition parameters. Material and methods: Helical CT images acquired from patients with radiographically normal temporal bones using standard clinical protocols were retrospectively analyzed. 3D image reconstruction was performed to measure the volume of air within the temporal bone. The appropriate threshold values for air were determined from reconstruction of a phantom with a known air volume imaged using the same clinical protocols. The appropriate air threshold values were applied to the clinical material. Results: Air volume was measured according to the acquisition algorithm. The average volume in the temporal bone CT group was 5.56 ml, compared to 5.19 ml in the head CT group (p = 0.59). The correlation coefficient between examiners was > 0.92. There was a wide range of aeration volumes among individual ears (0.76-18.84 ml); however, paired temporal bones differed by an average of just 1.11 ml. Conclusions: The method of volume measurement from 3D reconstruction reported here is widely available, easy to perform and produces consistent results among examiners. Application of the technique to archival CT data is possible using corrections for air segmentation thresholds according to acquisition parameters
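
    The volume measurement described above essentially reduces to counting voxels below an air threshold and multiplying by the voxel volume, as sketched below; the threshold value shown is an assumed placeholder, since the paper calibrates it per acquisition protocol using a phantom of known air volume.

```python
import numpy as np

def air_volume_ml(ct_volume, voxel_size_mm, air_threshold_hu=-400):
    """Aerated volume from a CT dataset by thresholding Hounsfield units.

    ct_volume        : 3D array of HU values (e.g. one temporal bone ROI)
    voxel_size_mm    : (dz, dy, dx) voxel dimensions in millimetres
    air_threshold_hu : voxels at or below this HU count as air; assumed
                       value, to be calibrated per acquisition protocol
    """
    n_air = int(np.count_nonzero(ct_volume <= air_threshold_hu))
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return n_air * voxel_mm3 / 1000.0          # mm^3 -> ml
```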

  6. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  7. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which allows representation of just the surface of 3D objects. By using afterimage, internal structure can be displayed through exchanging the intersection images with internal structure in the proposed method. Through experiments with CT scan images, the proposed met...

  8. 3D Imaging of a Cavity Vacuum under Dissipation

    Lee, Moonjoo; Seo, Wontaek; Hong, Hyun-Gue; Song, Younghoon; Dasari, Ramachandra R; An, Kyungwon

    2013-01-01

    P. A. M. Dirac first introduced zero-point electromagnetic fields in order to explain the origin of atomic spontaneous emission. Since then, it has long been debated how the zero-point vacuum field is affected by dissipation. Here we report 3D imaging of vacuum fluctuations in a high-Q cavity and rms amplitude measurements of the vacuum field. The 3D imaging was done by the position-dependent emission of single atoms, resulting in a dissipation-free rms amplitude of 0.97 ± 0.03 V/cm. The actual rms amplitude of the vacuum field at the antinode was independently determined from the onset of single-atom lasing at 0.86 ± 0.08 V/cm. Within our experimental accuracy and precision, the difference was noticeable but not significant enough to disprove zero-point energy conservation.

  9. 3D printed guides for controlled alignment in biomechanics tests.

    Verstraete, Matthias A; Willemot, Laurent; Van Onsem, Stefaan; Stevens, Cyriëlle; Arnout, Nele; Victor, Jan

    2016-02-01

    The bone-machine interface is a vital first step for biomechanical testing. It remains challenging to restore the original alignment of the specimen with respect to the test setup. To overcome this issue, we developed a methodology based on virtual planning and 3D printing. In this paper, the methodology is outlined and a proof of concept is presented based on a series of cadaveric tests performed on our knee simulator. The tests described in this paper reached an accuracy within 3-4° and 3-4 mm with respect to the virtual planning. It is, however, the authors' belief that the method has the potential to achieve an accuracy within one degree and one millimeter. Therefore, this approach can aid in reducing the imprecisions in biomechanical tests (e.g. knee simulator tests for evaluating knee kinematics) and improve the consistency of the bone-machine interface. PMID:26810696

  10. Automated Recognition of 3D Features in GPIR Images

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  11. 3D imaging of semiconductor components by discrete laminography

    Batenburg, Joost; Palenstijn, W.J.; Sijbers, J.

    2014-01-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the ...

  12. Improvements in quality and quantification of 3D PET images

    Rapisarda,

    2012-01-01

    The spatial resolution of Positron Emission Tomography is conditioned by several physical factors, which can be taken into account by using a global Point Spread Function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting some experimental...

  13. Super pipe lining system for 3-D CT imaging

    A new idea for a 3-D CT image reconstruction system is introduced. Because network performance has improved greatly in recent years, network computing can replace traditional serial processing. The CT reconstruction workload is organized in a multi-level fashion so that the tedious computations are handled simultaneously by many computers linked over a local network, greatly improving reconstruction speed.

  14. 3D VSP imaging in the Deepwater GOM

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic section. To help address these challenges BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are to reduce risk in well placement, improved reserve calculation and understanding compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  15. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    Objective: To improve the accuracy of epilepsy diagnoses by integrating MR images from PACS with data from neurological electrophysiology. The integration is also very important for transmitting diagnostic information to the 3D TPS of radiotherapy. Methods: The electroencephalogram was redisplayed by an EEG workstation, while the MR images were reconstructed with the Brainvoyager software. A 3D model of the patient's brain was built up by combining the reconstructed images with the electroencephalogram data in Base 2000. The diagnoses of 30 epileptic patients (18 males and 12 females), aged 12 to 54 years, were confirmed using the integrated MR images, the neurological electrophysiology data and 3D stereolocating. Results: The corresponding data in the 3D model could show the real situation of the patient's brain and visually locate the precise position of the focus. The success rate of 3D-guided operations was greatly improved, and the number of epileptic onsets was markedly decreased. Epilepsy was stopped for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of operations and for epilepsy analysis. (authors)

  16. Discrete Method of Images for 3D Radio Propagation Modeling

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  17. 3D reconstruction of multiple stained histology images

    Yi Song

    2013-01-01

    Full Text Available Context: Three dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  18. 3D tongue motion from tagged and cine MR images.

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  19. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing initial geometry and/or image error, improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration
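
    To make the simulation setup concrete, the sketch below (not the authors' code) draws a 'cloud' of 3D points from a zero-mean Gaussian whose covariance matrix encodes the desired configuration shape, projects it with a simple pinhole model, and perturbs the image coordinates with Gaussian error; the covariance values, focal length, depth offset and noise level are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_configuration(n_points, cov):
    """Zero-mean Gaussian 3D point cloud; the covariance sets the cloud's shape."""
    return rng.multivariate_normal(np.zeros(3), cov, size=n_points)

def noisy_projection(points_3d, focal_length, depth_offset, image_sigma):
    """Pinhole projection of the cloud plus Gaussian error on the image coordinates."""
    z = points_3d[:, 2] + depth_offset            # place the cloud in front of the camera
    uv = focal_length * points_3d[:, :2] / z[:, None]
    return uv + rng.normal(0.0, image_sigma, size=uv.shape)

# "pancake" configuration: wide in x/y, flattened in z (placeholder values)
cov_pancake = np.diag([25.0, 25.0, 1.0])
pts = random_configuration(100, cov_pancake)
uv = noisy_projection(pts, focal_length=1000.0, depth_offset=800.0, image_sigma=0.5)
print(uv.shape)    # (100, 2) noisy image points for one view
```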

  20. Preliminary Investigation: 2D-3D Registration of MR and X-ray Cardiac Images Using Catheter Constraints

    Truong, Michael V.N.; Aslam, Abdullah; Rinaldi, Christopher Aldo; Razavi, Reza; Penney, Graeme P.; Rhode, Kawal

    2009-01-01

    Cardiac catheterization procedures are routinely guided by X-ray fluoroscopy but suffer from poor soft-tissue contrast and a lack of depth information. These procedures often employ pre-operative magnetic resonance or computed tomography imaging for treatment planning due to their excellent soft-tissue contrast and 3D imaging capabilities. We developed a 2D-3D image registration method to consolidate the advantages of both modalities by overlaying the 3D images onto the X-ray. Our method uses...

  1. Automatic structural matching of 3D image data

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  2. Thoracic Pedicle Screw Placement Guide Plate Produced by Three-Dimensional (3-D) Laser Printing.

    Chen, Hongliang; Guo, Kaijing; Yang, Huilin; Wu, Dongying; Yuan, Feng

    2016-01-01

    BACKGROUND The aim of this study was to evaluate the accuracy and feasibility of an individualized thoracic pedicle screw placement guide plate produced by 3-D laser printing. MATERIAL AND METHODS Thoracic pedicle samples of 3 adult cadavers were randomly assigned for 3-D CT scans. The 3-D thoracic models were established by using medical Mimics software, and a screw path was designed with scanned data. Then the individualized thoracic pedicle screw placement guide plate models, matched to the backside of thoracic vertebral plates, were produced with a 3-D laser printer. Screws were placed with assistance of a guide plate. Then, the placement was assessed. RESULTS With the data provided by CT scans, 27 individualized guide plates were produced by 3-D printing. There were no significant differences by sex or in the relevant parameters between left and right sides among individuals (P>0.05). Screws were placed with assistance of guide plates, and all screws were in the correct positions without penetration of pedicles, under direct observation and anatomic evaluation post-operatively. CONCLUSIONS A thoracic pedicle screw placement guide plate can be produced by 3-D printing. With a high accuracy in placement and convenient operation, it provides a new method for accurate placement of thoracic pedicle screws. PMID:27194139

  3. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  4. Towards magnetic 3D x-ray imaging

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve speed, size and energy efficiency of spin driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of large interest to understand e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals, nanowires or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy combining X-MCD as element specific magnetic contrast mechanism, high spatial and temporal resolution due to the Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley CA) a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films and compare to other 3D imaging approaches e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).

  5. Large Scale 3D Image Reconstruction in Optical Interferometry

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
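
    The core of the phase similarity measure is a correlation of unit phasors: summing the cosine of the difference between the image gradient phase and the model edge-normal phase is the real part of a complex cross-correlation, which the FFT evaluates over all positions at once. A translation-only sketch of that idea (the full method also searches over model orientation and projects a 3D solid model; the inputs here are assumed arrays):

```python
import numpy as np

def phase_match_surface(image, template_phase, template_mask):
    """Score a model edge template at every translation of the image.

    image          : 2D float array
    template_phase : 2D array of expected edge-normal angles (radians)
    template_mask  : 2D boolean array, True on projected model edge pixels

    Returns a match surface whose value at each shift is the sum, over edge
    pixels, of cos(image gradient phase - model normal phase).
    """
    gy, gx = np.gradient(image)
    img_phasor = np.exp(1j * np.arctan2(gy, gx))        # unit phasor of gradient angle

    tmpl = np.zeros(image.shape, dtype=complex)
    h, w = template_phase.shape
    tmpl[:h, :w] = np.exp(1j * template_phase) * template_mask

    # circular cross-correlation of phasors via the FFT; real part = sum of cosines
    corr = np.fft.ifft2(np.fft.fft2(img_phasor) * np.conj(np.fft.fft2(tmpl)))
    return corr.real

# toy usage: random image and a small 16x16 template of horizontal edge normals
img = np.random.default_rng(1).random((128, 128))
score = phase_match_surface(img, np.zeros((16, 16)), np.ones((16, 16), dtype=bool))
print(score.shape, score.argmax())
```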

  7. Autonomous Planetary 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    A common task for many deep space missions is autonomous generation of 3-D representations of planetary surfaces onboard unmanned spacecraft. The basic problem for this class of missions is that the closed loop time is far too long. The closed loop time is defined as the time from when a human...... of seconds to a few minutes, the closed loop time effectively precludes active human control. The only way to circumvent this problem is to build an artificial feature extractor operating autonomously onboard the spacecraft. Different artificial feature extractors are presented and their efficiency...... is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  8. Fully automatic plaque segmentation in 3-D carotid ultrasound images.

    Cheng, Jieyu; Li, He; Xiao, Feng; Fenster, Aaron; Zhang, Xuming; He, Xiaoling; Li, Ling; Ding, Mingyue

    2013-12-01

    Automatic segmentation of the carotid plaques from ultrasound images has been shown to be an important task for monitoring progression and regression of carotid atherosclerosis. Considering the complex structure and heterogeneity of plaques, a fully automatic segmentation method based on media-adventitia and lumen-intima boundary priors is proposed. This method combines image intensity with structure information in both initialization and a level-set evolution process. Algorithm accuracy was examined on the common carotid artery part of 26 3-D carotid ultrasound images (34 plaques ranging in volume from 2.5 to 456 mm(3)) by comparing the results of our algorithm with manual segmentations of two experts. Evaluation results indicated that the algorithm yielded total plaque volume (TPV) differences of -5.3 ± 12.7 and -8.5 ± 13.8 mm(3) and absolute TPV differences of 9.9 ± 9.5 and 11.8 ± 11.1 mm(3). Moreover, high correlation coefficients in generating TPV (0.993 and 0.992) between algorithm results and both sets of manual results were obtained. The automatic method provides a reliable way to segment carotid plaque in 3-D ultrasound images and can be used in clinical practice to estimate plaque measurements for management of carotid atherosclerosis. PMID:24063959

  9. 3D-imaging using micro-PIXE

    Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

    2007-02-01

    We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and an X-ray CCD camera of 1 mega pixels (Hamamatsu photonics C8800X), and has a spatial resolution of 4 μm by using characteristic Ti-K-X-rays (4.558 keV) produced by 3 MeV protons of beam spot size of ˜1 μm. We applied this system, namely, a micron-CT to observe the inside of a living small ant's head of ˜1 mm diameter. An ant was inserted into a small polyimide tube the inside diameter and the wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three dimensional images of the ant's heads were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence on atomic number of photo ionization cross-sections, the mandibular gland of ant contains heavier elements, and moreover, the CT-image of living ant anaesthetized by chloroform is quite different from that of a dead ant dipped in formalin.

  10. Lymph node imaging by ultrarapid 3D angiography

    Purpose: A report on observations of lymph node images obtained by gadolinium-enhanced 3D MR angiography (MRA). Methods: Ultrarapid MRA (TR, TE, FA - 5 or 6.4 ms, 1.9 or 2.8 ms, 30-40 degrees) with 0.2 mmol/kg BW Gd-DTPA and 20 ml physiological saline. Start after completion of injection. Single series of the pelvis-thigh as well as head-neck regions by use of a phased array coil with a 1.5 T Magnetom Vision or a 1.0 T Magnetom Harmony (Siemens, Erlangen). We report on lymph node imaging in 4 patients, 2 of whom exhibited benign changes and 2 further metastases. In 1 patient with extensive lymph node metastases of a malignant melanoma, color-Doppler sonography as color-flow angiography (CFA) was used as a comparative method. Results: Lymph node imaging by contrast medium-enhanced ultrarapid 3D MRA apparently resulted from their vessels. Thus, arterially-supplied metastases and inflammatory enlarged lymph nodes were well visualized while those with a.v. shunts or poor vascular supply in tumor necroses were poorly imaged. Conclusions: Further investigations are required with regard to the visualization of lymph nodes in other parts of the body as well as a possible differentiation between benign and malignant lesions. (orig.)

  11. Ice shelf melt rates and 3D imaging

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate bandwidth, VHF, ice penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra wide bandwidth (UWB), UHF, ice penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground based multi channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  12. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
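
    The mean-subtraction step itself is simple: remove the per-plane mean from each spatial plane of a spatially-low-pass subband and carry the means as side information for the decoder. A hedged NumPy sketch of just that step (the wavelet decomposition and sign-magnitude coding described above are omitted, and the array layout is an assumption):

```python
import numpy as np

def mean_subtract_subband(subband):
    """Remove the per-plane mean from a spatially-low-pass subband.

    subband : array indexed (spectral_plane, y, x).  Each spatial plane is made
              zero-mean; the means are returned as side information so the
              decoder can invert the step after decompression.
    """
    plane_means = subband.mean(axis=(1, 2), keepdims=True)
    return subband - plane_means, plane_means.squeeze()

# round trip on random data standing in for a spatially-low-pass subband
cube = np.random.default_rng(2).normal(loc=120.0, scale=5.0, size=(16, 32, 32))
zero_mean, means = mean_subtract_subband(cube)
print(np.allclose(zero_mean + means[:, None, None], cube))   # True
```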

  13. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  14. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges arise when using data from multiple frequencies for imaging of biological targets. In this paper, the performance of a multi-frequency algorithm, in which measurement data from several different frequencies are used at once, is compared with a stepped-frequency algorithm, in which images reconstructed at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.
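
    The stepped-frequency strategy described above reduces to a warm-started loop over frequencies. A minimal sketch under stated assumptions (the `reconstruct` callable stands in for one fixed-frequency nonlinear inversion and is hypothetical, as are the frequencies and the toy stand-in in the usage):

```python
import numpy as np

def stepped_frequency_reconstruction(data_by_freq, reconstruct, initial_guess):
    """Run fixed-frequency reconstructions from low to high frequency,
    seeding each inversion with the image from the previous (lower) frequency.

    data_by_freq  : dict {frequency_Hz: measurement array}
    reconstruct   : callable(measurements, frequency, starting_image) -> image
                    (hypothetical stand-in for one nonlinear inversion)
    initial_guess : homogeneous background image used at the lowest frequency
    """
    image = initial_guess
    for freq in sorted(data_by_freq):
        image = reconstruct(data_by_freq[freq], freq, image)
    return image

# toy stand-in: each "inversion" nudges the image toward the data mean
dummy = lambda meas, freq, start: 0.5 * (start + meas.mean())
img = stepped_frequency_reconstruction(
    {0.5e9: np.ones(8), 1.0e9: 2.0 * np.ones(8)}, dummy, initial_guess=0.0)
print(img)   # 1.25
```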

  15. Development of 3D microwave imaging reflectometry in LHD (invited).

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  16. Effective classification of 3D image data using partitioning methods

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
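
    A simplified sketch of the first (recursive dynamic partitioning) method: ROI voxel counts inside a hyper-rectangle are tested for class separation; discriminative boxes are kept as attributes, while non-discriminative but still populous boxes are split into octants. The two-sample t-test, thresholds and data layout below are illustrative assumptions, not the paper's exact statistical procedure:

```python
import numpy as np
from scipy.stats import ttest_ind

def recursive_partition(volumes, labels, box, alpha=0.01, min_count=50, attrs=None):
    """Keep a hyper-rectangle as an attribute if the ROI voxel counts inside it
    separate the two classes; otherwise split it into octants if it is still
    populous enough.  A simplified sketch; thresholds are placeholders.

    volumes : (n_subjects, X, Y, Z) binary ROI masks
    labels  : (n_subjects,) 0/1 class labels
    box     : (x0, x1, y0, y1, z0, z1) bounds of the current hyper-rectangle
    """
    if attrs is None:
        attrs = []
    x0, x1, y0, y1, z0, z1 = box
    counts = volumes[:, x0:x1, y0:y1, z0:z1].sum(axis=(1, 2, 3))
    _, p = ttest_ind(counts[labels == 0], counts[labels == 1])
    if p < alpha:                                            # discriminative: keep
        attrs.append((box, counts))
    elif counts.mean() > min_count and min(x1 - x0, y1 - y0, z1 - z0) > 1:
        xm, ym, zm = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
        for bx in ((x0, xm), (xm, x1)):
            for by in ((y0, ym), (ym, y1)):
                for bz in ((z0, zm), (zm, z1)):
                    recursive_partition(volumes, labels, (*bx, *by, *bz),
                                        alpha, min_count, attrs)
    return attrs

# toy data: class-1 subjects have extra ROI voxels in one octant
rng = np.random.default_rng(3)
vols = (rng.random((20, 16, 16, 16)) < 0.05).astype(int)
labs = np.repeat([0, 1], 10)
vols[labs == 1, :8, :8, :8] = 1
print(len(recursive_partition(vols, labs, (0, 16, 0, 16, 0, 16))))
```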

  17. Ultra-realistic 3-D imaging based on colour holography

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue, mainly Denisyuk colour holograms, and digitally-printed colour holograms are described and their recent improvements. An alternative to silver-halide materials are the panchromatic photopolymer materials such as the DuPont and Bayer photopolymers which are covered. The light sources used to illuminate the recorded holograms are very important to obtain ultra-realistic 3-D images. In particular the new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world are included. To record and display ultra-realistic 3-D images with perfect colour rendering are highly dependent on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials and combined with new display light sources.

  18. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  19. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout since it implies the way how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image-level. Using latent variables, we implicitly model the sublevel semantics of the image, which enrich the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  20. 3D imaging of neutron tracks using confocal microscopy

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  1. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% ± 0.6% (range 96%-98%) for scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
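
    For reference, the benchmarking metric quoted above is a gamma passing rate. The brute-force sketch below computes a global 3%/3 mm gamma pass rate with a 5% low-dose threshold on a shared isotropic grid; it is an illustrative simplification (wrap-around at the edges from np.roll, no sub-voxel interpolation), not the software used in the thesis:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.03, dta_mm=3.0, low_cut=0.05):
    """Global gamma passing rate (dose_tol of max dose, dta_mm distance-to-agreement).

    ref, evl   : reference and evaluated dose arrays on the same isotropic grid
    spacing_mm : voxel size in millimetres
    low_cut    : fraction of the maximum dose below which voxels are ignored
    """
    dmax = ref.max()
    r = int(np.ceil(dta_mm / spacing_mm))
    gamma2 = np.full(ref.shape, np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                dist2 = (i * i + j * j + k * k) * spacing_mm ** 2
                if dist2 > dta_mm ** 2:
                    continue
                shifted = np.roll(ref, (i, j, k), axis=(0, 1, 2))  # wraps at edges
                dd2 = ((evl - shifted) / (dose_tol * dmax)) ** 2
                gamma2 = np.minimum(gamma2, dd2 + dist2 / dta_mm ** 2)
    mask = ref > low_cut * dmax
    return 100.0 * np.mean(gamma2[mask] <= 1.0)

# identical distributions should pass everywhere
d = np.random.default_rng(4).random((20, 20, 20)) + 0.1
print(gamma_pass_rate(d, d, spacing_mm=1.0))   # 100.0
```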

  2. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  3. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias, E-mail: theobold.fuchs@iis.fraunhofer.de; Sukowski, Frank [Fraunhofer Development Center X-ray Technology EZRT, Flugplatzstr. 75, 90768 Fürth (Germany); Dittmann, Jonas; Hanke, Randolf [Chair of X-ray Microscopy, Institute of Physics and Astronomy, Julius-Maximilian-University Würzburg, Josef-Martin-Weg 63, 97074 Würzburg (Germany)

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today’s 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  4. 3D Reconstruction of virtual colon structures from colonoscopy images.

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  5. 3D electrical tomographic imaging using vertical arrays of electrodes

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  6. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can reduce considerably the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture provide effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.

  7. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. This proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray-tracing theory. Next, the 2-D EIA is encrypted by LC-...

  8. Fast 3-d tomographic microwave imaging for breast cancer detection.

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  9. Fast 3D subsurface imaging with stepped-frequency GPR

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
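
    The linearized, sparsity-regularized imaging step described above can be illustrated with the generic iterative shrinkage-thresholding (ISTA) recipe below; a small dense matrix stands in for the paper's NUFFT-based forward operator, and all sizes and parameters are assumptions for illustration only.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A is a generic dense forward operator standing in for the paper's
    NUFFT-based operator; this is an illustrative sketch, not the authors' code.
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the data-fidelity term
        z = x - grad / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((128, 512))     # underdetermined toy system
    x_true = np.zeros(512)
    x_true[rng.choice(512, 8, replace=False)] = 1.0   # sparse scene
    y = A @ x_true + 0.01 * rng.standard_normal(128)
    x_hat = ista(A, y, lam=0.05)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 0.1))
```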

  10. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which can represent only the surface of 3D objects. In the proposed method, the internal structure can be displayed by exploiting the afterimage effect while switching among intersection images that contain internal structure. The proposed method is validated through experiments with CT scan images. Another applicable area of the proposed method, the design of 3D patterns for Large Scale Integrated circuits (LSI), is also introduced. Layered LSI patterns can be displayed and switched using human eyes only. It is confirmed that displaying a layer pattern and switching to another layer using human eyes only is much faster than doing so with hands and fingers.

  11. Mechanically assisted 3D ultrasound for pre-operative assessment and guiding percutaneous treatment of focal liver tumors

    Sadeghi Neshat, Hamid; Bax, Jeffery; Barker, Kevin; Gardi, Lori; Chedalavada, Jason; Kakani, Nirmal; Fenster, Aaron

    2014-03-01

    Image-guided percutaneous ablation is the standard treatment for focal liver tumors deemed inoperable and is commonly used to maintain eligibility for patients on transplant waitlists. Radiofrequency (RFA), microwave (MWA) and cryoablation technologies are all delivered via one or a number of needle-shaped probes inserted directly into the tumor. Planning is mostly based on contrast CT/MRI. While intra-procedural CT is commonly used to confirm the intended probe placement, 2D ultrasound (US) remains the main, and in some centers the only, imaging modality used for needle guidance. Correlating intraoperative 2D US with planning images and other intra-procedural imaging modalities is essential for accurate needle placement. However, identification of matching features of interest among these images is often challenging given the limited field-of-view (FOV) and low quality of 2D US images. We have developed a passive tracking arm with a motorized scan-head and software tools to improve the guiding capabilities of conventional US through large-FOV 3D US scans that provide more anatomical landmarks to facilitate registration of US with both planning and intra-procedural images. The tracker arm is used to scan the whole liver with a high geometrical accuracy that facilitates multi-modality landmark based image registration. Software tools are provided to assist with the segmentation of the ablation probes and tumors, find the 2D view that best shows the probe(s) from a 3D US image, and identify the corresponding image from planning CT scans. In this paper, evaluation results from laboratory testing and a phase 1 clinical trial for planning and guiding RFA and MWA procedures using the developed system will be presented. Early clinical results show performance comparable to intra-procedural CT, suggesting 3D US as a cost-effective alternative with no side effects in centers where CT is not available.

  12. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  13. 3D image fusion and guidance for computer-assisted bronchoscopy

    Higgins, W. E.; Rai, L.; Merritt, S. A.; Lu, K.; Linger, N. T.; Yu, K. C.

    2005-11-01

    The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they physically provide, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.

  14. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtraction angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive use of x-rays and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. Contrast-enhanced 3D DSA of the target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrast 3D angiography of the skull was performed in the other 4 patients and registered with the MRA. The MRA was afterwards overlaid onto 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrast 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, hence lowering procedure risks and increasing treatment safety. PMID:27512846

  15. A beginner's guide to 3D printing 14 simple toy designs to get you started

    Rigsby, Mike

    2014-01-01

    A Beginner's Guide to 3D Printing is the perfect resource for those who would like to experiment with 3D design and manufacturing, but have little or no technical experience with the standard software. Author Mike Rigsby leads readers step-by-step through 14 simple toy projects, each illustrated with screen caps of Autodesk 123D Design, the most common free 3D software available. The projects are later described using Sketchup, another popular free software package. Beginning with basic projects that will take longer to print than design, readers are then given instruction on more advanced t

  16. 3D imaging of semiconductor components by discrete laminography

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach

  17. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  18. 3-D MR imaging of ectopia vasa deferentia

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  19. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  20. C-arm CT-guided 3D navigation of percutaneous interventions; C-Bogen-CT-unterstuetzte 3D-Navigation perkutaner Interventionen

    Becker, H.C.; Meissner, O.; Waggershauser, T. [Klinikum der Ludwig-Maximilians-Universitaet Muenchen, Campus Grosshadern, Institut fuer Klinische Radiologie, Muenchen (Germany)

    2009-09-15

    So far, C-arm CT images have predominantly been used for precise guidance of endovascular or intra-arterial therapy. A novel combined 3D-navigation C-arm system now also allows percutaneous interventions to be performed under cross-sectional and fluoroscopic control. Studies have reported successful CT-image-guided navigation with C-arm systems in vertebroplasty. Insertion of radiofrequency ablation probes is also conceivable for lung and liver tumors that have been labelled with lipiodol. In the future, C-arm CT-based navigation systems will probably allow complex interventions to be performed more simply and safely while simultaneously reducing radiation exposure. (orig.)

  1. Interventional spinal procedures guided and controlled by a 3D rotational angiographic unit

    Pedicelli, Alessandro; Verdolotti, Tommaso; Desiderio, Flora; D' Argento, Francesco; Colosimo, Cesare; Bonomo, Lorenzo [Catholic University of Rome, A. Gemelli Hospital, Department of Bioimaging and Radiological Sciences, Rome (Italy); Pompucci, Angelo [Catholic University of Rome, A. Gemelli Hospital, Department of Neurotraumatology, Rome (Italy)

    2011-12-15

    The aim of this paper is to demonstrate the usefulness of 2D multiplanar reformatted (MPR) images obtained from rotational acquisitions with cone-beam computed tomography technology during percutaneous extra-vascular spinal procedures performed in the angiography suite. We used a 3D rotational angiographic unit with a flat panel detector. MPR images were obtained from a rotational acquisition of 8 s (240 images at 30 fps), a tube rotation of 180°, and 5 s of post-processing on a local workstation. Multislice CT (MSCT) is the best guidance system for spinal approaches, permitting direct tomographic visualization of each spinal structure. Many operators, however, are trained with fluoroscopy, which is less expensive and allows real-time guidance, and in many centers the angiography suite is more readily available for percutaneous procedures. We present our 6-year experience in fluoroscopy-guided spinal procedures, which were performed under different conditions using MPR images. We illustrate cases of vertebroplasty, epidural injections, selective foraminal nerve root block, facet block, percutaneous treatment of disc herniation and spine biopsy, all performed with the help of MPR images for guidance and control in the event of difficult or anatomically complex access. The integrated use of "CT-like" MPR images allows the execution of spinal procedures under fluoroscopy guidance alone in all cases of dorso-lumbar access, with evident limitation of risks and complications, and without the need for recourse to MSCT guidance, thus eliminating CT-room time (often bearing high diagnostic charges) and avoiding organizational problems for procedures that would otherwise need, for example, combined use of a C-arm in the CT room. (orig.)

  2. GPU-accelerated denoising of 3D magnetic resonance images

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
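
    A minimal sketch of the kind of quality comparison described above (MSE and structural similarity against a reference) is given below, denoising a synthetic volume slice-by-slice with scikit-image's bilateral filter; the phantom and filter parameters are assumptions, and this is not the authors' GPU implementation.

```python
import numpy as np
from skimage.restoration import denoise_bilateral
from skimage.metrics import mean_squared_error, structural_similarity

# Synthetic "reference" volume and its noisy counterpart (assumed stand-ins
# for a clean/noisy MR pair); this is not the paper's GPU pipeline.
rng = np.random.default_rng(0)
reference = np.zeros((32, 64, 64), dtype=float)
reference[8:24, 16:48, 16:48] = 0.8                      # bright cuboid "anatomy"
noisy = np.clip(reference + 0.1 * rng.standard_normal(reference.shape), 0, 1)

# Bilateral filtering applied slice-by-slice (a simple CPU surrogate for the
# 3D GPU filter); the sigma values are illustrative, not recommended settings.
denoised = np.stack([
    denoise_bilateral(s, sigma_color=0.1, sigma_spatial=2) for s in noisy
])

for name, vol in [("noisy", noisy), ("denoised", denoised)]:
    mse = mean_squared_error(reference, vol)
    ssim = structural_similarity(reference, vol, data_range=1.0)
    print(f"{name}: MSE={mse:.4f}  MSSIM={ssim:.3f}")
```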

  3. Spectral ladar: towards active 3D multispectral imaging

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  4. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    X. W. Li

    2013-08-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA. Using the computational integral imaging reconstruction (CIIR) technique, a 3-D object is subsequently reconstructed on the output plane from the 2-D recovered EIA. Because the 2-D EIA is composed of a number of elemental images having their own perspectives of a 3-D image, even if the encrypted image is seriously damaged, the 3-D image can be successfully reconstructed from only partial data. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the experimental results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.
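
    The LC-MLCA keystream construction is specific to the paper; the sketch below only illustrates the general idea of XOR-encrypting an elemental image array with a keystream generated by a simple linear cellular automaton (rule 90), which is a toy stand-in for the maximum-length hybrid CA used by the authors.

```python
import numpy as np

def rule90_keystream(seed_bits, n_steps):
    """Evolve a rule-90 linear cellular automaton and collect its rows as a keystream.

    Rule 90 (next cell = left XOR right) is a toy linear CA used here in place of
    the linear-complemented maximum-length CA (LC-MLCA) from the paper.
    """
    state = np.array(seed_bits, dtype=np.uint8)
    rows = []
    for _ in range(n_steps):
        state = np.roll(state, 1) ^ np.roll(state, -1)   # left XOR right, periodic boundary
        rows.append(state.copy())
    return np.concatenate(rows)

# Toy 8-bit "elemental image array" (assumed random data in place of a real EIA).
rng = np.random.default_rng(1)
eia = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Build a bit keystream, pack it into bytes, and XOR-encrypt/decrypt.
bits = rule90_keystream(seed_bits=rng.integers(0, 2, size=64), n_steps=2)
key = np.packbits(bits)[: eia.size].reshape(eia.shape)
cipher = eia ^ key
recovered = cipher ^ key
assert np.array_equal(recovered, eia)                    # XOR decryption restores the EIA
print(cipher)
```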

  5. Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images

    Gill, Jeremy D.; Ladak, Hanif M.; Steinman, David A.; Fenster, Aaron

    1999-05-01

    In this paper, we report on a semi-automatic approach to segmentation of carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which first is rapidly inflated to approximately find the boundary of the artery, then is further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position in the 3D US image, which is within the carotid vessel. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved, or the 3D US image has poorly defined vessel edges.
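
    The inflating deformable model in this work is a custom implementation; the sketch below reproduces only the general pattern (a seed-initialized contour inflated by a balloon force and held back by image gradients) using scikit-image's morphological geodesic active contour on a synthetic vessel cross-section. The function choice, parameters and phantom are assumptions, not the authors' method.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

# Synthetic 2D "ultrasound" cross-section: a dark circular lumen in noise
# (an assumed phantom; a real freehand 3D US slice would be used in practice).
rng = np.random.default_rng(0)
img = 0.6 + 0.1 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 0.1      # vessel lumen

# Edge-stopping map: small near strong gradients, close to 1 in flat regions.
gimg = inverse_gaussian_gradient(img, alpha=100, sigma=2)

# A user "click" inside the vessel initializes a small disk, which the positive
# balloon force inflates until it is held back by the vessel-wall gradients.
init = disk_level_set(img.shape, center=(64, 64), radius=5)
segmentation = morphological_geodesic_active_contour(
    gimg, 150, init_level_set=init, smoothing=2, balloon=1, threshold=0.7)

print("segmented pixels:", int(segmentation.sum()))
```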

  6. High resolution 3D imaging of synchrotron generated microbeams

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  7. High resolution 3D imaging of synchrotron generated microbeams

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery

  8. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  9. Preparing diagnostic 3D images for image registration with planning CT images

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information which is poorly defined or absent in planning CT images. Our two years of clinical experience has shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before this data is suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As a problem was discovered we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We have learned that the specific steps to correct these problems, in order of application, are: Also, we found that fast feedback and large image capacity (at least 2000 x 2000 12-bit pixels) are essential for practical application

  10. ROIC for gated 3D imaging LADAR receiver

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low-noise optical receivers to detect fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 μm pitch was designed as a gated optical receiver for 3D-LADAR. The ROIC operates at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. Specifically, the preamplifier uses a capacitive-feedback transimpedance amplifier (CTIA) structure with two capacitors that offer switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to ROIC signal processing because of their operating characteristics. The output driver uses a simple unity-gain buffer; because the signal has already been amplified in the column-level circuit, the buffer employs a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns, and for integration currents from 200 nA to 4 μA the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns, and for integration currents from 1 nA to 20 nA the nonlinearity is likewise less than 1%.

  11. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  12. Fast 3D T1-weighted brain imaging at 3 Tesla with modified 3D FLASH sequence

    Longitudinal relaxation times (T1) of white and gray matter become close at high magnetic field. Therefore, classical T1-sensitive methods, such as spoiled FLASH, fail to give sufficient contrast in human brain imaging at 3 Tesla. An excellent T1 contrast can be achieved at high field by gradient echo imaging with a preparatory inversion pulse. The inversion recovery (IR) preparation can be combined with fast 2D gradient echo scans. In this paper we present an application of this technique to rapid 3-dimensional imaging. The new technique, called 3D SIR FLASH, was implemented on a Bruker MSLX system equipped with a 3 T, 90 cm horizontal-bore magnet operating at the Centre Hospitalier in Rouffach, France. The new technique was compared with traditional 3D imaging on MRI images of healthy volunteers. White and gray matter are clearly distinguishable when 3D SIR FLASH is used. The total acquisition time for a 128x128x128 image was 5 minutes. Three-dimensional visualization with facet representation of surfaces and oblique sections was done off-line on an INDIGO Extreme workstation. The new technique is widely used at FORENAP, Centre Hospitalier in Rouffach, Alsace. (author)
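
    For context, the T1 contrast restored by the inversion-recovery preparation follows the standard longitudinal recovery relation below (a textbook expression, not taken from this paper); tissues with different T1 cross zero at different inversion times TI, which is what separates white and gray matter at 3 T.

```latex
% Standard inversion-recovery longitudinal magnetization at inversion time TI
M_z(TI) = M_0\left(1 - 2\,e^{-TI/T_1} + e^{-TR/T_1}\right),
\qquad
TI_{\mathrm{null}} \approx T_1 \ln 2 \quad (TR \gg T_1)
```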

  13. Multimodal Registration and Fusion for 3D Thermal Imaging

    Moulay A. Akhloufi; Benjamin Verney

    2015-01-01

    3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we witness an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced manufactured parts inspections and metrology analysis. However, we are not able to detect subsurface defects. This kind ...

  14. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Yongjun Zhang

    2015-07-01

    The paper presents an automatic region detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because typical recorders are mounted at the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Because these image data are inexpensive, widely used, and offer extensive shooting coverage, utilizing them can reduce the costs of reconstructing and updating street scenes; we therefore propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After the masked feature points are removed by our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparison experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the camera-relative poses. Removing the masked features also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct a single target building as several repeated buildings.
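
    The core of the Mask idea as described above is to suppress feature points on vehicles and guardrails before matching. A minimal sketch using OpenCV's ORB detector with a binary mask is shown below; the synthetic frames and the hand-placed rectangular mask stand in for real dash-cam images and the paper's automatic region detection, and are assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-ins for two consecutive driving-recorder frames: a textured
# image and a horizontally shifted copy (real dash-cam frames would be loaded
# with cv2.imread in practice).
rng = np.random.default_rng(0)
img1 = (rng.random((480, 640)) * 255).astype(np.uint8)
img2 = np.roll(img1, shift=5, axis=1)                 # simulated camera motion

# Binary mask: 0 over "mask" regions (vehicles/guardrails), 255 elsewhere.
# A fixed rectangle stands in for the paper's automatic region detection.
mask = np.full(img1.shape, 255, dtype=np.uint8)
mask[300:480, 200:440] = 0                            # assumed vehicle region

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, mask)          # features outside masked regions only
kp2, des2 = orb.detectAndCompute(img2, mask)

# Brute-force Hamming matching with cross-check, standing in for the pairwise
# matching stage of an SfM pipeline such as VisualSFM.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("matches outside masked regions:", len(matches))
```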

  15. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches

  16. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  17. Computational ghost imaging versus imaging laser radar for 3D imaging

    Hardy, Nicholas D

    2012-01-01

    Ghost imaging has been receiving increasing interest for possible use as a remote-sensing system. There has been little comparison, however, between ghost imaging and the imaging laser radars with which it would be competing. Toward that end, this paper presents a performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar. Both are considered for range-resolving (3D) imaging of a collection of rough-surfaced objects at standoff ranges in the presence of atmospheric turbulence. Their spatial resolutions and signal-to-noise ratios are evaluated as functions of the system parameters, and these results are used to assess each system's performance trade-offs. Scenarios in which a reflective ghost-imaging system has advantages over a laser radar are identified.

  18. 3D imaging of nanomaterials by discrete tomography

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi2 nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.
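
    Only the central idea of discrete tomography, snapping reconstructed voxels onto a small set of known grey levels, is sketched below; this is the discretization step that DART interleaves with algebraic reconstruction updates, not the full DART algorithm, and the grey levels and toy volume are illustrative assumptions.

```python
import numpy as np

def project_to_grey_levels(recon, levels):
    """Snap every voxel of a continuous reconstruction to the nearest known grey level.

    This is the discretization step at the heart of discrete tomography (DART
    alternates it with algebraic updates of the boundary voxels); it is shown
    here in isolation, for illustration only.
    """
    levels = np.asarray(levels, dtype=float)
    # Distance of every voxel to every allowed grey level; pick the closest.
    idx = np.abs(recon[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Toy continuous reconstruction of a two-material sample (values are assumed).
rng = np.random.default_rng(0)
recon = np.where(rng.random((8, 8, 8)) > 0.7, 1.0, 0.0)
recon += 0.15 * rng.standard_normal((8, 8, 8))           # reconstruction noise
discrete = project_to_grey_levels(recon, levels=[0.0, 1.0])
print(np.unique(discrete))
```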

  19. Orthodontic treatment plan changed by 3D images

    Clinical application of CBCT is most often indicated for impacted teeth, hyperodontia, transposition, ankylosis or root resorption, and other pathologies of the maxillofacial area. Our goal is to show how information from 3D images changes the protocol of orthodontic treatment. As material, we present six of our clinical cases and the changes in the treatment plan that were adopted after analyzing the information carried in the three planes of CBCT. These cases are illustrative of orthodontic practice and require an individual approach to their analysis and to the decisions taken. Our discussion concerns the visualization of impacted teeth, where we need to evaluate their vertical depth and their mesiodistal relationship to the adjacent bone structures. In patients with hyperodontia, the assessment is of utmost importance for deciding which teeth should be extracted and which should be aligned into the dental arch. We conclude that this diagnostic information is essential for decisions about the treatment plan: accurate imaging leads to a better treatment plan and more predictable results. (authors) Key words: CBCT. IMPACTED CANINES. HYPERODONTIA. TRANSPOSITION

  20. Investigating the guiding of streamers in nitrogen/oxygen mixtures with 3D simulations

    Teunissen, Jannis; Nijdam, Sander; Takahashi, Eiichi; Ebert, Ute

    2014-10-01

    Recent experiments by S. Nijdam and E. Takahashi have demonstrated that streamers can be guided by weak pre-ionization in nitrogen/oxygen mixtures, as long as there is not too much oxygen (less than 1%). The pre-ionization was created by a laser beam, and was orders of magnitude lower than the density in a streamer channel. Here, we will study the guiding of streamers with 3D numerical simulations. First, we present simulations that can be compared with the experiments and confirm that the laser pre-ionization does not introduce space charge effects by itself. Then we investigate topics such as: the conditions under which guiding can occur; how photoionization reduces the guiding at higher oxygen concentrations; and whether guided streamers keep their propagation direction outside the pre-ionized region. JT was supported by STW Project 10755, SN by the FY2012 Researcher Exchange Program between JSPS and NWO, and ET by JSPS KAKENHI Grant Number 24560249.

  1. Dynamic 3D cell rearrangements guided by a fibronectin matrix underlie somitogenesis.

    Gabriel G Martins

    Somites are transient segments formed in a rostro-caudal progression during vertebrate development. In chick embryos, segmentation of a new pair of somites occurs every 90 minutes and involves a mesenchyme-to-epithelium transition of cells from the presomitic mesoderm. Little is known about the cellular rearrangements involved, and, although it is known that the fibronectin extracellular matrix is required, its actual role remains elusive. Using 3D and 4D imaging of somite formation we discovered that somitogenesis consists of a complex choreography of individual cell movements. Epithelialization starts medially with the formation of a transient epithelium of cuboidal cells, followed by cell elongation and reorganization into a pseudostratified epithelium of spindle-shaped epitheloid cells. Mesenchymal cells are then recruited to this medial epithelium through accretion, a phenomenon that spreads to all sides, except the lateral side of the forming somite, which epithelializes by cell elongation and intercalation. Surprisingly, an important contribution to the somite epithelium also comes from the continuous egression of mesenchymal cells from the core into the epithelium via its apical side. Inhibition of fibronectin matrix assembly first slows down the rate, and then halts somite formation, without affecting pseudopodial activity or cell body movements. Rather, cell elongation, centripetal alignment, N-cadherin polarization and egression are impaired, showing that the fibronectin matrix plays a role in polarizing and guiding the exploratory behavior of somitic cells. To our knowledge, this is the first 4D in vivo recording of a full mesenchyme-to-epithelium transition. This approach brought new insights into this event and highlighted the importance of the extracellular matrix as a guiding cue during morphogenesis.

  2. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  3. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Vol. 32, No. 6 (2008), pp. 513-520. ISSN 0895-6111. R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: image segmentation * Gaussian mixture model * 3D image analysis. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.192, year: 2008. http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  4. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  5. GammaModeler 3-D gamma-ray imaging technology

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide a 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.

  6. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  7. Holographic Image Plane Projection Integral 3D Display

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  8. 3-D Reconstruction of Medical Image Using Wavelet Transform and Snake Model

    Jinyong Cheng

    2009-12-01

    Full Text Available Medical image segmentation is an important step in 3-D reconstruction, and 3-D reconstruction from medical images is an important application of computer graphics and biomedical image processing. An improved image segmentation method suitable for 3-D reconstruction is presented in this paper, together with a 3-D reconstruction algorithm that builds the 3-D model from medical images. A rough edge is first obtained by a multi-scale wavelet transform. Starting from this rough edge, an improved gradient vector flow (GVF) snake model is used to find the object contour in the image. In the experiments, we reconstruct 3-D models of the kidney, liver and brain putamen. The experimental results indicate that the new algorithm produces accurate 3-D reconstructions.
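
    As a hedged illustration of the first stage described above (a rough edge from a multi-scale wavelet transform), the sketch below thresholds the coarse-level detail coefficients of a 2-D wavelet decomposition using PyWavelets. The wavelet, decomposition level and quantile threshold are placeholders; the resulting mask would only serve to initialize a snake such as the improved GVF model in the record, which is not reproduced here.

    ```python
    import numpy as np
    import pywt

    def rough_edge_map(image, wavelet="haar", level=2, keep=0.90):
        """Rough edge map from multi-scale wavelet detail coefficients (illustrative)."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        cH, cV, cD = coeffs[1]                       # detail bands at the coarsest level
        mag = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)   # combined detail magnitude
        thr = np.quantile(mag, keep)                 # keep only the strongest responses
        edges_coarse = mag > thr
        # Upsample the coarse edge mask back to the image grid and crop to size.
        factor = 2 ** level
        edges = np.kron(edges_coarse, np.ones((factor, factor), dtype=bool))
        return edges[: image.shape[0], : image.shape[1]]
    ```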

  9. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  10. Seeing is saving: the benefit of 3D imaging in gynecologic brachytherapy.

    Viswanathan, Akila N; Erickson, Beth A

    2015-07-01

    Despite a concerning decline in the use of brachytherapy over the past decade, no other therapy is able to deliver a very high dose of radiation into or near a tumor, with a rapid fall-off of dose to adjacent structures. Compared to traditional X-ray-based brachytherapy that relies on points, the use of CT and MR for 3D planning of gynecologic brachytherapy provides a much more accurate volume-based calculation of dose to an image-defined tumor and to the bladder, rectum, sigmoid, and other pelvic organs at risk (OAR) for radiation complications. The publication of standardized guidelines and an online contouring teaching atlas for performing 3D image-based brachytherapy has created a universal platform for communication and training. This has resulted in a uniform approach to using image-guided brachytherapy for treatment and an internationally accepted format for reporting clinical outcomes. Significant improvements in survival and reductions in toxicity have been reported with the addition of image guidance to increase dose to tumor and decrease dose to the critical OAR. Future improvements in individualizing patient treatments should include a more precise definition of the target. This will allow dose modulation based on the amount of residual disease visualized on images obtained at the time of brachytherapy. PMID:25748646

  11. Superimposing of virtual graphics and real image based on 3D CAD information

    2000-01-01

    Proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto real images taken by a CCD camera, and presents computer simulation results.
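
    The record gives only a one-sentence summary, so the following is a hedged sketch of the core geometric step such a system needs: projecting 3D CAD model points into the CCD image with a pinhole camera model so that virtual graphics can be drawn over the real image. The intrinsic matrix, pose and cube vertices are placeholders, not values from the paper.

    ```python
    import numpy as np

    def project_points(points_3d, K, R, t):
        """Project Nx3 CAD model points into pixel coordinates with a pinhole model."""
        cam = R @ points_3d.T + t.reshape(3, 1)      # world -> camera coordinates
        uvw = K @ cam                                # camera -> homogeneous pixels
        return (uvw[:2] / uvw[2]).T                  # perspective division -> (N, 2)

    # Placeholder intrinsics/extrinsics; in practice these come from camera calibration.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, 1000.0])

    cube = np.array([[x, y, z] for x in (0, 50) for y in (0, 50) for z in (0, 50)], float)
    pixels = project_points(cube, K, R, t)           # draw these over the CCD frame
    ```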

  12. 3-D Imaging Systems for Agricultural Applications—A Review

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  13. 3-D Imaging Systems for Agricultural Applications-A Review.

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  14. 3-D Imaging Systems for Agricultural Applications—A Review

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  15. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based technique and an image-space-based technique, and compared their performance. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation, as sketched below. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree from the tiepoints. The experiments confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the image-space-based results, we observed some blanks in the 3D point clouds. In the object-space-based results, we observed more blunders than in the image-space-based ones, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
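
    A hedged sketch of the object-space matching step described above: for a fixed horizontal position, candidate heights are projected into two images with known 3x4 projection matrices, and the height with the best grey-level correlation of small patches is kept. The projection matrices, patch size and candidate list are assumptions; the paper's relaxation matching and global optimization are not reproduced.

    ```python
    import numpy as np

    def project(P, X):
        """Project a 3D point X with a 3x4 projection matrix P to pixel (col, row)."""
        u = P @ np.append(X, 1.0)
        return u[:2] / u[2]

    def ncc(a, b):
        """Normalized grey-level correlation of two patches."""
        a = a - a.mean(); b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def patch(img, uv, half=5):
        c, r = np.round(uv).astype(int)
        return img[r - half:r + half + 1, c - half:c + half + 1].astype(float)

    def best_height(xy, z_candidates, img1, img2, P1, P2):
        """Object-space matching: pick the height with maximum grey-level correlation."""
        scores = []
        for z in z_candidates:
            X = np.array([xy[0], xy[1], z])
            p1, p2 = patch(img1, project(P1, X)), patch(img2, project(P2, X))
            ok = p1.shape == p2.shape and p1.size > 0
            scores.append(ncc(p1, p2) if ok else -1.0)
        return z_candidates[int(np.argmax(scores))]
    ```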

  16. Segmented images and 3D images for studying the anatomical structures in MRIs

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    To identify pathological findings in MRIs, the anatomical structures in the MRIs must be identified first. For studying the anatomical structures in MRIs, an educational tool that includes horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was composed. This educational tool, which includes horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  17. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require a large space and high costs. A low-cost, small-sized 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. The system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images with an X-ray car or portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  18. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T

    2014-12-01

    Full Text Available Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal, including MRI, CT, PET and SPECT images. Of these, MRI and CT scans of an organ are available as a stack of 2D images taken at different angles. This 2D stack of images is used to obtain a 3D view of the organ of interest, to aid doctors in diagnosis. Existing 3D reconstruction techniques are voxel-based; they try to reconstruct the 3D view from the intensity value stored at each voxel location and do not make use of the shape/depth information available in the 2D image stack. In this work, a 3D reconstruction technique for an MRI/CT 2D image stack based on Shapelets is proposed. Here, the shape/depth information available in each 2D image of the stack is exploited to obtain a more accurate 3D view of the organ of interest. Experimental results demonstrate the efficiency of the proposed technique.

  19. 3D MODELLING FROM UNCALIBRATED IMAGES – A COMPARATIVE STUDY

    Limi V L

    2014-03-01

    Full Text Available 3D modeling is a demanding area of research. Creating a 3D world from a sequence of images captured with different mobile cameras poses an additional challenge in this field. We plan to explore this area of computer vision to model a 3D world of Indian heritage sites for virtual tourism. In this paper, a comparative study of existing methods for 3D reconstruction from uncalibrated image sequences is presented. The study covers different scenarios of modeling 3D objects from uncalibrated images, including community photo collections, images taken with an unknown camera, and 3D modeling from two uncalibrated images. The available methods were studied to give an overall view of the techniques used in each step of 3D reconstruction, and the merits and demerits of each method were compared.

  20. 3D/2D image registration using weighted histogram of gradient directions

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to the large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numerical simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to +/-90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
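
    A hedged sketch of the feature named in the title: a histogram of gradient directions weighted by gradient magnitude, which can be compared between a DRR and a fluoroscopic image. The bin count and similarity measure are illustrative; the paper's sequential search over rotation and translation is not reproduced.

    ```python
    import numpy as np

    def weighted_gradient_histogram(image, n_bins=36):
        """Histogram of gradient directions weighted by gradient magnitude."""
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx, gy)
        direction = np.arctan2(gy, gx)                 # orientation in (-pi, pi]
        hist, _ = np.histogram(direction, bins=n_bins, range=(-np.pi, np.pi),
                               weights=magnitude)
        return hist / (hist.sum() + 1e-12)

    def histogram_similarity(h1, h2):
        """Simple similarity between two normalized histograms (correlation)."""
        return float(np.corrcoef(h1, h2)[0, 1])
    ```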

  1. A theoretical and experimental study on no-guide light pen type 3D-coordinate measurement system

    Zhang, Xiaofang; Yu, Xin; Jiang, Chengzhi; Wang, Baoguang

    2003-04-01

    A novel no-guide, light-pen type 3D-coordinate measurement system with three sets of position sensitive devices (PSDs) to realize convergent intersection imaging is introduced. It is called a light-pen measurement system because the measuring head is shaped like a pen with several light sources on it. The structural design, measurement principle and experimental results are presented. The theoretical analysis and experimental results show that the system has a simple structure, a high degree of automation and high accuracy, and can be used effectively in measurement applications in mechanical manufacturing, robotics, automotive, aviation and medicine.

  2. Four-view stereoscopic imaging and display system for web-based 3D image communication

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server, a graphics card with four outputs, a projection-type 4-view 3D display system and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth and number of views. Experimental results show that the proposed system can display 4-view VGA images with 16-bit color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate and number of views are mutually interrelated and can be easily controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical applications of web-based 3D image communication.

  3. Imaging 3D strain field monitoring during hydraulic fracturing processes

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber-optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in the concrete were used to monitor the build-up of the 3D strain field under external hydraulic pressure. High-spatial-resolution strain fields were interrogated through in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber-optic sensor scheme presented in this paper provides scientists and engineers with a unique laboratory tool for understanding hydraulic fracturing processes in various rock formations and their environmental impacts.

  4. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  5. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    Arathi T; Latha Parameswaran

    2014-01-01

    Image reconstruction is an active research field, due to the increasing need for geometric 3D models in movie industry, games, virtual environments and in medical fields. 3D image reconstruction aims to arrive at the 3D model of an object, from its 2D images taken at different viewing angles. Medical images are multimodal, which includes MRI, CT scan image, PET and SPECT images. Of these, MRI and CT scan images of an organ when taken, is available as a stack of 2D images, taken at different a...

  6. Statistical skull models from 3D X-ray images

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A principal component analysis is performed on the normalized registered data to build a statistical linear model of skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
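
    A hedged sketch of the statistical step once all meshes have been registered to the shared low-density template: vertex coordinates are flattened, and a principal component analysis captures the main modes of skull and mandible shape variation. Array shapes and the synthesis call are illustrative, not the authors' code.

    ```python
    import numpy as np

    def build_shape_model(meshes):
        """PCA shape model from registered meshes, each an (n_vertices, 3) array."""
        X = np.stack([m.reshape(-1) for m in meshes])        # (n_subjects, 3*n_vertices)
        mean_shape = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        variance = s ** 2 / (len(meshes) - 1)                # variance explained per mode
        return mean_shape, Vt, variance

    def synthesize(mean_shape, modes, variance, b):
        """New shape from the first len(b) modes, b given in standard deviations."""
        coeffs = np.asarray(b) * np.sqrt(variance[: len(b)])
        shape = mean_shape + coeffs @ modes[: len(b)]
        return shape.reshape(-1, 3)
    ```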

  7. Optimization of spine surgery planning with 3D image templating tools

    Augustine, Kurt E.; Huddleston, Paul M.; Holmes, David R., III; Shridharani, Shyam M.; Robb, Richard A.

    2008-03-01

    The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but are currently limited to generating simple, linear and angular measurements from 2D views orthogonal to the central axis of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as oblique lengths, angles, levers, and pivot points within individual vertebra. We have developed a clinician friendly spine surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive templating for 3D placement of implants. The template placement is guided by the simultaneous representation of multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to increase spinal procedure efficiency and safety, and reduce time and cost of the operation.

  8. 3D CT Image-Guided Parallel Mechanism-Assisted Femur Fracture Reduction

    龚敏丽; 徐颖; 唐佩福; 胡磊; 杜海龙; 吕振天; 姚腾洲

    2011-01-01

    Traditional clinical femur fracture reduction surgery is imperfect and often results in misalignment and high intraoperative radiation exposure. To address this problem, a fracture reduction method is proposed based on preoperative CT images and a 6-degree-of-freedom parallel mechanism fixed to the injured femur. Based on the body's symmetry, the method uses the mirrored contralateral femur as a standard to guide the reduction of femoral shaft fractures. Using twelve markers on the actuating mechanism, the computer calculates the length of each rod of the mechanism in the virtual space in real time. Animal bone experiments demonstrate the effectiveness of the approach.

  9. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats

    Tang, Jianbo; Jason E. Coleman; DAI, XIANJIN; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments rev...

  10. Performance Evaluating of some Methods in 3D Depth Reconstruction from a Single Image

    Wen, Wei

    2009-01-01

    We studied the problem of 3D reconstruction from a single image. 3D reconstruction is one of the basic problems in computer vision and is usually achieved using two or more images of a scene. However, recent research in the computer vision field has enabled the recovery of 3D information from only a single image. The methods used in such reconstructions are based on depth information, projection geometry, image content, human psychology and so on. Each met...

  11. Radon transport modelling: User's guide to RnMod3d

    RnMod3d is a numerical computer model of soil-gas and radon transport in porous media. It can be used, for example, to study radon entry from soil into houses in response to indoor-outdoor pressure differences or changes in atmospheric pressure. It can also be used for flux calculations of radon from the soil surface or to model radon exhalation from building materials such as concrete. The finite-volume model is a technical research tool, and it cannot be used meaningfully without good understanding of the involved physical equations. Some understanding of numerical mathematics and the programming language Pascal is also required. Originally, the code was developed for internal use at Risoe only. With this guide, however, it should be possible for others to use the model. Three-dimensional steady-state or transient problems with Darcy flow of soil gas and combined generation, radioactive decay, diffusion and advection of radon can be solved. Moisture is included in the model, and partitioning of radon between air, water and soil grains (adsorption) is taken into account. Most parameters can change in time and space, and transport parameters (diffusivity and permeability) may be anisotropic. This guide includes benchmark tests based on simple problems with known solutions. RnMod3d has also been part of an international model intercomparison exercise based on more complicated problems without known solutions. All tests show that RnMod3d gives results of good quality. (au)

  12. Radon transport modelling: User's guide to RnMod3d

    Andersen, C.E

    2000-08-01

    RnMod3d is a numerical computer model of soil-gas and radon transport in porous media. It can be used, for example, to study radon entry from soil into houses in response to indoor-outdoor pressure differences or changes in atmospheric pressure. It can also be used for flux calculations of radon from the soil surface or to model radon exhalation from building materials such as concrete. The finite-volume model is a technical research tool, and it cannot be used meaningfully without good understanding of the involved physical equations. Some understanding of numerical mathematics and the programming language Pascal is also required. Originally, the code was developed for internal use at Risoe only. With this guide, however, it should be possible for others to use the model. Three-dimensional steady-state or transient problems with Darcy flow of soil gas and combined generation, radioactive decay, diffusion and advection of radon can be solved. Moisture is included in the model, and partitioning of radon between air, water and soil grains (adsorption) is taken into account. Most parameters can change in time and space, and transport parameters (diffusivity and permeability) may be anisotropic. This guide includes benchmark tests based on simple problems with known solutions. RnMod3d has also been part of an international model intercomparison exercise based on more complicated problems without known solutions. All tests show that RnMod3d gives results of good quality. (au)
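
    RnMod3d itself is a three-dimensional Pascal finite-volume code; as a much-reduced illustration of the kind of balance it solves, the sketch below discretizes a one-dimensional steady-state radon diffusion-decay problem, D d2C/dx2 - lambda*C + G = 0, on a uniform soil column with zero concentration at the surface and no flux at depth. All parameter values are placeholders.

    ```python
    import numpy as np

    # Soil column: radon generation G, radioactive decay lambda, diffusion D.
    L, n = 2.0, 200                        # column depth [m], number of cells
    dx = L / n
    D = 2.0e-6                             # effective diffusivity [m^2/s]
    lam = 2.1e-6                           # Rn-222 decay constant [1/s]
    G = 1.0e-2                             # generation rate [Bq/(m^3 s)]

    # Steady state:  D*(C[i-1] - 2*C[i] + C[i+1])/dx^2 - lam*C[i] + G = 0
    A = np.zeros((n, n))
    b = np.full(n, -G)
    for i in range(n):
        A[i, i] = -2.0 * D / dx**2 - lam
        if i > 0:
            A[i, i - 1] = D / dx**2
        if i < n - 1:
            A[i, i + 1] = D / dx**2
    # Boundary conditions: C = 0 at the surface (i = 0), zero flux at depth.
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, -1] += D / dx**2                 # mirror cell for the no-flux boundary

    C = np.linalg.solve(A, b)              # radon concentration profile [Bq/m^3]
    surface_flux = D * (C[1] - C[0]) / dx  # crude exhalation estimate at the surface
    ```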

  13. Image guided multibeam radiotherapy

    This paper provides an outlook on the status of the first development stages of an updated design of a conformal radiotherapy system based on 3D tumor images obtained as output from the latest generation of imaging machines, such as PET, CT and MR, which offer very valuable output for cancer diagnosis. Prospective evaluation of current software codes and the acquisition of useful experience in surgical planning involve a multidisciplinary process as an initial and unavoidable stage in developing expert software and user skills, which ensures that the radiation dose is delivered correctly in geometry and value in each voxel, as a basic radiation protection condition. The images obtained have been validated by producing anatomical models of the regions of interest by rapid prototyping of the 3D segmented images and evaluating them against the real regions during surgical procedures. (author)

  14. Image guided multibeam radiotherapy

    This paper provides an outlook on the status of the first development stages of an updated design of a conformal radiotherapy system based on 3D tumor images obtained as output from the latest generation of imaging machines, such as PET, CT and MR, which offer very valuable output for cancer diagnosis. Prospective evaluation of current software codes and the acquisition of useful experience in surgical planning involve a multidisciplinary process as an initial and unavoidable stage in developing expert software and user skills, which ensures that the radiation dose is delivered correctly in geometry and value in each voxel, as a basic radiation protection condition. The images obtained have been validated by producing anatomical models of the regions of interest by rapid prototyping of the 3D segmented images and evaluating them against the real regions during surgical procedures. (author)

  15. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Paoli Alessandro

    2011-02-01

    Full Text Available Abstract Background Precise placement of dental implants is a crucial step in optimizing both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool for controlling the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the loss of accuracy in transferring the CT planning information to the surgical field through custom-made stereolithographic surgical guides. Methods In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information to the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results A clinical case, involving a fully edentulous jaw patient, was used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide was designed to place implants in the bone structure of the patient. The analysis of the results allowed the clinician to monitor the errors occurring at each step of manufacturing the physical templates. Conclusions The use of an optical scanner, which has higher resolution and accuracy than CT scanning, has proved to be a valid means of controlling the precision of the various physical models adopted and of pointing out possible error sources. The case study of a fully edentulous patient confirmed the feasibility of the proposed methodology.

  16. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    This paper presents an automatic extraction system (called TOPS-3D: Top-Down Parallel Pattern Recognition System for 3D Images) for soft tissues in 3D MRI head images using a model-driven analysis algorithm. As in the construction of the earlier TOPS system we developed, two concepts were considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level, and the other is a parallel image processing structure used to extract plural candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system that includes both 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique applies the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The TOPS-3D system applied to 3D MRI head images consists of three levels: the first and second levels form the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 voxels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data through the use of model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
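
    A hedged sketch of the lower-level step highlighted in the abstract: a 3D morphological opening whose structuring element stands in for the higher-level structural model (here simply an ellipsoid), applied to a crude intensity-threshold mask to keep only model-compatible candidate regions. This uses scipy.ndimage and is not the TOPS-3D system itself; the radii and threshold are placeholders.

    ```python
    import numpy as np
    from scipy import ndimage

    def ellipsoid_structure(radii):
        """Binary ellipsoid structuring element standing in for the higher-level model."""
        rz, ry, rx = radii
        z, y, x = np.ogrid[-rz:rz + 1, -ry:ry + 1, -rx:rx + 1]
        return (z / rz) ** 2 + (y / ry) ** 2 + (x / rx) ** 2 <= 1.0

    def model_driven_opening(binary_volume, radii=(2, 4, 4)):
        """Opening keeps only regions compatible with the model-derived element."""
        selem = ellipsoid_structure(radii)
        return ndimage.binary_opening(binary_volume, structure=selem)

    # Example: candidate soft-tissue mask from a crude intensity threshold.
    volume = np.random.rand(64, 64, 64)
    candidates = volume > 0.6
    filtered = model_driven_opening(candidates)
    labels, n_regions = ndimage.label(filtered)    # candidate regions for the entity
    ```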

  17. Imaging system for creating 3D block-face cryo-images of whole mice

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high-resolution, high-sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high-resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  18. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
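
    A hedged CPU sketch of the principle behind DRR generation: the CT volume is rotated to the desired view and attenuation is integrated along parallel rays. The paper's GPU implementation uses perspective ray casting with approximations, which is not reproduced; the rotation angle, attenuation scaling and the random stand-in volume are placeholders.

    ```python
    import numpy as np
    from scipy import ndimage

    def drr_parallel(ct_volume, angle_deg=0.0, axis_pair=(0, 2)):
        """Digitally reconstructed radiograph by parallel projection (illustrative).

        Rotates the CT volume about one axis and integrates attenuation along
        the viewing direction; a GPU ray caster replaces this inner loop in practice.
        """
        rotated = ndimage.rotate(ct_volume, angle_deg, axes=axis_pair,
                                 reshape=False, order=1)
        mu = np.clip(rotated, 0.0, None)           # treat intensities as attenuation
        line_integrals = mu.sum(axis=axis_pair[1])
        return np.exp(-line_integrals * 0.001)     # Beer-Lambert with a placeholder scale

    ct = np.random.rand(133, 256, 256)             # stand-in for the 256x256x133 volume
    drr = drr_parallel(ct, angle_deg=30.0)
    ```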

  19. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique. PMID:27410124
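
    The record derives a general back-projection model and a dedicated calibration method, which are not reproduced here. As a simplified, hedged sketch: if the projector is treated as an inverse camera with assumed intrinsics, its extrinsic pose relative to points measured by the external positioning equipment can be estimated from 3D-2D correspondences with OpenCV's solvePnP. All coordinate values and the intrinsic matrix below are placeholders.

    ```python
    import numpy as np
    import cv2

    # 3D coordinates measured by the external positioning equipment (placeholder data).
    object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0],
                              [50, 50, 80], [25, 75, 40]], dtype=np.float64)

    # Assumed projector intrinsics (the projector treated as an inverse camera).
    K = np.array([[1400.0, 0.0, 512.0],
                  [0.0, 1400.0, 384.0],
                  [0.0,    0.0,   1.0]])
    dist = np.zeros(5)

    # Synthesize projector pixel coordinates from a known test pose, then recover it.
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([-40.0, 10.0, 600.0])
    image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
    image_points = image_points.reshape(-1, 2)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)                 # estimated extrinsic rotation
    reproj, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    rms = np.sqrt(np.mean((reproj.reshape(-1, 2) - image_points) ** 2))
    ```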

  20. A Pipeline for 3D Multimodality Image Integration and Computer-assisted Planning in Epilepsy Surgery

    Nowell, Mark; Rodionov, Roman; Zombori, Gergely; Sparks, Rachel; Rizzi, Michele; Ourselin, Sebastien; Miserocchi, Anna; McEvoy, Andrew; Duncan, John

    2016-01-01

    Epilepsy surgery is challenging and the use of 3D multimodality image integration (3DMMI) to aid presurgical planning is well-established. Multimodality image integration can be technically demanding, and is underutilised in clinical practice. We have developed a single software platform for image integration, 3D visualization and surgical planning. Here, our pipeline is described in step-by-step fashion, starting with image acquisition, proceeding through image co-registration, manual segmen...

  1. D3D augmented reality imaging system: proof of concept in mammography

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  2. Fast fully 3-D image reconstruction in PET using planograms.

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
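
    The planogram approach rests on a central-section theorem relating projection data to sections of the Fourier transform. As a hedged, minimal check of the classical two-dimensional version that the paper generalizes to 4-D planograms, the snippet below verifies that the 1-D FFT of a parallel projection equals the central line of the image's 2-D FFT.

    ```python
    import numpy as np

    # Any 2D activity distribution (placeholder image).
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))

    # Parallel projection along y: p(x) = sum_y f(x, y).
    projection = image.sum(axis=0)

    # Central-section theorem: FFT of the projection equals the ky = 0 line of the 2D FFT.
    lhs = np.fft.fft(projection)
    rhs = np.fft.fft2(image)[0, :]

    print(np.allclose(lhs, rhs))   # True
    ```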

  3. Weighted 3D GS Algorithm for Image-Quality Improvement of Multi-Plane Holographic Display

    李芳; 毕勇; 王皓; 孙敏远; 孔新新

    2012-01-01

    Theoretically, the three-dimensional (3D) GS algorithm can realize 3D displays; however, the correlation of the output image is restricted because of the interaction among multiple planes, thus failing to meet the image-quality requirements of practical applications. We introduce weight factors and propose the weighted 3D GS algorithm, which realizes selective control of the correlation of the multi-plane display based on the traditional 3D GS algorithm. Improvement in image quality is accomplished by the selection of appropriate weight factors.
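
    A hedged sketch of the classic single-plane Gerchberg-Saxton (GS) iteration using FFTs; the weighted 3D GS algorithm of the record extends this by propagating to several depth planes and weighting each plane's amplitude constraint, which is not reproduced here. The target pattern and iteration count are placeholders.

    ```python
    import numpy as np

    def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
        """Classic single-plane GS: find a hologram phase whose far field matches |target|."""
        rng = np.random.default_rng(seed)
        phase = np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
        field = target_amplitude * phase                 # start in the image plane
        for _ in range(n_iter):
            hologram = np.fft.ifft2(field)
            hologram = np.exp(1j * np.angle(hologram))   # phase-only hologram constraint
            field = np.fft.fft2(hologram)
            field = target_amplitude * np.exp(1j * np.angle(field))  # amplitude constraint
        return np.angle(hologram)

    target = np.zeros((256, 256)); target[96:160, 96:160] = 1.0   # simple test pattern
    phase_hologram = gerchberg_saxton(target)
    ```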

  4. Flash trajectory imaging of target 3D motion

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which directly extracts targets from a complex background and decreases the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that one can directly obtain the moving trajectory. In this paper, we study the algorithm for flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can provide motion parameters of moving targets.

  5. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. First, we build low-resolution 3D images which have sub-pixel spatial displacements between each other and generate the reference image. Then, we map the low-resolution images onto the high-resolution reference image using 3D motion estimation and revise the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we display images of different resolutions simultaneously. We evaluated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation reconstruction methods. The experiments showed that the performance of the 3D POCS algorithm was better than that of the 3 interpolation reconstruction methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449
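
    A hedged, two-dimensional sketch of the consistency projection at the heart of POCS super-resolution: the high-resolution estimate is repeatedly corrected so that, after shifting and box-average downsampling, it agrees with each observed low-resolution frame. The observation model, integer shifts and circular boundary handling are assumptions; the paper's 3D version with motion estimation and mixed-resolution display is not reproduced.

    ```python
    import numpy as np

    def downsample(hr, factor):
        """Box-average observation model mapping a HR image to a LR image."""
        h, w = hr.shape
        return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def pocs_super_resolution(lr_frames, shifts, factor=2, n_iter=20):
        """Iteratively project the HR estimate onto each frame's consistency set."""
        hr = np.kron(lr_frames[0], np.ones((factor, factor)))   # initial HR estimate
        for _ in range(n_iter):
            for lr, (dy, dx) in zip(lr_frames, shifts):
                aligned = np.roll(hr, (-dy, -dx), axis=(0, 1))   # align HR to this frame
                residual = lr - downsample(aligned, factor)      # consistency violation
                # Exact projection for the box-average model: spread the residual
                # uniformly over the contributing HR pixels, then shift back.
                correction = np.kron(residual, np.ones((factor, factor)))
                hr = hr + np.roll(correction, (dy, dx), axis=(0, 1))
        return hr

    # Minimal usage with synthetic data: three shifted LR frames of one HR scene.
    rng = np.random.default_rng(1)
    truth = rng.random((64, 64))
    shifts = [(0, 0), (1, 0), (0, 1)]                            # HR-pixel shifts
    frames = [downsample(np.roll(truth, (-dy, -dx), axis=(0, 1)), 2) for dy, dx in shifts]
    estimate = pocs_super_resolution(frames, shifts, factor=2)
    ```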

  6. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three dimensional (3D) cell cultures represents a big step for the better understanding of cell behavior and disease in a more natural like environment, providing not only single but multiple cell type interactions in a complex 3D matrix, highly resembling physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human...

  7. Fully automatic and robust 3D registration of serial-section microscopic images

    Ching-Wei Wang; Eric Budiman Gosno; Yen-Sheng Li

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations on data appearance. This study presents a fully automatic and robu...

  8. Measurement of Capillary Length from 3D Confocal Images Using Image Analysis and Stereology

    Janáček, Jiří; Saxl, Ivan; Mao, X. W.; Kubínová, Lucie

    Valencia : University of Valencia, 2007. s. 71-71. [Focus on Microscopy FOM 2007. 10.04.2007-13.04.2007, Valencia] Institutional research plan: CEZ:AV0Z50110509; CEZ:AV0Z10190503 Keywords : spo2 * 3D image analysis * capillaries * confocal microscopy Subject RIV: EA - Cell Biology

  9. Infrared imaging of the polymer 3D-printing process

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  10. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  11. Multi-layer 3D imaging using a few viewpoint images and depth map

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that generates multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display with multiple display screens stacked in the depth direction. We iterate simple "Shift and Subtraction" processes to generate each layer image alternately. An image generated in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are sufficient to generate the multi-layer images needed to display a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer maintains a correct stereoscopic view within a +/-20-degree area. In addition, we render pseudo multiple viewpoint images using the depth map, so that motion parallax can be generated at the same time.

  12. Realization of real-time interactive 3D image holographic display [Invited].

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed. PMID:26835944

  13. Feasibility of multimodal 3D neuroimaging to guide implantation of intracranial EEG electrodes

    R. Rodionov; Vollmar, C.; Nowell, M.; Miserocchi, A; Wehner, T; Micallef, C; Zombori, G.; Ourselin, S; Diehl, B.; McEvoy, A.W.; Duncan, J S

    2013-01-01

    Summary Background Since intracranial electrode implantation has limited spatial sampling and carries significant risk, placement has to be effective and efficient. Structural and functional imaging of several different modalities contributes to localising the seizure onset zone (SoZ) and eloquent cortex. There is a need to summarise and present this information throughout the pre/intra/post-surgical course. Methods We developed and implemented a multimodal 3D neuroimaging (M3N) pipeline to g...

  14. MR imaging in epilepsy with use of 3D MP-RAGE

    Tanaka, Akio; Ohno, Sigeru; Sei, Tetsuro; Kanazawa, Susumu; Yasui, Koutaro; Kuroda, Masahiro; Hiraki, Yoshio; Oka, Eiji [Okayama Univ. (Japan). School of Medicine

    1996-06-01

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T1-weighted, turbo spin-echo proton density/T2-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and the better contrast of the 3D MP-RAGE images compared with those of the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with the use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change of the hippocampus was higher on T2-weighted images. In addition, demonstration of the cortical tubers of tuberous sclerosis in neurocutaneous syndrome was superior on T2-weighted images compared with 3D MP-RAGE images. (K.H.)

  15. MR imaging in epilepsy with use of 3D MP-RAGE

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T1-weighted, turbo spin-echo proton density/T2-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and the better contrast of the 3D MP-RAGE images than those of the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change in the hippocampus was higher on T2-weighted images. In addition, demonstration of cortical tubers of tuberous sclerosis in neurocutaneous syndrome was better on T2-weighted images than on 3D MP-RAGE images. (K.H.)

  16. Automatic extraction of abnormal signals from diffusion-weighted images using 3D-ACTIT

    Recent developments in medical imaging equipment have made it possible to acquire large amounts of image data and to perform detailed diagnosis. However, it is difficult for physicians to evaluate all of the image data obtained. To address this problem, computer-aided detection (CAD) and expert systems have been investigated. In these investigations, as the types of images used for diagnosis have expanded, the requirements for image processing have become more complex. We therefore propose a new method, which we call Automatic Construction of Tree-structural Image Transformation (3D-ACTIT), to perform various 3D image processing procedures automatically using instance-based learning. We have conducted research on diffusion-weighted image (DWI) data and its processing. In this report, we describe how 3D-ACTIT performs processing to extract only abnormal signal regions from 3D-DWI data. (author)

  17. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  18. 3D Imaging of individual particles : a review

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear disti...

  19. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  20. 3D imaging of individual particles: a review:

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  1. 3D Imaging in Heavy-Ion Reactions

    Brown, David A.; Danielewicz, Pawel; Heffner, Mike; Soltz, Ron

    2004-01-01

    We report an extension of the source imaging method for imaging full three-dimensional sources from three-dimensional like-pair correlations. Our technique consists of expanding the correlation data and the underlying source function in spherical harmonics and inverting the resulting system of one-dimensional integral equations. With this method of attack, we can image the source function quickly, even with the extremely large data sets common in three-dimensional analyses. We apply our metho...

  2. Geometric Aspects in 3D Biomedical Image Processing

    Thévenaz, P; Unser, M.

    1998-01-01

    We present some issues that arise when a geometric transformation is performed on an image or a volume. In particular, we illustrate the well-known problems of blocking, blurring, aliasing and ringing. Although the solution to these problems is trivial in an analog (optical) image processing system, their solution in a discrete (numeric) context is much more difficult. The modern trend of biomedical image processing is to fight these artifacts by using more sophisticated models that emphasize...

  3. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography

    Image-guided alignment procedures in radiotherapy aim at minimizing discrepancies between the planned and the real patient setup. For that purpose, we developed a 2D/3D approach which rigidly registers a computed tomography (CT) with two x-rays by maximizing the agreement in pixel intensity between the x-rays and the corresponding reconstructed radiographs from the CT. Moreover, the algorithm selects regions of interest (masks) in the x-rays based on 3D segmentations from the pre-planning stage. For validation, orthogonal x-ray pairs from different viewing directions of 80 pelvic cone-beam CT (CBCT) raw data sets were used. The 2D/3D results were compared to corresponding standard 3D/3D CBCT-to-CT alignments. Outcome over 8400 2D/3D experiments showed that parametric errors in root mean square were <0.18° (rotations) and <0.73 mm (translations), respectively, using rank correlation as intensity metric. This corresponds to a mean target registration error, related to the voxels of the lesser pelvis, of <2 mm in 94.1% of the cases. From the results we conclude that 2D/3D registration based on sequentially acquired orthogonal x-rays of the pelvis is a viable alternative to CBCT-based approaches if rigid alignment on bony anatomy is sufficient, no volumetric intra-interventional data set is required and the expected error range fits the individual treatment prescription. (paper)

  4. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated
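
    As a rough illustration of the 2D iterative stage described above, here is a minimal ordered-subsets EM (OSEM) update for a single rebinned sinogram slice. It omits Fourier rebinning, the factorized detector-blurring model and all physics corrections, and the dense system matrix `A` is a placeholder for whatever projector the scanner software actually uses.

    ```python
    import numpy as np

    def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
        """Ordered-subsets EM for the Poisson model y ~ Poisson(A @ x).

        A : (n_bins, n_voxels) system matrix for one rebinned 2D sinogram slice
        y : (n_bins,) measured counts
        """
        x = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for s in subsets:
                As = A[s]                        # projection rows of this subset
                ratio = y[s] / (As @ x + eps)    # measured / predicted counts
                x *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + eps)
        return x
    ```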

  5. Efficient RPG detection in noisy 3D image data

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  6. 3D Image Sensor based on Parallax Motion

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which will be discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  7. Spectroscopy and 3D imaging of the Crab nebula

    Cadez, A; Vidrih, S

    2004-01-01

    Spectroscopy of the Crab nebula along different slit directions reveals the 3-dimensional structure of the optical nebula. On the basis of the linear radial expansion result first discovered by Trimble (1968), we make a 3D model of the optical emission. Results from a limited number of slit directions suggest that optical lines originate from a complicated array of wisps that are located in a rather thin shell, pierced by a jet. The jet is certainly not prominent in optical emission lines, but the direction of the piercing is consistent with the direction of the X-ray and radio jet. The shell's effective radius is ~79 seconds of arc, its thickness is about a third of the radius, and it is moving out with an average velocity of 1160 km/s.

  8. 3D Wavelet-based Fusion Techniques for Biomedical Imaging

    Rubio Guivernau, José Luis

    2012-01-01

    Nowadays, techniques for acquiring three-dimensional images are common in several areas, but their relevance in the field of biomedical imaging is particularly noteworthy; within it we find a wide range of techniques such as confocal microscopy, two-photon microscopy, light-sheet fluorescence microscopy, nuclear magnetic resonance, positron emission tomography, optical coherence tomography, 3D ultrasound and many more. A denom...

  9. Improvement of integral 3D image quality by compensating for lens position errors

    Okui, Makoto; Arai, Jun; Kobayashi, Masaki; Okano, Fumio

    2004-05-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.

  10. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of multi-spectral image and two bands of hyperspectral image to produce fused image with the same spatial resolution as source multi-spectral image and the same spectral resolution as source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of multi-spectral and hyperspectral image data volume, the new fusion approach using 3-D wavelet based method is proposed. This approach is composed of four major procedures: Spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration and 3-D inverse wavelet transform. Especially, a novel method, Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in spectral domain by utilizing the property of ratio image. And a new fusion rule, Average and Substitution (A&S) rule, is employed as the fusion rule to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using 3-D wavelet transform can utilize both spatial and spectral characteristics of source images more adequately and produce fused image with higher quality and fewer artifacts than fusion approach using 2-D wavelet transform. It is also revealed that RIBSR method is capable of interpolating the missing data more effectively and correctly, and A&S rule can integrate coefficients of source images in 3-D wavelet domain to preserve both spatial and spectral features of source images more properly.
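
    A minimal sketch of the overall idea, using PyWavelets for the 3-D transform; it does not reproduce the paper's RIBSR spectral resampling or its exact Average-and-Substitution rule. The two cubes are assumed to be already co-registered and resampled to the same grid, approximation coefficients are averaged, and each detail coefficient is taken from whichever source has the larger magnitude (a simplified rule of our own, not the published one).

    ```python
    import numpy as np
    import pywt

    def fuse_3d(cube_a, cube_b, wavelet="db2", level=2):
        """Fuse two co-registered (rows, cols, bands) cubes in the 3-D wavelet domain."""
        ca = pywt.wavedecn(cube_a, wavelet, level=level)
        cb = pywt.wavedecn(cube_b, wavelet, level=level)

        fused = [(ca[0] + cb[0]) / 2.0]                    # approximation: average
        for da, db in zip(ca[1:], cb[1:]):                 # details: keep larger-magnitude coefficient
            fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                          for k in da})
        return pywt.waverecn(fused, wavelet)
    ```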

  11. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    statistically significant. Conclusions: The proposed algorithm eliminates the need for any population based model parameters in monoscopic image guided radiotherapy and allows accurate and real-time 3D tumor localization on current standard LINACs with a single x-ray imager.

  12. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two- dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so as to...... the same received set of two dimensional echoes form part of the imaging system...

  13. 3D image reconstruction of fiber systems using electron tomography

    Over the past several years, electron microscopists and materials researchers have shown increased interest in electron tomography (reconstruction of three-dimensional information from a tilt series of bright field images obtained in a transmission electron microscope (TEM)). In this research, electron tomography has been used to reconstruct a three-dimensional image of fiber structures from secondary electron images in a scanning electron microscope (SEM). The implementation of this technique is used to examine the structure of a fiber system before and after deformation. A test sample of steel wool was tilted around a single axis from −10° to 60° in one-degree steps with images taken at every degree; three-dimensional images were reconstructed for the specimen of fine steel fibers. This method is capable of reconstructing the three-dimensional morphology of this type of linear structure, and of obtaining features such as tortuosity, contact points, and linear density that are of importance in defining the mechanical properties of these materials. - Highlights: • The electron tomography technique has been adapted to the SEM for analysis of linear structures. • Images are obtained by secondary electron imaging through a given depth of field, making them analogous to projected images. • Quantitative descriptions of the microstructure can be obtained including tortuosity and contact points per volume

  14. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    T. T. Truong

    2007-01-01

    Full Text Available The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new, is closely related to the Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field.

  15. In vivo 3D neuroanatomical evaluation of periprostatic nerve plexus with 3T-MR Diffusion Tensor Imaging

    Panebianco, Valeria, E-mail: valeria.panebianco@gmail.com [Department of Radiological Sciences, Oncology and Anatomical Pathology, Sapienza University of Rome, Viale Regina Elena, 324, 00161 Rome (Italy); Barchetti, Flavio [Department of Radiological Sciences, Oncology and Anatomical Pathology, Sapienza University of Rome, Viale Regina Elena, 324, 00161 Rome (Italy); Sciarra, Alessandro [Department of Urology, Sapienza University of Rome (Italy); Marcantonio, Andrea; Zini, Chiara [Department of Radiological Sciences, Oncology and Anatomical Pathology, Sapienza University of Rome, Viale Regina Elena, 324, 00161 Rome (Italy); Salciccia, Stefano [Department of Urology, Sapienza University of Rome (Italy); Collettini, Federico [Department of Radiology, Charité, Berlin (Germany); Gentile, Vincenzo [Department of Urology, Sapienza University of Rome (Italy); Hamm, Bernard [Department of Radiology, Charité, Berlin (Germany); Catalano, Carlo [Department of Radiological Sciences, Oncology and Anatomical Pathology, Sapienza University of Rome, Viale Regina Elena, 324, 00161 Rome (Italy)

    2013-10-01

    Objectives: To evaluate if Diffusion Tensor Imaging technique (DTI) can improve the visualization of periprostatic nerve fibers describing the location and distribution of entire neurovascular plexus around the prostate in patients who are candidates for prostatectomy. Materials and methods: Magnetic Resonance Imaging (MRI), including a 2D T2-weighted FSE sequence in 3 planes, 3D T2-weighted and DTI using 16 gradient directions and b = 0 and 1000, was performed on 36 patients. Three out of 36 patients were excluded from the analysis due to poor image quality (blurring N = 2, artifact N = 1). The study was approved by local ethics committee and all patients gave an informed consent. Images were evaluated by two radiologists with different experience in MRI. DTI images were analyzed qualitatively using dedicated software. Also 2D and 3D T2 images were independently considered. Results: 3D-DTI allowed description of the entire plexus of the periprostatic nerve fibers in all directions, while 2D and 3D T2 morphological sequences depicted part of the fibers, in a plane by plane analysis of fiber courses. DTI demonstrated in all patients the dispersion of nerve fibers around the prostate on both sides including the significant percentage present in the anterior and anterolateral sectors. Conclusions: DTI offers optimal representation of the widely distributed periprostatic plexus. If validated, it may help guide nerve-sparing radical prostatectomy.

  16. In vivo 3D neuroanatomical evaluation of periprostatic nerve plexus with 3T-MR Diffusion Tensor Imaging

    Objectives: To evaluate if Diffusion Tensor Imaging technique (DTI) can improve the visualization of periprostatic nerve fibers describing the location and distribution of entire neurovascular plexus around the prostate in patients who are candidates for prostatectomy. Materials and methods: Magnetic Resonance Imaging (MRI), including a 2D T2-weighted FSE sequence in 3 planes, 3D T2-weighted and DTI using 16 gradient directions and b = 0 and 1000, was performed on 36 patients. Three out of 36 patients were excluded from the analysis due to poor image quality (blurring N = 2, artifact N = 1). The study was approved by local ethics committee and all patients gave an informed consent. Images were evaluated by two radiologists with different experience in MRI. DTI images were analyzed qualitatively using dedicated software. Also 2D and 3D T2 images were independently considered. Results: 3D-DTI allowed description of the entire plexus of the periprostatic nerve fibers in all directions, while 2D and 3D T2 morphological sequences depicted part of the fibers, in a plane by plane analysis of fiber courses. DTI demonstrated in all patients the dispersion of nerve fibers around the prostate on both sides including the significant percentage present in the anterior and anterolateral sectors. Conclusions: DTI offers optimal representation of the widely distributed periprostatic plexus. If validated, it may help guide nerve-sparing radical prostatectomy

  17. Analytic 3D image reconstruction using all detected events

    We present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint allows the incorporation of all detected events, regardless of orientation, which improves the statistical quality of the final reconstructed image

  18. Towards the 3D-Imaging of Sources

    Danielewicz, P; Heffner, M; Pratt, S; Soltz, R A

    2004-01-01

    Geometric details of a nuclear reaction zone, at the time of particle emission, can be restored from low relative-velocity particle-correlations, following imaging. Some of the source details get erased and are a potential cause of problems in the imaging, in the form of instabilities. These can be coped with by following the method of discretized optimization for the restored sources. So far it has been possible to produce 1-dimensional emission source images, corresponding to the reactions averaged over all possible spatial directions. Currently, efforts are in progress to restore angular details.

  19. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  20. Real-time auto-stereoscopic visualization of 3D medical images

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work here described regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated contemporary views of the same object can simultaneously be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  1. Building Extraction from DSM Acquired by Airborne 3D Image

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regulation are studied in depth to extract buildings from DSM data produced in this paper. Building segmentation is the first step in extracting buildings, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract the contours of buildings effectively. A sub-image (such as 50×50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, then the threshold of the sub-image is selected using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level and the ratio mean square of the sub-image. The edge points of the building are grouped according to the azimuth of neighboring points, and then the optimal azimuth of the points that belong to the same group can be calculated using line interpolation.
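
    The ratio-mean-square weighting is specific to the paper, but the "iterative threshold segmentation" applied per 50×50 sub-image can be sketched with the classic isodata iteration; the tiling scheme and all names below are hypothetical.

    ```python
    import numpy as np

    def isodata_threshold(tile, tol=0.5):
        """Classic iterative (isodata) threshold: T converges to the mean of the two class means."""
        t = tile.mean()
        while True:
            low, high = tile[tile <= t], tile[tile > t]
            if low.size == 0 or high.size == 0:
                return t
            t_new = 0.5 * (low.mean() + high.mean())
            if abs(t_new - t) < tol:
                return t_new
            t = t_new

    def segment_by_tiles(dsm, tile=50):
        """Threshold a DSM tile by tile (e.g. 50x50 pixels), mimicking sub-image processing."""
        mask = np.zeros(dsm.shape, dtype=bool)
        for i in range(0, dsm.shape[0], tile):
            for j in range(0, dsm.shape[1], tile):
                block = dsm[i:i + tile, j:j + tile]
                mask[i:i + tile, j:j + tile] = block > isodata_threshold(block)
        return mask
    ```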

  2. Online reconstruction of 3D magnetic particle imaging data.

    Knopp, T; Hofmann, M

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
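
    The record describes the adaptive block-averaging only qualitatively, so the loop below is a schematic sketch: raw frames are averaged in blocks whose size grows when a single reconstruction exceeds the latency budget and shrinks when there is headroom. The Tikhonov solve is only a stand-in for the actual MPI system-matrix reconstruction, and every name is hypothetical.

    ```python
    import time
    import numpy as np

    def reconstruct(S, u, lam=1e-2):
        """Placeholder: Tikhonov-regularized least squares, c = argmin ||S c - u||^2 + lam ||c||^2."""
        n = S.shape[1]
        return np.linalg.solve(S.conj().T @ S + lam * np.eye(n), S.conj().T @ u)

    def online_loop(frame_source, S, target_latency=2.0):
        """Average incoming raw-data frames in adaptively sized blocks, then reconstruct each block."""
        block, block_size = [], 1
        for frame in frame_source:                 # generator yielding raw-data vectors
            block.append(frame)
            if len(block) < block_size:
                continue
            t0 = time.time()
            image = reconstruct(S, np.mean(block, axis=0))
            elapsed = time.time() - t0
            if elapsed > target_latency:
                block_size += 1                    # average more frames per reconstruction
            elif block_size > 1:
                block_size -= 1                    # headroom: reconstruct more often
            block = []
            yield image
    ```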

  3. Online reconstruction of 3D magnetic particle imaging data

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s‑1. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  4. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice however remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight to the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, however lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented demonstrating the effectiveness of the presented techniques.

  5. Computational 3D and reflectivity imaging with high photon efficiency

    Shin, Dongeek; Kirmani, Ahmed; Shapiro, Jeffrey H.; Goyal, Vivek K

    2014-01-01

    Capturing depth and reflectivity images at low light levels from active illumination of a scene has wide-ranging applications. Conventionally, even with single-photon detectors, hundreds of photon detections are needed at each pixel to mitigate Poisson noise. We introduce a robust method for estimating depth and reflectivity using on the order of 1 detected photon per pixel averaged over the scene. Our computational imager combines physically accurate single-photon counting statistics with ex...

  6. Critical Comparison of 3-d Imaging Approaches for NGST

    Bennett, Charles L.

    1999-01-01

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; b...

  7. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: a practical and technical review and guide

    Korreman, Stine; Rasch, Coen; McNair, Helen;

    2010-01-01

    The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statements with evidence-based and pragmatic guidelines on topics... centres demonstrates a wide variability based on local practices. This report, whilst comprehensive, is not exhaustive, as this area of development remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the...

  8. Improved 3D cellular imaging by multispectral focus assessment

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, there is only one plane in the field of view that is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well-established that the microscope's point spread function (PSF) can be used for blur quantitation, for the restoration of real images. However, this is an ill-posed problem, with no unique solution and with high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the sharpness degree of each pixel, in order to identify blurred areas not to be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
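
    A simplified version of the "gradient map" idea: a per-pixel gradient magnitude serves as a sharpness score across the multispectral stack, and pixels whose best score stays below a fraction of the global maximum are masked out as blurred. The Sobel operator and the relative threshold are assumptions, not details from the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def sharpness_map(img):
        """Per-pixel gradient magnitude as a simple sharpness score."""
        img = img.astype(float)
        return np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))

    def in_focus_mask(bands, rel_thresh=0.3):
        """bands: (n_bands, H, W) multispectral stack of the same field of view.
        Keep a pixel if its best sharpness over the bands exceeds a fraction of the global maximum."""
        score = np.max([sharpness_map(b) for b in bands], axis=0)
        return score > rel_thresh * score.max()
    ```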

  9. 3D Surface Imaging of the Human Female Torso in Upright to Supine Positions

    Reece, Gregory P.; Merchant, Fatima; Andon, Johnny; Khatam, Hamed; Ravi-Chandar, K.; Weston, June; Fingeret, Michelle C.; Lane, Chris; Duncan, Kelly; Markey, Mia K.

    2015-01-01

    Three-dimensional (3D) surface imaging of breasts is usually done with the patient in an upright position, which does not permit comparison of changes in breast morphology with changes in position of the torso. In theory, these limitations may be eliminated if the 3D camera system could remain fixed relative to the woman’s torso as she is tilted from 0 to 90 degrees. We mounted a 3dMDtorso imaging system onto a bariatric tilt table to image breasts at different tilt angles. The images were va...

  10. First images and orientation of internal waves from a 3-D seismic oceanography data set

    T. M. Blacic

    2009-10-01

    Full Text Available We present 3-D images of ocean finestructure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflectors throughout the upper ~800 m as well as a few weaker but still distinct reflectors as deep as ~1100 m. Two bright reflections are traced across the 225-m-wide swath to produce reflector surface images that show the 3-D structure of internal waves. We show that the orientation of internal wave crests can be obtained by calculating the orientations of contours of reflector relief. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic finestructure in 3-D and shows that, beyond simply providing a way to see what oceanic finestructure looks like, quantitative information such as the spatial orientation of features like internal waves and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal to noise ratio and spatial resolution of our images resulting in increased options for analysis and interpretation.

  11. First images and orientation of fine structure from a 3-D seismic oceanography data set

    T. M. Blacic

    2010-04-01

    Full Text Available We present 3-D images of ocean fine structure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflections throughout the upper ~800 m as well as a few weaker but still distinct reflections as deep as ~1100 m. We interpret the reflections to be caused by reversible fine structure from internal wave strains. Two bright reflections are traced across the 225-m-wide swath to produce reflection surface images that illustrate the 3-D nature of ocean fine structure. We show that the orientation of linear features in a reflection can be obtained by calculating the orientations of contours of reflection relief, or more robustly, by fitting a sinusoidal surface to the reflection. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic fine structure in 3-D and shows that, beyond simply providing a way to visualize oceanic fine structure, quantitative information such as the spatial orientation of features like fronts and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal to noise ratio and spatial resolution of our images resulting in increased options for analysis and interpretation.
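
    The contour-orientation / sinusoid-fitting step can be approximated generically with a structure tensor on the gridded reflection-relief surface: the crest direction of a linear feature such as an internal wave is perpendicular to the dominant gradient direction. This is a stand-in for the authors' procedure, with hypothetical names.

    ```python
    import numpy as np

    def crest_orientation(relief):
        """Dominant orientation (degrees, 0 = x-axis) of linear features in a relief surface z(x, y).

        Uses the mean structure tensor of the surface gradient; the crest direction is
        perpendicular to the dominant gradient direction.
        """
        gy, gx = np.gradient(relief.astype(float))
        jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
        theta_grad = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant gradient direction
        return np.degrees(theta_grad + np.pi / 2.0) % 180.0   # crest is perpendicular to it
    ```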

  12. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuff have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided lateral resolution of 12 μm and 3.0 μm axial resolution in air and 0.27 volume/s imaging speed, which could provide the surgeon with clearly visualized vessel lumen wall and suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize the blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameter less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in decision-making process intra-operatively and avoid post-operative complications.
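
    The phase-resolved Doppler step mentioned above follows the standard relation v_z = λ0·Δφ/(4π·n·T), where Δφ is the phase difference between successive complex A-scans and T is the A-scan period. The wavelength, refractive index and line rate used below are illustrative assumptions, not the parameters of the system in the record.

    ```python
    import numpy as np

    def doppler_velocity(ascan_prev, ascan_next, wavelength=1.31e-6,
                         n_refr=1.38, line_period=1.0 / 70e3):
        """Axial flow velocity (m/s) per depth pixel from two successive complex A-scans.

        v_z = lambda0 * dphi / (4 * pi * n * T), with dphi wrapped to (-pi, pi].
        """
        dphi = np.angle(ascan_next * np.conj(ascan_prev))
        return wavelength * dphi / (4.0 * np.pi * n_refr * line_period)
    ```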

  13. Radar Imaging of Spheres in 3D using MUSIC

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
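
    A compact numpy sketch of the imaging recipe described above: take the SVD of the multistatic response matrix, keep the singular vectors below a ~1% threshold as the noise subspace, and evaluate the MUSIC functional on a grid of trial points using free-space steering vectors. A scalar, single-frequency model is assumed, and the ground-reflection and beam-pattern normalization discussed in the record are omitted.

    ```python
    import numpy as np

    def music_image(K, array_xyz, grid_xyz, wavenumber, noise_frac=0.01):
        """MUSIC pseudospectrum for a multistatic response matrix K (n_elem x n_elem).

        array_xyz : (n_elem, 3) element positions
        grid_xyz  : (n_pts, 3) trial image-point positions
        """
        U, s, _ = np.linalg.svd(K)
        noise = U[:, s < noise_frac * s[0]]          # noise-subspace basis vectors

        image = np.empty(len(grid_xyz))
        for i, r in enumerate(grid_xyz):
            d = np.linalg.norm(array_xyz - r, axis=1)
            g = np.exp(1j * wavenumber * d) / d      # free-space steering vector
            g /= np.linalg.norm(g)
            image[i] = 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-15)
        return image
    ```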

  14. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Armando Viviano Razionale

    2013-02-01

    Full Text Available In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  15. Multithreaded real-time 3D image processing software architecture and implementation

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through the use of block matching. The difference in positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends it to the processing thread.
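
    A CPU-only sketch (not the CUDA implementation) of the convergence logic described above: vertical-edge keypoints in the left image, SAD block matching against the right image, and a robust disparity range from the histogram whose midpoint gives the shift that places the scene at convergence. The thresholds, keypoint subsampling and percentile choices are all assumptions.

    ```python
    import numpy as np

    def convergence_shift(left, right, block=9, max_disp=64, edge_thresh=30.0):
        """Estimate a convergence shift (in pixels) for a rectified grayscale stereo pair."""
        h, w = left.shape
        r = block // 2
        gx = np.abs(np.diff(left, axis=1, prepend=left[:, :1]))       # vertical-edge strength
        ys, xs = np.where(gx > edge_thresh)
        keep = (ys > r) & (ys < h - r) & (xs > r + max_disp) & (xs < w - r)

        disparities = []
        for y, x in zip(ys[keep][::50], xs[keep][::50]):               # subsample keypoints
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]                         # SAD block matching
            disparities.append(int(np.argmin(costs)))
        if not disparities:
            return 0
        lo, hi = np.percentile(disparities, [5, 95])                   # robust histogram extrema
        return int(round((lo + hi) / 2.0))                             # shift to apply to the pair
    ```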

  16. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  17. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is accessible to public users and convenient for reaching narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To compute efficient 3D models, post-processing techniques are required for the final results, e.g. noise reduction, surface simplification and reconstruction. The reconstructed 3D models can be provided for public access such as websites, DVDs and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over a lifetime, natural disasters, etc.

  18. 3-D capacitance density imaging of fluidized bed

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  19. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects in complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  20. 3D spectral imaging system for anterior chamber metrology

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components including lenslet arrays and a 2D sensor to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans in excess of 75 frames per second.

  1. Detection of tibial condylar fractures using 3D imaging with a mobile image amplifier (Siemens ISO-C-3D): Comparison with plain films and spiral CT

    Purpose: To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Method: Human knee specimens (n=22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190° orbital rotation, computing a 119 mm data cube. From these 3D data sets, MPR images were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Mueller AO classification. To confirm the results, the specimens were finally surgically dissected. Results: 97% of the tibial condylar fractures were easily seen and correctly classified according to the Mueller AO classification on MPR images from the ISO-C-3D. There is no significant difference between the ISO-C-3D and CT in the detection and correct classification of fractures, but the ISO-C-3D is significantly better than CR. (orig.)

  2. Space Radar Image of Kilauea, Hawaii in 3-D

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward the northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  3. 3D microscopic imaging and evaluation of tubular tissue architecture

    Janáček, Jiří; Čapek, Martin; Michálek, Jan; Karen, Petr; Kubínová, Lucie

    2014-01-01

    Vol. 63, Suppl. 1 (2014), S49-S55. ISSN 0862-8408. R&D Projects: GA MŠk(CZ) LH13028; GA ČR(CZ) GA13-12412S. Institutional support: RVO:67985823. Keywords: confocal microscopy * capillaries * brain * skeletal muscle * image analysis. Subject RIV: EA - Cell Biology. Impact factor: 1.293, year: 2014

  4. Task-specific evaluation of 3D image interpolation techniques

    Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

    1998-06-01

    Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion detection experiments, coming from ten patient studies, two subsampling techniques and the original data, and 3 interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of three interpolation methods on MS lesion quantification is compared.

  5. Segmentation of Carotid Arteries from 3D and 4D Ultrasound Images

    Mattsson, Per; Eriksson, Andreas

    2002-01-01

    This thesis presents a 3D semi-automatic segmentation technique for extracting the lumen surface of the Carotid arteries including the bifurcation from 3D and 4D ultrasound examinations. Ultrasound images are inherently noisy. Therefore, to aid the inspection of the acquired data an adaptive edge preserving filtering technique is used to reduce the general high noise level. The segmentation process starts with edge detection with a recursive and separable 3D Monga-Deriche-Canny operator. To r...

  6. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentatio...

  7. Understanding immersivity: Image generation and transformation processes in 3D immersive environments

    Maria eKozhevnikov; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard & Metzler (1971) mental rotation task across the following three types of visual presentation enviro...

  8. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    S. P. Singh; K. Jain; V. R. Mandla

    2014-01-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use Sketch-based modeling; the second method is Procedural grammar based m...

  9. A 3-D fluorescence imaging system incorporating structured illumination technology

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution, optical molecular imaging system was modified by the addition of a structured illumination source, OptigridTM, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the OptigridTM and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the OptigridTM onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the OptigridTM at different spatial frequencies, and supports the use of different lenses. A calibration process was developed for the system to achieve consistent phase shifts of the OptigridTM. Post-processing extracted depth information by means of depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
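
One common way to turn phase-shifted grid projections into depth-resolved (optically sectioned) images is the three-phase demodulation shown below. This is a generic sketch of the principle, not necessarily the depth modulation analysis implemented by the student team, and the synthetic frames stand in for real acquisitions.

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Optically sectioned image from three grid-illuminated frames.

    i1, i2, i3 are acquired with the grid pattern phase-shifted by
    0, 2*pi/3 and 4*pi/3. Only structure near the plane where the grid
    is in focus retains modulation, so the amplitude computed below
    rejects out-of-focus (depth-displaced) fluorescence.
    """
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)

# Usage with synthetic frames (stand-ins for the phase-shifted acquisitions):
rng = np.random.default_rng(1)
frames = [rng.random((128, 128)) for _ in range(3)]
print(sectioned_image(*frames).shape)  # (128, 128)
```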

  10. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    Lee, Sangyun; Kim, Kyoohyun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents the morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified inclu...

  11. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: in the first approach, researchers use Sketch-based modeling, the second method is Procedural grammar based modeling, and the third approach is Close range photogrammetry based modeling. Literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First is the data acquisition process, second is 3D data processing, and third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious. The accuracy of this model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  12. Space Radar Image of Long Valley, California - 3D view

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process in which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory

  13. Space Radar Image of Long Valley, California in 3-D

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13,1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  14. Space Radar Image of Missoula, Montana in 3-D

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA

  15. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images which are taken by the same camera after successively rotating the object by a small angle. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene-marked cells. In order to visualize the structure of the tumor in 3D, we also co-register the BLI-reconstructed crude structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  16. Analysis of 3D confocal images of capillaries

    Janáček, Jiří; Saxl, Ivan; Mao, X. W.; Eržen, I.; Kubínová, Lucie

    Saint-Etienne : International society for stereology, 2007, s. 12-15. [International congress for stereology /12./. Saint-Etienne (FR), 03.09.2007-07.09.2007] R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509; CEZ:AV0Z10190503 Keywords : capillaries * confocal microscopy * image analysis Subject RIV: EA - Cell Biology

  17. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    Grzegorczyk, Tomasz M.; Meaney, Paul M.; Kaufman, Peter A.; DiFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2012-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to ...

  18. Study of bone implants based on 3D images

    Grau, S; Ayala Vallespí, M. Dolors; Tost Pardell, Daniela; Miño, N.; Muñoz, F.; González, A

    2005-01-01

    New medical input technologies together with computer graphics modelling and visualization software have opened a new track for biomedical sciences: the so-called in-silico experimentation, in which analysis and measurements are done on computer graphics models constructed on the basis of medical images, complementing the traditional in-vivo and in-vitro experimental methods. In this paper, we describe an in-silico experiment to evaluate bio-implants f...

  19. Evaluation of a new method for stenosis quantification from 3D x-ray angiography images

    Betting, Fabienne; Moris, Gilles; Knoplioch, Jerome; Trousset, Yves L.; Sureda, Francisco; Launay, Laurent

    2001-05-01

    A new method for stenosis quantification from 3D X-ray angiography images has been evaluated on both phantom and clinical data. On phantoms, for the parts larger than or equal to 3 mm, the standard deviation of the measurement error has always been found to be less than or equal to 0.4 mm, and the maximum measurement error less than 0.17 mm. No clear relationship has been observed between the performance of the quantification method and the acquisition FoV. On clinical data, the 3D quantification method proved to be more robust to vessel bifurcations than its 2D equivalent. On a total of 15 clinical cases, the differences between 2D and 3D quantification were always less than 0.7 mm. The conclusion is that stenosis quantification from 3D X-ray angiography images is an attractive alternative to quantification from 2D X-ray images.

  20. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
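
A minimal sketch of LAA extraction by thresholding and 3D connected-component filtering is shown below. The -950 HU threshold, the minimum component size and the pre-computed lung mask are illustrative assumptions rather than the parameters used by the authors.

```python
import numpy as np
from scipy import ndimage

def extract_laa(ct_hu, lung_mask, threshold_hu=-950, min_voxels=8):
    """Candidate emphysematous lesions as connected low-attenuation areas (LAA).

    ct_hu: 3D CT volume in Hounsfield units; lung_mask: boolean lung segmentation.
    The -950 HU threshold and the minimum component size are illustrative choices.
    """
    laa = (ct_hu < threshold_hu) & lung_mask
    labels, n = ndimage.label(laa)                      # 3D connected components
    sizes = ndimage.sum(laa, labels, range(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_voxels)[0] + 1)
    laa_percent = 100.0 * keep.sum() / max(lung_mask.sum(), 1)
    return keep, laa_percent

# Usage with synthetic data:
vol = np.full((30, 64, 64), -800.0)
vol[10:15, 20:30, 20:30] = -980.0                       # a low-attenuation pocket
mask = np.ones_like(vol, dtype=bool)
_, pct = extract_laa(vol, mask)
print(f"LAA%: {pct:.2f}")
```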

  1. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  2. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
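
The core of DIBR can be sketched as a per-pixel horizontal shift proportional to baseline × focal length / depth, as below. This assumes a parallel (rectified) camera setup and omits the occlusion handling and hole filling that practical systems, including those surveyed in the book, require; the baseline and focal-length values are illustrative.

```python
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=1000.0):
    """Synthesize a virtual view by 3D warping (simplified DIBR).

    color: (H, W, 3) image; depth: (H, W) depth in metres.
    Each pixel is shifted horizontally by disparity = baseline * focal / depth
    (parallel camera setup). Occlusion handling and hole filling are omitted.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    disparity = np.round(baseline * focal / np.maximum(depth, 1e-3)).astype(int)
    cols = np.arange(w)
    for row in range(h):
        new_cols = np.clip(cols - disparity[row], 0, w - 1)
        out[row, new_cols] = color[row, cols]   # later writes win; z-test omitted
    return out

# Usage with synthetic data:
rng = np.random.default_rng(2)
img = rng.random((120, 160, 3))
dep = np.full((120, 160), 2.0)
print(dibr_warp(img, dep).shape)  # (120, 160, 3)
```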

  3. A framework for human spine imaging using a freehand 3D ultrasound system

    Purnama, Ketut E.; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; Ooijen, van Peter M.A.; Lubbers, Jaap; Burgerhof, Johannes G.M.; Sardjono, Tri A.; Verkerke, Gijsbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  4. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye; Wenping Yu

    2012-01-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image which will be imbedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then are merged in the frequency d...

  5. A New Approach for 3D Range Image Segmentation using Gradient Method

    Dina A. Hafiz

    2011-01-01

    Full Text Available Problem statement: Segmentation of 3D range images is widely used in computer vision as an essential pre-processing step before methods of high-level vision can be applied. Segmentation aims to study and recognize features of the range image such as 3D edges, connected surfaces and smooth regions. Approach: This study presents new improvements in the segmentation of terrestrial 3D range images based on an edge detection technique. The main idea is to apply a gradient edge detector in three different directions of the 3D range images. This 3D gradient detector is a generalization of the classical Sobel operator used with 2D images, which is based on the differences of normal vectors or geometric locations in the coordinate directions. The proposed algorithm uses a 3D-grid structure method to handle large amounts of unordered sets of points and determine neighborhood points. It segments the 3D range images directly using gradient edge detectors without any further computations such as mesh generation. Our algorithm focuses on extracting important linear structures such as doors, stairs and windows from terrestrial 3D range images; these structures are common indoors and outdoors in many environments. Results: Experimental results showed that the proposed algorithm provides a new approach to 3D range image segmentation with the characteristics of low computational complexity and low sensitivity to noise. The algorithm is validated using seven artificially generated datasets and two real-world datasets. Conclusion/Recommendations: Experimental results showed that different segmentation accuracies are achieved by using higher grid resolution and an adaptive threshold.
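
A generic 3D analogue of the Sobel edge detector, applied axis-wise to a voxelized grid and thresholded on gradient magnitude, might look like the sketch below. The occupancy-grid input, the use of scipy's filters and the mean-plus-two-sigma threshold are illustrative choices, not the authors' exact operator.

```python
import numpy as np
from scipy import ndimage

def edge_voxels(volume, threshold=None):
    """3D gradient-magnitude edge detection (Sobel generalized to 3D).

    volume: 3D array, e.g. an occupancy or range grid built from the point cloud.
    Returns a boolean mask of voxels with strong gradient in any of the three
    coordinate directions; the default threshold choice is illustrative.
    """
    gx = ndimage.sobel(volume, axis=0, mode="nearest")
    gy = ndimage.sobel(volume, axis=1, mode="nearest")
    gz = ndimage.sobel(volume, axis=2, mode="nearest")
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    if threshold is None:
        threshold = magnitude.mean() + 2 * magnitude.std()
    return magnitude > threshold

# Usage: a synthetic grid with a planar "wall" produces edges at its border.
grid = np.zeros((32, 32, 32))
grid[:, :, 16:] = 1.0
print(edge_voxels(grid).sum())
```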

  6. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. (paper)
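
Mutual information between a co-registered EIT reconstruction and a lung-segmented MR image can be estimated from a joint histogram, as in the sketch below. The bin count and the synthetic images are assumptions for illustration; the study's own computation details may differ.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two co-registered images via a joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Usage with synthetic, partially correlated images:
rng = np.random.default_rng(3)
a = rng.random((64, 64))
b = 0.7 * a + 0.3 * rng.random((64, 64))
print(f"MI = {mutual_information(a, b):.3f} nats")
```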

  7. HERMES Results on the 3D Imaging of the Nucleon

    Pappalardo, L. L.

    2016-07-01

    In the last decades, a formalism of transverse momentum dependent parton distribution functions (TMDs) and of generalised parton distributions (GPDs) has been developed in the context of non-perturbative QCD, opening the way for a tomographic imaging of the nucleon structure. TMDs and GPDs provide complementary three-dimensional descriptions of the nucleon structure in terms of parton densities. They thus contribute, with different approaches, to the understanding of the full phase-space distribution of partons. A selection of HERMES results sensitive to TMDs is presented.

  8. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of directionally solidified ternary eutectics is visualized in three dimensions, using a high-resolution X-ray tomography technique at the ESRF. The microstructure characterization is conducted at a photon energy that allows the three phases Ag2Al, Al2Cu, and α-Aluminum solid solution to be clearly discriminated. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  9. A guide for human factors research with stereoscopic 3D displays

    McIntire, John P.; Havig, Paul R.; Pinkus, Alan R.

    2015-05-01

    In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
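
For reference, the kind of disparity bookkeeping the guide discusses can be sketched with the standard small-angle relations below. The 63 mm inter-pupillary distance and the example viewing distances are illustrative values, not recommendations from the paper.

```python
import numpy as np

IPD_M = 0.063  # typical adult inter-pupillary distance in metres (illustrative)

def angular_disparity_rad(d_fixation, d_object, ipd=IPD_M):
    """Relative binocular disparity in radians (small-angle approximation)."""
    return ipd * (1.0 / d_object - 1.0 / d_fixation)

def screen_separation_m(d_screen, d_object, ipd=IPD_M):
    """On-screen horizontal image separation needed for an object to appear at
    d_object when the display sits at d_screen (positive = uncrossed disparity)."""
    return ipd * (d_object - d_screen) / d_object

# Example: display at 0.7 m, virtual object intended to appear at 1.0 m.
print(f"{np.degrees(angular_disparity_rad(0.7, 1.0)) * 60:.1f} arcmin disparity")
print(f"{screen_separation_m(0.7, 1.0) * 1000:.1f} mm on-screen separation")
```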

  10. MR Imaging of the Internal Auditory Canal and Inner Ear at 3T: Comparison between 3D Driven Equilibrium and 3D Balanced Fast Field Echo Sequences

    Byun, Jun Soo; Kim, Hyung Jin; Yim, Yoo Jeong; Kim, Sung Tae; Jeon, Pyoung; Kim, Keon Ha [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kim, Sam Soo; Jeon, Yong Hwan; Lee, Ji Won [Kangwon National University College of Medicine, Chuncheon (Korea, Republic of)

    2008-06-15

    To compare the use of 3D driven equilibrium (DRIVE) imaging with 3D balanced fast field echo (bFFE) imaging in the assessment of the anatomic structures of the internal auditory canal (IAC) and inner ear at 3 Tesla (T). Thirty ears of 15 subjects (7 men and 8 women; age range, 22-71 years; average age, 50 years) without evidence of ear problems were examined on a whole-body 3T MR scanner with both 3D DRIVE and 3D bFFE sequences by using an 8-channel sensitivity encoding (SENSE) head coil. Two neuroradiologists reviewed both MR images with particular attention to the visibility of the anatomic structures, including the four branches of the cranial nerves within the IAC and the anatomic structures of the cochlea, vestibule, and three semicircular canals. Although both techniques provided images of relatively good quality, the 3D DRIVE sequence was somewhat superior to the 3D bFFE sequence. The discrepancies were more prominent for the basal turn of the cochlea, vestibule, and all semicircular canals, and were thought to be attributable to the greater magnetic susceptibility artifacts inherent to gradient-echo techniques such as bFFE. Because of its higher image quality and fewer susceptibility artifacts, we highly recommend 3D DRIVE imaging as the MR imaging method of choice for the IAC and inner ear

  11. Parallel 3-D image processing for nuclear emulsion

    The history of the nuclear plate is explained. The first nuclear plates, pellicles coated with 600 μm of emulsion, were produced in Europe. In Japan, the Emulsion Cloud Chamber (ECC) using thin-emulsion (50 μm) nuclear plates was developed in 1960. Then the semi-automatic analyzer (1971) and the automatic analyzer (1980), the Track Selector (TS), which stored 16 layer images of 512 x 512 x 16 pixels in memory, were developed. Moreover, the NTS (New Track Selector), a faster analyzer, was produced in 1996 for the analysis of the results of the CHORUS experiment. Simultaneous readout of 16 layer images had already been carried out, but the UTS (Ultra Track Selector) made possible the progressive processing of data from 16 layers and the determination of tracks at all angles. Direct detection of the tau neutrino (ντ) was studied by DONUT (FNAL E872) using the UTS and nuclear plates. The neutrino beam was produced by an 800 GeV proton beam hitting a fixed target. About 1100 neutrino reaction events were observed during six months of irradiation, and 203 events were detected. Four examples are shown in this paper. The OPERA experiment by SK is explained. (S.Y.)

  12. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3×3×5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  13. Radon transport modelling: User's guide to RnMod3d

    Andersen, Claus Erik

    2000-01-01

    RnMod3d is a numerical computer model of soil-gas and radon transport in porous media. It can be used, for example, to study radon entry from soil into houses in response to indoor-outdoor pressure differences or changes in atmospheric pressure. It can also be used for flux calculations of radon from the soil surface or to model radon exhalation from building materials such as concrete. The finite-volume model is a technical research tool, and it cannot be used meaningfully without a good understanding of the involved physical equations. Some understanding of numerical mathematics and the programming language Pascal is also required. Originally, the code was developed for internal use at Risø only. With this guide, however, it should be possible for others to use the model. Three-dimensional steady-state or transient problems with Darcy flow of soil gas and combined generation, radioactive...

  14. Contributions in compression of 3D medical images and 2D images

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, if the medical community has sided until now with lossless compression, most applications suffer from compression ratios which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. So, we propose a new lossy coding scheme based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  15. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three images of different dimensionality representing the same anatomies. First, an in-vitro MRI measurement is performed which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object performed by image

  16. D3D augmented reality imaging system: proof of concept in mammography

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas (1), Emanuel F Petricoin (2), Lance Liotta (2), Eugene Wilson (3). Affiliations: (1) Department of Radiology, Stanford University, Palo Alto, CA; (2) Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA; (3) Department of Radiology, Fort Benning, Columbus, GA, USA. Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  17. Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)

    The purpose of this study was to evaluate the anatomy of the cranial nerves running in and around the cavernous sinus; to this end, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data of 20 cavernous regions (both sides) in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and the other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experience in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended from 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high-resolution 'cranial nerve imaging', which visualizes the whole length of the cranial nerves, including the segments surrounded by flowing blood as in the cavernous sinus region. (author)

  18. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions which is sufficiently similar to real CT data to enable registration of x-ray to MRI with comparable accuracy as registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN)-regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
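
The kNN-regression idea can be sketched as below: co-registered MR intensities and gradient magnitudes serve as per-voxel features, and a regressor trained against real CT predicts Hounsfield units for new MR data. The two-channel feature set, the Gaussian gradient scale and the use of scikit-learn's KNeighborsRegressor are simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from sklearn.neighbors import KNeighborsRegressor

def voxel_features(mri_channels, sigma=1.0):
    """Stack multi-spectral intensities and their gradient magnitudes per voxel."""
    feats = []
    for ch in mri_channels:                       # e.g. different MR contrasts
        feats.append(ch)
        feats.append(gaussian_gradient_magnitude(ch, sigma))
    return np.stack([f.ravel() for f in feats], axis=1)

# Training data: co-registered MR channels and a real CT volume (synthetic here).
rng = np.random.default_rng(4)
mr_train = [rng.random((16, 32, 32)) for _ in range(2)]
ct_train = 1000 * mr_train[0] - 200 * mr_train[1]          # fake HU relationship

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(voxel_features(mr_train), ct_train.ravel())

# Apply to a new subject's MR data to obtain pseudo-CT Hounsfield units.
mr_new = [rng.random((16, 32, 32)) for _ in range(2)]
pseudo_ct = knn.predict(voxel_features(mr_new)).reshape(mr_new[0].shape)
print(pseudo_ct.shape)
```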

  19. A high-level 3D visualization API for Java and ImageJ

    Longair Mark

    2010-05-01

    Full Text Available Abstract Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  20. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  1. Automatic extraction of soft tissues from 3D MRI images of the head

    This paper presents an automatic extraction method of soft tissues from 3D MRI images of the head. A 3D region growing algorithm is used to extract soft tissues such as cerebrum, cerebellum and brain stem. Four information sources are used to control the 3D region growing. Model of each soft tissue has been constructed in advance and provides a 3D region growing space. Head skin area which is automatically extracted from input image provides an unsearchable area. Zero-crossing points are detected by using Laplacian operator, and by examining sign change between neighborhoods. They are used as a control condition in the 3D region growing process. Graylevels of voxels are also directly used to extract each tissue region as a control condition. Experimental results applied to 19 samples show that the method is successful. (author)
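
A minimal 3D region-growing sketch in the spirit of the description above is given below. It uses only grey-level bounds and a forbidden (skin) mask as control conditions and omits the zero-crossing test and the tissue models, so it illustrates the growth loop rather than the authors' full method.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi, forbidden=None):
    """Grow a 3D region from `seed`, accepting 6-connected voxels whose grey
    level lies in [lo, hi] and which are not in the `forbidden` mask (e.g. an
    automatically extracted skin area)."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                continue
            if grown[nz, ny, nx] or not (lo <= volume[nz, ny, nx] <= hi):
                continue
            if forbidden is not None and forbidden[nz, ny, nx]:
                continue
            grown[nz, ny, nx] = True
            queue.append((nz, ny, nx))
    return grown

# Usage on a toy volume with a bright "tissue" block:
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 100
print(region_grow_3d(vol, (10, 10, 10), 50, 150).sum())  # 1000 voxels
```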

  2. Robust Adaptive Segmentation of 3D Medical Images with Level Sets

    Baillard, Caroline; Barillot, Christian; Bouthemy, Patrick

    2000-01-01

    This paper is concerned with the use of the Level Set formalism to segment anatomical structures in 3D medical images (ultrasound or magnetic resonance images). A closed 3D surface propagates towards the desired boundaries through the iterative evolution of a 4D implicit function. The major contribution of this work is the design of a robust evolution model based on adaptive parameters depending on the data. First the step size and the external propagation force factor, both usually predeterm...

  3. Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Guo, Jianya; Mei, Xi; Tang, Kun

    2012-01-01

    Background Traditional anthropometric studies of the human face rely on manual measurements of simple features, which are labor intensive and lack full comprehensive inference. Dense surface registration of three-dimensional (3D) human facial images holds great potential for high-throughput quantitative analyses of complex facial traits. However there is a lack of automatic high-density registration methods for 3D facial images. Furthermore, current approaches of landmark recognition require fu...

  4. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Xing Zhao; Jing-jing Hu; Peng Zhang

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed...

  5. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

    Purpose. To extend the functionality of radiographic / fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREΦ) from planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
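
The gradient correlation (GC) objective can be sketched as the normalized cross-correlation of gradient images between a measured projection and a simulated forward projection, as below. The CMA-ES search over the component pose is not shown, and the Sobel-based gradients and the synthetic projection are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_correlation(measured, simulated):
    """Gradient correlation (GC) between a measured projection and a simulated
    forward projection: mean of the normalized cross-correlations of the
    vertical and horizontal gradient images."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
        return float((a * b).sum() / denom)
    gc = 0.0
    for axis in (0, 1):                      # vertical and horizontal gradients
        gc += ncc(sobel(measured, axis=axis), sobel(simulated, axis=axis))
    return gc / 2.0

# A CMA-ES optimizer would evaluate -gradient_correlation(...) over the pose
# parameters of the known component; here we simply score two images.
rng = np.random.default_rng(5)
proj = rng.random((64, 64))
print(f"GC(self) = {gradient_correlation(proj, proj):.3f}")   # ~1.0
```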

  6. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fail to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video laser Schlieren imaging and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have - up until now - also required excessively bulky, power hungry equipment. However, with the advent of near-IR diode

  7. Fast Susceptibility-Weighted Imaging (SWI) with 3D Short-Axis Propeller (SAP)-EPI

    Holdsworth, Samantha J.; Yeom, Kristen W.; Moseley, Michael E.; Skare, S.

    2014-01-01

    Purpose Susceptibility-Weighted Imaging (SWI) in neuroimaging can be challenging due to the long scan times of 3D Gradient Recalled Echo (GRE), while faster techniques such as 3D interleaved EPI (iEPI) are prone to motion artifacts. Here we outline and implement a 3D Short-Axis Propeller Echo-Planar Imaging (SAP-EPI) trajectory as a faster, motion-correctable approach for SWI. Methods Experiments were conducted on a 3T MRI system. 3D SAP-EPI, 3D iEPI, and 3D GRE SWI scans were acquired on two volunteers. Controlled motion experiments were conducted to test the motion-correction capability of 3D SAP-EPI. 3D SAP-EPI SWI data were acquired on two pediatric patients as a potential alternative to the 2D GRE used clinically. Results While the 3D GRE images had a better target resolution (0.47 × 0.94 × 2 mm, scan time = 5 min), the iEPI and SAP-EPI images (resolution = 0.94 × 0.94 × 2 mm) were acquired in a faster scan time (1:52 min) with twice the brain coverage. SAP-EPI showed motion-correction capability and some immunity to undersampling from rejected data. Conclusion While 3D SAP-EPI suffers from some geometric distortion, its short scan time and motion-correction capability suggest that SAP-EPI may be a useful alternative to GRE and iEPI for use in SWI, particularly in uncooperative patients. PMID:24956237

  8. Wide area 2D/3D imaging development, analysis and applications

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  9. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  10. The application of camera calibration in range-gated 3D imaging technology

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by LF Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging technology can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to the constraints of the development of key components such as narrow-pulse lasers and gated imaging devices, the research progressed slowly in the following few decades. Not until the beginning of this century, as the hardware technology continued to mature, did this technology develop rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging technology can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to the slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal parameters and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is

  11. MRI Sequence Images Compression Method Based on Improved 3D SPIHT

    蒋行国; 李丹; 陈真诚

    2013-01-01

    Objective: To propose an effective compression method for MRI sequence images that addresses the storage and transmission of large numbers of such images. Methods: To reduce the computational complexity of the 3D Set Partitioning in Hierarchical Trees (SPIHT) algorithm and to remove the redundant testing of D-type and L-type list entries, an improved 3D SPIHT method was presented; two groups of MRI sequence images with different slice counts and slice thicknesses were used as test cases. In addition, exploiting the inter-slice correlation of MRI sequences, a scheme in which the images are divided into groups and then coded/decoded was proposed. Combined with the 3D wavelet transform and the improved 3D SPIHT coder, MRI sequence image compression was achieved. Results: Compared with 2D SPIHT and 3D SPIHT, the grouping scheme combined with the improved 3D SPIHT method produced better reconstructed images, and the peak signal-to-noise ratio (PSNR) was improved by about 1-8 dB. Conclusion: At the same bit rate, the grouping scheme combined with the improved 3D SPIHT method improves PSNR and the quality of the recovered images, and better addresses the storage and transmission of large numbers of MRI sequence images.
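
    To make the grouping-plus-3D-wavelet idea concrete, here is a minimal Python sketch using PyWavelets; simple hard thresholding of the 3D DWT coefficients stands in for the (improved) 3D SPIHT bit-plane coder, and the group size, wavelet choice and test data are illustrative assumptions.

```python
import numpy as np
import pywt

def compress_group(slices, wavelet="db2", level=2, keep=0.05):
    """Lossy-compress one group of correlated MRI slices with a 3D wavelet transform.

    `keep` is the fraction of largest-magnitude coefficients retained; this
    hard-thresholding step stands in for the improved 3D SPIHT coder.
    """
    volume = np.stack(slices, axis=0).astype(float)
    coeffs = pywt.wavedecn(volume, wavelet, level=level)      # 3D DWT of the group
    arr, info = pywt.coeffs_to_array(coeffs)                  # flatten for thresholding
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0                           # discard small coefficients
    coeffs = pywt.array_to_coeffs(arr, info, output_format="wavedecn")
    rec = pywt.waverecn(coeffs, wavelet)
    return rec[:volume.shape[0], :volume.shape[1], :volume.shape[2]]

def psnr(ref, rec):
    mse = np.mean((ref.astype(float) - rec) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

group = [np.random.rand(256, 256) * 255 for _ in range(8)]    # one group of 8 slices
rec = compress_group(group)
print("PSNR: %.1f dB" % psnr(np.stack(group), rec))
```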

  12. Rigid 2D/3D slice-to-volume registration and its application on fluoroscopic CT images

    Registration of single slices from FluoroCT, CineMR, or interventional magnetic resonance imaging to three-dimensional (3D) volumes is a special aspect of the two-dimensional (2D)/3D registration problem. Rather than digitally rendered radiographs (DRR), single 2D slice images obtained during interventional procedures are compared to oblique reformatted slices from a high-resolution 3D scan. Due to the lack of perspective information and the different imaging geometry, convergence behavior differs significantly from 2D/3D registration applications comparing DRR images with conventional x-ray images. We have implemented a number of merit functions and local and global optimization algorithms for slice-to-volume registration of computed tomography (CT) and FluoroCT images. These methods were tested on phantom images derived from clinical scans for liver biopsies. Our results indicate that good registration accuracy in the range of 0.5° and 1.0 mm is achievable using simple cross correlation and repeated application of local optimization algorithms. Typically, a registration took approximately 1 min on a standard personal computer. Other merit functions such as pattern intensity or normalized mutual information did not perform as well as cross correlation in this initial evaluation. Furthermore, it appears that the use of global optimization algorithms such as simulated annealing does not improve the reliability or accuracy of the registration process. These findings were also confirmed in a preliminary registration study on five clinical scans. These experiments have, however, shown that a strict breath-hold protocol is essential when using rigid registration techniques for lesion localization in image-guided biopsy retrieval. Finally, further possible applications of slice-to-volume registration are discussed.
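
    The following is a minimal sketch of the slice-to-volume idea, a cross correlation merit function driven by a local optimizer, using SciPy; the six-parameter rigid pose, the sampling scheme and the synthetic test volume are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def sample_oblique_slice(volume, params, shape):
    """Resample an oblique slice from `volume` for rigid pose (rx, ry, rz, tx, ty, tz)."""
    rx, ry, rz, tx, ty, tz = params
    R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
    h, w = shape
    jj, ii = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    plane = np.stack([ii.ravel(), jj.ravel(), np.zeros(ii.size)])     # slice plane at z = 0
    pts = R @ plane + np.array([[tx], [ty], [tz]]) + np.array(volume.shape)[:, None] / 2
    return map_coordinates(volume, pts, order=1).reshape(shape)       # trilinear interpolation

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register(volume, fluoro_slice, x0=np.zeros(6)):
    """Maximise cross correlation between the 2D slice and a reformatted CT slice."""
    cost = lambda p: -ncc(sample_oblique_slice(volume, p, fluoro_slice.shape), fluoro_slice)
    return minimize(cost, x0, method="Powell").x

volume = np.random.rand(64, 64, 64)
observed = sample_oblique_slice(volume, [2.0, -1.0, 0.5, 1.0, 0.0, -2.0], (48, 48))
print(register(volume, observed))      # should recover a pose close to the one above
```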

  13. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

    3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes, and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.

  14. SEGMENTATION OF UAV-BASED IMAGES INCORPORATING 3D POINT CLOUD INFORMATION

    A. Vetrivel

    2015-03-01

    Numerous applications related to urban scene analysis demand automatic recognition of buildings and their distinct sub-elements. If LiDAR data are available, 3D information alone could be leveraged for the segmentation; however, this poses several risks, for instance that in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image-based segmentation is performed, geometric features (e.g., normal orientation, planarity) are not readily available, which renders the detection of distinct building sub-elements with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used to infer the geometric characteristics of the roofs and facades of the building. However, image-based 3D points are noisy, error-prone and often contain gaps, so segmentation in 3D space is not appropriate. We therefore propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using a region-growing approach that utilizes various radiometric and 3D point cloud features. The developed method was tested using two data sets obtained from UAV images with a ground resolution of around 1-2 cm. The developed method accurately segmented most of the building elements when compared to plane-based segmentation using the 3D point cloud alone.

  15. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
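
    As a simplified stand-in for the auto-focusing procedure described above (the Cell-IQ algorithm itself is not public), the sketch below scores each plane of a phase-contrast z-stack with the variance of its Laplacian and picks the sharpest one; the data and names are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace

def best_focus_index(z_stack):
    """Pick the best-focused frame in a z-stack.

    The variance of the Laplacian is used as a simple sharpness score; the frame
    with the highest score is taken as the in-focus plane for the tracked cell.
    """
    scores = [float(np.var(laplace(frame.astype(float)))) for frame in z_stack]
    return int(np.argmax(scores)), scores

z_stack = [np.random.rand(128, 128) for _ in range(11)]   # 11 planes through the construct
idx, scores = best_focus_index(z_stack)
print("sharpest plane:", idx)
```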

  16. 3D multiple-point statistics simulation using 2D training images

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.

  17. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  19. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, it is required to develop a system for 3D capture of archives in museums and galleries. In visualizing of 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems in museum and Internet or virtual museum via World Wide Web. To archive our goal, we have developed the multi-spectral imaging systems to record and estimate reflectance spectra of the art paints based on principal component analysis and Wiener estimation method. In this paper, Gonio photometric imaging method is introduced for recording of 3D object. Five-band images of the object are taken under seven different illuminants angles. The set of five-band images are then analyzed on the basis of both dichromatic reflection model and Phong model to extract Gonio photometric information of the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated and images that are synthesized with 3D wire frame image taken by 3D digitizer are also presented.
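
    A minimal sketch of the Phong part of such an analysis, predicting the reflected intensity for a new illumination direction once per-pixel normals, diffuse (body) and specular (interface) terms have been estimated; all arrays and constants below are illustrative assumptions.

```python
import numpy as np

def phong_render(normals, albedo, ks, shininess, light_dir, view_dir):
    """Predict per-pixel reflected intensity with the Phong model.

    normals   : H x W x 3 unit surface normals
    albedo    : H x W diffuse (body) reflectance, per the dichromatic model
    ks        : H x W specular (interface) strength
    shininess : Phong exponent controlling gloss sharpness
    """
    l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
    v = np.asarray(view_dir, float);  v /= np.linalg.norm(v)
    n_dot_l = np.clip((normals * l).sum(-1), 0.0, None)          # diffuse term
    r = 2.0 * n_dot_l[..., None] * normals - l                   # mirror reflection of l
    r_dot_v = np.clip((r * v).sum(-1), 0.0, None)
    return albedo * n_dot_l + ks * r_dot_v ** shininess          # diffuse + specular

h, w = 64, 64
normals = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])  # flat patch facing +z
img = phong_render(normals, albedo=np.full((h, w), 0.6), ks=np.full((h, w), 0.3),
                   shininess=20, light_dir=[0.3, 0.2, 1.0], view_dir=[0, 0, 1.0])
print(img.mean())
```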

  20. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Ruisong Ye

    2012-07-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is embedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image-processing attacks, demonstrating strong robustness against a variety of attacks.
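
    The sketch below mirrors the overall pipeline, chaotic scrambling of the secret image followed by embedding in a wavelet sub-band of the host, but it substitutes a simple logistic map for the paper's 3D sawtooth map and uses additive embedding in the diagonal detail band; the image sizes, key values and embedding strength are illustrative.

```python
import numpy as np
import pywt

def chaotic_permutation(n, x0=0.37, r=3.99):
    """Pseudo-random permutation of 0..n-1 from a logistic map
    (a placeholder for the paper's 3D sawtooth map orbits)."""
    x, vals = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        vals[i] = x
    return np.argsort(vals)

def embed(host, secret, alpha=0.05):
    """Scramble `secret`, then merge it into the host's diagonal DWT sub-band."""
    perm = chaotic_permutation(secret.size)
    scrambled = secret.ravel()[perm].reshape(secret.shape)
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")
    cD = cD + alpha * scrambled[:cD.shape[0], :cD.shape[1]]   # secret sized to the sub-band
    return pywt.idwt2((cA, (cH, cV, cD)), "haar"), perm

host = np.random.rand(256, 256) * 255
secret = np.random.rand(128, 128) * 255                       # half-size fits the sub-band
stego, key = embed(host, secret)
print(np.abs(stego - host).max())                             # small distortion of the host
```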

  1. Comparison of 3D Synthetic Aperture Imaging and Explososcan using Phantom Measurements

    Rasmussen, Morten Fischer; Férin, Guillaume; Dufait, Rémi;

    2012-01-01

    In this paper, initial 3D ultrasound measurements from a 1024-channel system are presented. Measurements of 3D synthetic aperture imaging (SAI) and Explososcan are presented and compared. Explososcan is the ’gold standard’ for real-time 3D medical ultrasound imaging. SAI is compared to Explososcan by using tissue and wire phantom measurements. The measurements are carried out using a 1024-element 2D transducer and the 1024-channel experimental ultrasound scanner SARUS. To make a fair comparison, the two imaging techniques use the same number of active channels, the same number of emissions per frame, and they emit the same amount of energy per frame. The measurements were performed with parameters similar to standard cardiac imaging, with 256 emissions to image a volume spanning 90°×90° and 150 mm in depth. This results in a frame rate of 20 Hz. The number of active channels is set to 316 from...

  2. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  4. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  5. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats.

    Tang, Jianbo; Coleman, Jason E; Dai, Xianjin; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolutions were ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping the cerebral oxygen saturation via multi-wavelength irradiation in behaving hyperoxic rats. In addition, we demonstrated that 3D-wPAT could be used for monitoring sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain study in behaving rodents. PMID:27146026

  6. Flatbed-type 3D display systems using integral imaging method

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays. We achieved a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel, which allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that the 3-D images are safe for viewers is very important, so we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays give better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  7. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main development platform because applications built with it can be deployed on different platforms. The clipped contour data and UAV image data were processed and exported to web format using Unity 3D, and then published to a web server so that the effectiveness of the different 3D terrain data (contour data) draped with UAV images could be compared. The effectiveness is compared based on data size, loading time (during office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better suited for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, and therefore help decision makers and planners in this field decide which contour interval is applicable for their tasks.

  8. Development of 2D, pseudo 3D and 3D x-ray imaging for early diagnosis of breast cancer and rheumatoid arthritis

    As reported in this paper, refraction-based x-ray medical imaging using plane-wave x-rays from synchrotron radiation can be used to visualize soft tissue. The method comprises two-dimensional (2D) x-ray dark-field imaging (XDFI), pseudo-3D (sliced) x-ray imaging by tomosynthesis based on XDFI, and 3D x-ray imaging using a newly devised algorithm. We aim to contribute to the early diagnosis of breast cancer, a major cancer among women, and of rheumatoid arthritis, which cannot otherwise be detected in its early stages. (author)

  9. The diagnostic value of 3D spiral CT imaging of cholangiopancreatic ducts on obstructive jaundice

    Linquan Wu; Xiangbao Yin; Qingshan Wang; Bohua Wu; Xiao Li; Huaqun Fu

    2011-01-01

    Objective: Computerized tomography (CT) plays an important role in the diagnosis of diseases of the biliary tract, and three-dimensional (3D) spiral CT imaging has recently come into gradual use for surgical diseases. This study was designed to evaluate the diagnostic value of 3D spiral CT imaging of the cholangiopancreatic ducts in obstructive jaundice. Methods: Thirty patients with obstructive jaundice received B-mode ultrasonography, CT, percutaneous transhepatic cholangiography (PTC) or endoscopic retrograde cholangiopancreatography (ERCP), and 3D spiral CT imaging of the cholangiopancreatic ducts preoperatively. The diagnostic concordance rates of these examinations were then compared against the operative findings. Results: The diagnostic concordance rate of 3D spiral CT imaging of the cholangiopancreatic ducts was higher than that of B-mode ultrasonography, CT, or PTC/ERCP alone, and it showed clear images of the biliary tree and of pathological changes. For malignant obstructive jaundice, the technique clearly displayed the relationship of the tumor to the liver tissue, biliary ducts and blood vessels, as well as intrahepatic metastases. Conclusion: 3D spiral CT imaging of the cholangiopancreatic ducts has significant value in obstructive biliary disease, providing effective evidence on the feasibility of tumor resection and on surgical options.

  10. Midsagittal plane extraction from brain images based on 3D SIFT

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°. (paper)
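
    To illustrate the final plane-fitting stage, the sketch below fits a symmetry plane to the midpoints of matched symmetric feature pairs with an ordinary SVD plane fit, used here as a simple stand-in for the paper's iterative least-median-of-squares regression; the synthetic mirrored point sets are illustrative.

```python
import numpy as np

def fit_midsagittal_plane(pts_left, pts_right):
    """Fit the mid-sagittal plane from matched symmetric feature pairs.

    pts_left, pts_right : N x 3 arrays of matched 3D keypoints on either side
    of the fissure.  Midpoints of the pairs lie on the symmetry plane; a
    least-squares (SVD) plane fit is used as a stand-in for least-median regression.
    Returns (unit normal, point on plane).
    """
    mid = 0.5 * (pts_left + pts_right)
    centroid = mid.mean(axis=0)
    _, _, vt = np.linalg.svd(mid - centroid)
    normal = vt[-1]                            # direction of least variance
    return normal / np.linalg.norm(normal), centroid

rng = np.random.default_rng(0)
left = rng.normal(size=(200, 3)) + [10.0, 0.0, 0.0]
right = left * [-1.0, 1.0, 1.0]                # perfect mirror pairs about the plane x = 0
n, p0 = fit_midsagittal_plane(left, right)
print(n, p0)                                   # normal ~ [1, 0, 0], point near x = 0
```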

  11. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai;

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken. ICA is an effective tool for connective tissue disease detection in the presence of sparse data using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification...
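
    A minimal sketch of the ICA-plus-classification idea using scikit-learn; the shape feature matrix below is random placeholder data (in the paper it would come from the aligned aortic segmentations), and the number of components and the logistic-regression classifier are illustrative choices, not the authors'.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: each row is a flattened, point-correspondence-aligned aortic
# surface (x, y, z of landmarks at end-diastole); labels 0 = normal, 1 = disorder.
rng = np.random.default_rng(1)
X = rng.normal(size=(31, 300))
y = np.array([0] * 17 + [1] * 14)

ica = FastICA(n_components=8, random_state=0, max_iter=1000)
S = ica.fit_transform(X)                    # independent components describing shape variation
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, S, y, cv=5)   # cross-validated classification on the components
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```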

  12. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu, Jinhui; Nazareth, Daryl P.

    2005-01-01

    Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the “cloud” of points) on reconstruction errors for one of these techniques developed in our laboratory. Five type...
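
    For context, the sketch below shows the standard linear (DLT) triangulation of a single 3D point from two calibrated views, the basic operation whose accuracy a point-configuration study of this kind examines; the projection geometry is a made-up toy example, not the study's setup.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2   : 3x4 projection matrices of the two angiographic views
    uv1, uv2 : corresponding image points (u, v) in each view
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                              # dehomogenise

# Toy geometry: two views 90 degrees apart, focal length 1000 px, principal point (256, 256).
K = np.array([[1000, 0, 256], [0, 1000, 256], [0, 0, 1.0]])
R2 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0.0]])
P1 = K @ np.hstack([np.eye(3), [[0], [0], [1000.0]]])
P2 = K @ np.hstack([R2, [[0], [0], [1000.0]]])
X_true = np.array([5.0, -3.0, 8.0, 1.0])
uv1 = P1 @ X_true; uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ X_true; uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))                 # ~ [5, -3, 8]
```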

  13. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Khalid M. Hosny; Hafez, Mohamed A.

    2012-01-01

    An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to add m...

  14. Automated Algorithm for Carotid Lumen Segmentation and 3D Reconstruction in B-mode images

    Jorge M. S. Pereira; João Manuel R. S. Tavares

    2011-01-01

    The B-mode imaging system is one of the most popular systems used in the medical area; however, it imposes several difficulties on the image segmentation process due to low contrast and noise. Despite these difficulties, this imaging mode is often used in the study and diagnosis of carotid artery disease. In this paper, a novel automated algorithm for carotid lumen segmentation and 3-D reconstruction in B-mode images is described.

  15. Learning Methods for Recovering 3D Human Pose from Monocular Images

    Agarwal, Ankur; Triggs, Bill

    2004-01-01

    We describe a learning based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We ev...

  16. Digital Image Analysis of Cells : Applications in 2D, 3D and Time

    Pinidiyaarachchi, Amalka

    2009-01-01

    Light microscopes are essential research tools in biology and medicine. Cell and tissue staining methods have improved immensely over the years and microscopes are now equipped with digital image acquisition capabilities. The image data produced require development of specialized analysis methods. This thesis presents digital image analysis methods for cell image data in 2D, 3D and time sequences. Stem cells have the capability to differentiate into specific cell types. The mechanism behind di...

  17. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  18. Single-pixel 3D imaging with time-based depth resolution

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike conventional imaging approaches that use a pixelated detector array, single-pixel imaging acquires information by sampling the scene with a basis of projected patterns, such as Hadamard patterns. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128×128-pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate of up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
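
    A minimal intensity-only sketch of single-pixel imaging with a Hadamard sampling basis; the time-of-flight depth measurement of the paper is not modelled here, and the scene and pattern ordering are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(scene):
    """Reconstruct an n x n scene from single-pixel measurements under Hadamard patterns.

    Each measurement is the total light collected by one photodiode while one
    Hadamard pattern is projected; depth would additionally come from the photon
    time of flight, which this intensity-only sketch omits.
    """
    n = scene.shape[0]
    H = hadamard(n * n)                          # orthogonal +/-1 sampling basis
    x = scene.ravel().astype(float)
    y = H @ x                                    # one scalar measurement per pattern
    return (H.T @ y / (n * n)).reshape(n, n)     # inverse transform (H H^T = n^2 I)

scene = np.zeros((32, 32)); scene[8:20, 10:22] = 1.0
rec = single_pixel_reconstruct(scene)
print(np.allclose(rec, scene))                   # exact recovery with a full pattern set
```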

  19. 3D X-ray microscopy: image formation, tomography and instrumentation

    Selin, Mårten

    2016-01-01

    Tomography in soft X-ray microscopy is an emerging technique for obtaining quantitative 3D structural information about cells. One of its strengths, compared with other techniques, is that it can image intact cells in their near-native state at a resolution of a few tens of nanometres, without staining. However, the methods for reconstructing the 3D data rely on algorithms that assume projection data, which the images generally are not, owing to the imaging system's limited depth of focus. To bring out the full pot...

  20. Fully 3D PET image reconstruction with a 4D sinogram blurring kernel

    Tohme, Michel S.; Qi, Jinyi [California Univ., Davis, CA (United States). Dept. of Biomedical Engineering]; Zhou, Jian

    2011-07-01

    Accurately modeling PET system response is essential for high-resolution image reconstruction. Traditionally, sinogram blurring effects are modeled as a 2D blur in each sinogram plane. Such 2D blurring kernel is insufficient for fully 3D PET data, which has four dimensions. In this paper, we implement a fully 3D PET image reconstruction using a 4D sinogram blurring kernel estimated from point source scans and perform phantom experiments to evaluate the improvements in image quality over methods with existing 2D blurring kernels. The results show that the proposed reconstruction method can achieve better spatial resolution and contrast recovery than existing methods. (orig.)

  1. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified, including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  2. DART : a 3D model for remote sensing images and radiative budget of earth surfaces

    Gastellu-Etchegorry, J.P.; Grau, E.; Lauret, N.

    2012-01-01

    Modeling the radiative behavior and the energy budget of land surfaces is relevant for many scientific domains, such as the study of vegetation functioning with remotely acquired information. The DART (Discrete Anisotropic Radiative Transfer) model has been developed since 1992 and is one of the most complete 3D models in this domain. It simulates radiative transfer (R.T.) in the optical domain: the 3D radiative budget and remote sensing images (i.e., radiance, reflectance, brightness temperature) of vegeta...

  3. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  4. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach. PMID:27156015

  5. Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images

    Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

    2013-04-01

    The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses with usually one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk-the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high-much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.
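
    The sketch below illustrates the kind of spectral bookkeeping such a model performs: left-eye crosstalk is estimated as the ratio of the unintended channel's modulation to the intended channel's modulation after integrating illuminant, paper, ink and lens spectra over wavelength. All spectra here are synthetic Gaussian placeholders, not measured data from the paper.

```python
import numpy as np

wl = np.arange(400, 701, 5)                          # wavelength grid, nm

def gauss(center, width):                            # synthetic smooth spectra (placeholders)
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illuminant = np.ones_like(wl, dtype=float)           # flat viewing illuminant
paper = np.full(wl.shape, 0.85)                      # paper reflectance
ink_left = 1.0 - 0.9 * gauss(620, 50)                # ink carrying the left view: absorbs red
ink_right = 1.0 - 0.9 * gauss(480, 60)               # ink carrying the right view: absorbs blue-green
lens_left = gauss(620, 40)                           # red (left-eye) lens transmittance

def modulation(ink, lens):
    """How strongly one printed channel modulates the light reaching one eye."""
    white = np.trapz(illuminant * paper * lens, wl)          # un-inked (white) pixel
    inked = np.trapz(illuminant * paper * ink * lens, wl)    # fully inked pixel
    return white - inked

intended = modulation(ink_left, lens_left)           # left image as seen by the left eye
leakage = modulation(ink_right, lens_left)           # right image leaking into the left eye
print("left-eye crosstalk: %.1f %%" % (100.0 * leakage / intended))
```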

  6. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that requires the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes have the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach, demonstrating the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  7. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and recently compressed-sensing-based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or on real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
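
    A toy sketch of the ART (Kaczmarz) iteration at the core of such a simulator, with an optional smoothing step standing in for the TV regularisation of ART+TV; the random sparse matrix below is only a placeholder for a real limited-angle DBT system matrix.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def art_reconstruct(A, b, shape, n_iter=20, relax=0.2, smooth=0.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps) for a small 2D problem.

    A : (n_rays x n_pixels) system matrix, b : measured projections.
    `smooth` > 0 blends in a 3x3 mean filter after each sweep, a crude
    stand-in for the total-variation regularisation of ART+TV.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in np.nonzero(row_norms)[0]:
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]   # Kaczmarz update
        x = np.clip(x, 0.0, None)                                   # non-negativity
        if smooth > 0.0:
            img = x.reshape(shape)
            x = ((1.0 - smooth) * img + smooth * uniform_filter(img, 3)).ravel()
    return x.reshape(shape)

# Toy example: 16x16 phantom and a random sparse "projection" matrix.
rng = np.random.default_rng(0)
phantom = np.zeros((16, 16)); phantom[5:11, 4:12] = 1.0
A = (rng.random((400, 256)) < 0.05).astype(float)
b = A @ phantom.ravel()
rec = art_reconstruct(A, b, shape=(16, 16), smooth=0.1)
print(float(np.abs(rec - phantom).mean()))
```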

  8. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

  9. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and are memory-inefficient. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. First, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce the geometric distortion introduced when mapping 2D texture onto the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with known exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates the occurrence of texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  10. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.

  11. 3-D MRI/CT fusion imaging of the lumbar spine

    The objective was to demonstrate the feasibility of MRI/CT fusion in depicting lumbar nerve root compromise. We combined three-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of the neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of the nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of the nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging for the lumbar vertebral region successfully revealed the relationship between bone structures (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis (DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  12. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach for three-dimensional face recognition based on extracting curvature maps from range images. Four types of curvature maps are used: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps serve as features for 3D face recognition. The dimension of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique: from the three computed SVD components, the non-negative singular values (the 'S' matrix) are ranked and used as the feature vector. In the proposed method, two pair-wise curvature combinations are evaluated, the Mean-Maximum curvature pair and the Gaussian-Mean curvature pair, and their recognition rates are compared. The automated 3D face recognition system is evaluated in several scenarios: frontal pose with expression and illumination variation, frontal faces together with registered faces, registered faces only, and faces registered from different pose orientations about the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
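
    As an illustration of the feature-extraction stage, the sketch below computes Gaussian, mean, maximum and minimum curvature maps from a range image and ranks the singular values of one map as a feature vector; the synthetic bump used in place of a face and all parameters are illustrative.

```python
import numpy as np

def curvature_maps(depth):
    """Gaussian (K), mean (H), maximum and minimum curvature maps from a range image z(x, y)."""
    zy, zx = np.gradient(depth.astype(float))          # first derivatives
    zyy, zyx = np.gradient(zy)                          # second derivatives
    zxy, zxx = np.gradient(zx)
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2                                                   # Gaussian
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)  # mean
    disc = np.sqrt(np.clip(H ** 2 - K, 0.0, None))
    return K, H, H + disc, H - disc                     # principal curvatures k_max, k_min

# Synthetic range image: a smooth bump standing in for a face scan.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
depth = np.exp(-4 * (x ** 2 + y ** 2))
K, H, kmax, kmin = curvature_maps(depth)
sv = np.linalg.svd(H, compute_uv=False)                 # ranked singular values as features
print(sv[:5])
```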

  13. Comparison of 3-D Synthetic Aperture Phased-Array Ultrasound Imaging and Parallel Beamforming

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    This paper demonstrates that synthetic aperture imaging (SAI) can be used to achieve real-time 3-D ultrasound phased-array imaging. It investigates whether SAI increases the image quality compared with the parallel beamforming (PB) technique for real-time 3-D imaging. Data are obtained using both simulations and measurements with an ultrasound research scanner and a commercially available 3.5-MHz 1024-element 2-D transducer array. To limit the probe cable thickness, 256 active elements are used in transmit and receive for both techniques. The two imaging techniques were designed for cardiac imaging, which requires sequences designed for imaging down to 15 cm of depth and a frame rate of at least 20 Hz. The imaging quality of the two techniques is investigated through simulations as a function of depth and angle. SAI improved the full-width at half-maximum (FWHM) at low steering angles by 35%, and the 20-d...

  14. Comparison of S3D Display Technology on Image Quality and Viewing Experiences: Active-Shutter 3d TV vs. Passive-Polarized 3DTV

    Yu-Chi Tai, PhD

    2014-05-01

    Background: Stereoscopic 3D TV systems convey depth perception to the viewer by delivering to each eye separately filtered images that represent two slightly different perspectives. Currently two primary technologies are used in S3D televisions: Active shutter systems, which use alternate frame sequencing to deliver a full-frame image to one eye at a time at a fast refresh rate, and Passive polarized systems, which superimpose the two half-frame left-eye and right-eye images at the same time through different polarizing filters. Methods: We compare visual performance in discerning details and perceiving depth, as well as the comfort and perceived display quality in viewing an S3D movie. Results: Our results show that, in presenting details of small targets and in showing low-contrast stimuli, the Active system was significantly better than the Passive in 2D mode, but there was no significant difference between them in 3D mode. Subjects performed better on Passive than Active in 3D mode on a task requiring small vergence changes and quick re-acquisition of stereopsis – a skill related to vergence efficiency while viewing S3D displays. When viewing movies in 3D mode, there was no difference in symptoms of discomfort between Active and Passive systems. When the two systems were put side by side with selected 3D-movie scenes, all of the subjective measures of perceived display quality in 3D mode favored the Passive system, and 10 of 14 comparisons were statistically significant. The Passive system was rated significantly better for sense of immersion, motion smoothness, clarity, color, and 5 categories related to the glasses. Conclusion: Overall, participants felt that it was easier to look at the Passive system for a longer period than the Active system, and the Passive display was selected as the preferred display by 75% (p = 0.0000211) of the subjects.

  15. Improvement of wells turbine performance by means of 3D guide vanes; Sanjigen annai hane ni yoru wells turbine seino no kaizen

    Takao, M.; Kim, T.H. [Saga University, Saga (Japan); Setoguchi, T. [Saga University, Saga (Japan). Faculty of Science and Engineering; Inoue, M. [Kyushu University, Fukuoka (Japan). Faculty of Engineering

    2000-02-25

    The performance of a Wells turbine can be improved by installing guide vanes before and behind the rotor. For further improvement, 3D guide vanes are proposed in this paper. The performance of the Wells turbine with 2D and 3D guide vanes has been investigated experimentally by model testing under steady flow conditions. The running and starting characteristics in irregular ocean waves have then been obtained by computer simulation. As a result, it is found that both the running and starting characteristics of the Wells turbine with 3D guide vanes are superior to those of the turbine with 2D guide vanes. (author)

  16. Land surface temperature from INSAT-3D imager data: Retrieval and assimilation in NWP model

    Singh, Randhir; Singh, Charu; Ojha, Satya P.; Kumar, A. Senthil; Kishtawal, C. M.; Kumar, A. S. Kiran

    2016-06-01

    A new algorithm is developed for retrieving the land surface temperature (LST) from the imager radiance observations on board the geostationary operational Indian National Satellite (INSAT-3D). The algorithm is developed using the two thermal infrared channels (TIR1, 10.3-11.3 µm, and TIR2, 11.5-12.5 µm) via a genetic algorithm (GA). The transfer function that relates LST to the thermal radiances is developed using a radiative-transfer-model-simulated database. The developed algorithm has been applied to the INSAT-3D observed radiances, and the retrieved LST has been validated against the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product. The developed algorithm demonstrates good accuracy, with no significant bias and standard deviations of 1.78 K and 1.41 K during daytime and nighttime, respectively. The newly proposed algorithm performs better than the operational algorithm used for LST retrieval from the INSAT-3D satellite. Further, a set of data assimilation experiments is conducted with the Weather Research and Forecasting (WRF) model to assess the impact of INSAT-3D LST on model forecast skill over the Indian region. The assimilation experiments demonstrated a positive impact of the assimilated INSAT-3D LST, particularly on the lower-tropospheric temperature and moisture forecasts. The temperature and moisture forecast errors are reduced (by as much as 8-10%) with the assimilation of INSAT-3D LST, compared to forecasts obtained without it. Additional experiments comparing the two LST products, retrieved with the operational and the newly proposed algorithms, indicate that the impact of the INSAT-3D LST retrieved using the newly proposed algorithm is significantly larger than the impact of the LST retrieved using the operational algorithm.
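
    As a hedged sketch of the general idea (a regression-style transfer function linking two TIR brightness temperatures to LST), the Python snippet below fits a generic split-window form to simulated data by ordinary least squares; the coefficients, noise level and functional form are assumptions and are not the GA-derived INSAT-3D transfer function.

        import numpy as np

        rng = np.random.default_rng(0)
        T11 = rng.uniform(270.0, 320.0, 5000)            # TIR1 brightness temperature [K]
        T12 = T11 - rng.uniform(0.0, 3.0, 5000)          # TIR2, slightly colder
        lst_true = 1.0 + T11 + 2.1 * (T11 - T12) + rng.normal(0.0, 0.5, 5000)

        # fit LST ~ a0 + a1*T11 + a2*(T11 - T12)
        A = np.column_stack([np.ones_like(T11), T11, T11 - T12])
        coef, *_ = np.linalg.lstsq(A, lst_true, rcond=None)
        rmse = np.sqrt(np.mean((A @ coef - lst_true) ** 2))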

  17. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
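
    For readers unfamiliar with the superquadric primitive used above, the short Python sketch below generates a plain superellipsoid surface from size parameters a1-a3 and shape exponents e1 and e2 (close to an elliptical cylinder when e1 is small); the parameter values are illustrative assumptions, and the 25 clinically meaningful deformation parameters are not modelled.

        import numpy as np

        def superellipsoid(a1, a2, a3, e1, e2, n=40):
            eta = np.linspace(-np.pi / 2, np.pi / 2, n)      # latitude parameter
            omega = np.linspace(-np.pi, np.pi, 2 * n)        # longitude parameter
            eta, omega = np.meshgrid(eta, omega)
            f = lambda w, e: np.sign(w) * np.abs(w) ** e     # signed power
            x = a1 * f(np.cos(eta), e1) * f(np.cos(omega), e2)
            y = a2 * f(np.cos(eta), e1) * f(np.sin(omega), e2)
            z = a3 * f(np.sin(eta), e1)
            return x, y, z

        # roughly an elliptical cylinder, 30 x 30 x 20 (arbitrary units)
        x, y, z = superellipsoid(15.0, 15.0, 10.0, 0.3, 1.0)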

  18. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  19. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3d Structure Lines

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has been undergoing a revolution in the last few years. For such applications, a large number of images, acquired with high-resolution industrial cameras close to the bottom surfaces on a mobile platform, must be stitched into a wide-view single composite image. The conventional approach of stitching a panorama with an affine or homographic model suffers from serious problems caused by poor texture and by out-of-focus blurring introduced by the limited depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of the bridge bottom surfaces, which are extracted from 3D camera data. First, we initially align each image geometrically based on its rough position and orientation acquired with a laser range finder (LRF) and a high-precision incremental encoder, and the images are divided into several groups using these rough position and orientation data. Second, the 3D structure lines of the bridge bottom surfaces are extracted from the 3D point clouds acquired with 3D cameras; these impose strong additional constraints on the geometrical alignment of structure lines in adjacent images, allowing a position and orientation optimization within each group that increases local consistency. Third, a homographic refinement between groups is applied to increase global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly reduces luminance differences and color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of the proposed approach.
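
    As a minimal sketch of the homographic alignment idea used between image groups (assuming known point correspondences, which are synthetic placeholders here), a direct linear transform (DLT) estimate in Python might look as follows; this is a generic construction, not the authors' refinement procedure.

        import numpy as np

        def homography_dlt(src, dst):
            """Estimate a 3x3 homography H with dst ~ H @ src from >= 4 point pairs."""
            A = []
            for (x, y), (u, v) in zip(src, dst):
                A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
            H = Vt[-1].reshape(3, 3)               # null-space solution
            return H / H[2, 2]

        src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
        dst = np.array([[10, 12], [110, 10], [112, 115], [8, 118]], dtype=float)
        H = homography_dlt(src, dst)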

  20. Azimuth–opening angle domain imaging in 3D Gaussian beam depth migration

    Common-image gathers indexed by opening angle and azimuth at imaging points in 3D situations are the key inputs for amplitude-variation-with-angle and velocity analysis by tomography. The Gaussian beam depth migration, propagating each ray by a Gaussian beam form and summing the contributions from all the individual beams to produce the wavefield, can overcome the multipath problem, image steep reflectors and, even more important, provide a convenient and efficient strategy to extract azimuth–opening angle domain common-image gathers (ADCIGs) in 3D seismic imaging. We present a method for computing azimuth and opening angle at imaging points to output 3D ADCIGs by computing the source and receiver wavefield direction vectors which are restricted in the effective region of the corresponding Gaussian beams. In this paper, the basic principle of Gaussian beam migration (GBM) is briefly introduced; the technology and strategy to yield ADCIGs by GBM are analyzed. Numerical tests and field data application demonstrate that the azimuth–opening angle domain imaging method in 3D Gaussian beam depth migration is effective. (paper)
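
    A hedged sketch of the underlying geometry: given unit direction vectors of the source and receiver rays at an image point, the opening angle follows from their inner product and an azimuth can be taken from a horizontal projection; the convention below (azimuth of the offset-direction vector) and the coordinate frame are assumptions, and the beam-restricted estimation described above is not reproduced.

        import numpy as np

        def opening_angle_azimuth(p_s, p_r):
            """Ray-direction vectors of the source and receiver rays at the image point."""
            p_s = np.asarray(p_s, float); p_s = p_s / np.linalg.norm(p_s)
            p_r = np.asarray(p_r, float); p_r = p_r / np.linalg.norm(p_r)
            cosang = np.clip(np.dot(p_s, p_r), -1.0, 1.0)
            opening = np.degrees(np.arccos(cosang))          # angle between the two rays
            d = p_r - p_s                                    # offset-direction vector
            azimuth = np.degrees(np.arctan2(d[1], d[0]))     # one common azimuth convention
            return opening, azimuth

        opening, azimuth = opening_angle_azimuth([0.3, 0.1, 0.95], [-0.2, 0.4, 0.89])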

  1. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance during MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV, respectively. In conclusion, AV biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.

  2. Comparison of S3D Display Technology on Image Quality and Viewing Experiences: Active-Shutter 3d TV vs. Passive-Polarized 3DTV

    Yu-Chi Tai, PhD; Leigh Gongaware, BS; Andrew Reder, BS; John Hayes, PhD; James Sheedy, OD, PhD

    2014-01-01

    Background: Stereoscopic 3D TV systems convey depth perception to the viewer by delivering to each eye separately filtered images that represent two slightly different perspectives. Currently two primary technologies are used in S3D televisions: Active shutter systems, which use alternate frame sequencing to deliver a full-frame image to one eye at a time at a fast refresh rate, and Passive polarized systems, which superimpose the two half-frame left-eye and right-eye images at th...

  3. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they describe only the properties of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering; we focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  4. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  5. Dynamic diffraction-limited light-coupling of 3D-maneuvered wave-guided optical waveguides

    Villangca, Mark Jayson; Bañas, Andrew Rafael; Palima, Darwin;

    2014-01-01

    We have previously proposed and demonstrated the targeted-light delivery capability of wave-guided optical waveguides (WOWs). As the WOWs are maneuvered in 3D space, it is important to maintain efficient light coupling through the waveguides within their operating volume. We propose the use of...

  6. Full data utilization in PVI [positron volume imaging] using the 3D radon transform

    An algorithm is described for three-dimensional (3D) image reconstruction in positron volume imaging (PVI) using the inversion of the 3D radon transform (RT) for a truncated cylindrical detector geometry. This single-pass reconstruction image has better statistical noise properties than images formed by RT inversion from complete XT projections, but only for some detector geometries is it significantly better. Monte Carlo simulations were used to study the statistical noise in images reconstructed using the new algorithm. The inherent difference in the axial versus the transaxial statistical noise in images reconstructed from truncated detectors is noted and is found to increase by including oblique events with this new algorithm. (author)

  7. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    2001-01-01

    A mutual information based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR body abdomen images. The Parzen Windows Density Estimation (PWDE) method is adopted to calculate the mutual information between the CT and MR abdomen images. By maximizing the MI between the CT and MR volume images, their overlap is maximized, which means that the two body images of CT and MR match each other best. Visible Human Project (VHP) male abdomen CT and MRI data are used as the experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically, without any prior processing such as segmentation and feature extraction, but with the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
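
    As a hedged illustration of the similarity term only, the Python snippet below computes the mutual information of two synthetic volumes from a joint histogram; a histogram estimate is used here in place of the Parzen-window estimate described above, and the bin count and stand-in volumes are assumptions.

        import numpy as np

        def mutual_information(a, b, bins=64):
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p_ab = hist / hist.sum()                     # joint distribution
            p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of a
            p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of b
            nz = p_ab > 0
            return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

        ct = np.random.rand(32, 32, 32)                          # stand-in CT volume
        mr = 0.5 * ct + 0.5 * np.random.rand(32, 32, 32)         # stand-in MR volume
        mi = mutual_information(ct, mr)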

  8. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges with important applications in many research fields, ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through a turbid layer have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging and reconstruct the complex amplitude of objects situated at different depths.

  9. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets), but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of the alpha particles, but this could be time consuming. Here we describe a new method for rapid high-resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  10. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    Murienne, Barbara J.; Nguyen, Thao D.

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification (and therefore small depth of field) or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single-camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than for 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the horizontal displacement error was larger for 2D-DIC than for 3D-DIC. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. These findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  11. Measurement of facial soft tissues thickness using 3D computed tomographic images

    To evaluate the accuracy and reliability of a program for measuring facial soft tissue thickness on 3D computed tomographic images by comparison with direct measurement. One cadaver was scanned with a helical CT at a 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed with a 1.5 mm reconstruction interval, and the images were transferred to a personal computer. Facial soft tissue thickness was measured on the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed over the cut surface. Pictures of the facial soft tissues were then taken with a high-resolution digital camera, the measurements were made on the photographic images, and the procedure was repeated ten times. A repeated-measures analysis of variance was adopted to compare and analyze the measurements resulting from the two different methods. Comparisons according to area were analyzed with the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those made on the 3D images (p>0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The newly developed software program for measuring facial soft tissue thickness on 3D images was accurate enough to allow facial soft tissue thickness to be measured more easily in forensic science and anthropology.

  12. 3D nonrigid medical image registration using a new information theoretic measure

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; the objective is minimized using the limited memory BFGS optimization method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data set covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy.

  13. 3D nonrigid medical image registration using a new information theoretic measure

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; the objective is minimized using the limited memory BFGS optimization method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data set covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy. (paper)

  14. 2D and 3D visualization methods of endoscopic panoramic bladder images

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons to identify geometrically distorted structures easily in the panoramic image, which allows more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
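
    To make the mapping step concrete, the Python snippet below evaluates the forward Hammer-Aitoff equal-area projection, which takes longitude/latitude on a sphere to planar texture coordinates; the grid resolution is an arbitrary assumption and the bladder-specific reference-point handling is not shown.

        import numpy as np

        def hammer_aitoff(lon, lat):
            """lon in [-pi, pi], lat in [-pi/2, pi/2] (radians) -> planar (x, y)."""
            denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
            x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
            y = np.sqrt(2.0) * np.sin(lat) / denom
            return x, y

        lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, 256),
                               np.linspace(-np.pi / 2, np.pi / 2, 128))
        x, y = hammer_aitoff(lon, lat)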

  15. Lensfree Optical Tomography for High-Throughput 3D Imaging on a Chip

    ISIKMAN, SERHAN OMER

    2012-01-01

    Light microscopes provide us with the key to observe objects that are orders of magnitude smaller than what the unaided eye can see. Therefore, microscopy has been the cornerstone of science and medicine for centuries. Recently, optical microscopy has seen a growing interest in developing three-dimensional (3D) imaging techniques that enable sectional imaging of biological specimen. These imaging techniques, however, are generally quite complex, bulky and expensive in addition to having a lim...

  16. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architectures and heritages is getting very common and required in different application contexts. The potentialities of the image-based approach are nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  17. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, the behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  18. 3D high spectral and spatial resolution imaging of ex vivo mouse brain

    Purpose: Widely used MRI methods show brain morphology both in vivo and ex vivo at very high resolution. Many of these methods (e.g., T2*-weighted imaging, phase-sensitive imaging, or susceptibility-weighted imaging) are sensitive to local magnetic susceptibility gradients produced by subtle variations in tissue composition. However, the spectral resolution of commonly used methods is limited to maintain reasonable run-time combined with very high spatial resolution. Here, the authors report on data acquisition at increased spectral resolution, with 3-dimensional high spectral and spatial resolution MRI, in order to analyze subtle variations in water proton resonance frequency and lineshape that reflect local anatomy. The resulting information complements previous studies based on T2* and resonance frequency. Methods: The proton free induction decay was sampled at high resolution and Fourier transformed to produce a high-resolution water spectrum for each image voxel in a 3D volume. Data were acquired using a multigradient echo pulse sequence (i.e., echo-planar spectroscopic imaging) with a spatial resolution of 50 × 50 × 70 μm3 and spectral resolution of 3.5 Hz. Data were analyzed in the spectral domain, and images were produced from the various Fourier components of the water resonance. This allowed precise measurement of local variations in water resonance frequency and lineshape, at the expense of significantly increased run time (16–24 h). Results: High contrast T2*-weighted images were produced from the peak of the water resonance (peak height image), revealing a high degree of anatomical detail, specifically in the hippocampus and cerebellum. In images produced from Fourier components of the water resonance at −7.0 Hz from the peak, the contrast between deep white matter tracts and the surrounding tissue is the reverse of the contrast in water peak height images. This indicates the presence of a shoulder in the water resonance that is not present at
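
    As a hedged, purely illustrative sketch of the processing idea (not the authors' pipeline), the Python snippet below Fourier-transforms a synthetic free induction decay for every voxel and forms a peak-height image and a peak-frequency index; the matrix size, dwell time and decay constant are arbitrary assumptions.

        import numpy as np

        nx, ny, nz, nt = 16, 16, 8, 128                 # small synthetic volume, 128 FID samples
        t = np.arange(nt) / 500.0                       # 2 ms dwell time -> ~3.9 Hz resolution
        off_res = np.random.uniform(-5.0, 5.0, (nx, ny, nz, 1))   # local off-resonance [Hz]
        fid = np.exp(2j * np.pi * off_res * t) * np.exp(-t / 0.05)

        spectra = np.fft.fftshift(np.fft.fft(fid, axis=-1), axes=-1)
        peak_height = np.abs(spectra).max(axis=-1)      # T2*-weighted-like image
        peak_bin = np.abs(spectra).argmax(axis=-1)      # index of the local resonance frequency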

  19. 3D high spectral and spatial resolution imaging of ex vivo mouse brain

    Foxley, Sean, E-mail: sean.foxley@ndcn.ox.ac.uk; Karczmar, Gregory S. [Department of Radiology, University of Chicago, Chicago, Illinois 60637 (United States); Domowicz, Miriam [Department of Pediatrics, University of Chicago, Chicago, Illinois 60637 (United States); Schwartz, Nancy [Department of Pediatrics, Department of Biochemistry and Molecular Biology, University of Chicago, Chicago, Illinois 60637 (United States)

    2015-03-15

    Purpose: Widely used MRI methods show brain morphology both in vivo and ex vivo at very high resolution. Many of these methods (e.g., T2*-weighted imaging, phase-sensitive imaging, or susceptibility-weighted imaging) are sensitive to local magnetic susceptibility gradients produced by subtle variations in tissue composition. However, the spectral resolution of commonly used methods is limited to maintain reasonable run-time combined with very high spatial resolution. Here, the authors report on data acquisition at increased spectral resolution, with 3-dimensional high spectral and spatial resolution MRI, in order to analyze subtle variations in water proton resonance frequency and lineshape that reflect local anatomy. The resulting information complements previous studies based on T2* and resonance frequency. Methods: The proton free induction decay was sampled at high resolution and Fourier transformed to produce a high-resolution water spectrum for each image voxel in a 3D volume. Data were acquired using a multigradient echo pulse sequence (i.e., echo-planar spectroscopic imaging) with a spatial resolution of 50 × 50 × 70 μm3 and spectral resolution of 3.5 Hz. Data were analyzed in the spectral domain, and images were produced from the various Fourier components of the water resonance. This allowed precise measurement of local variations in water resonance frequency and lineshape, at the expense of significantly increased run time (16–24 h). Results: High contrast T2*-weighted images were produced from the peak of the water resonance (peak height image), revealing a high degree of anatomical detail, specifically in the hippocampus and cerebellum. In images produced from Fourier components of the water resonance at −7.0 Hz from the peak, the contrast between deep white matter tracts and the surrounding tissue is the reverse of the contrast in water peak height images. This indicates the presence of a shoulder in

  20. Computer assisted determination of acetabular cup orientation using 2D-3D image registration

    2D-3D image-based registration methods have been developed to measure acetabular cup orientation after total hip arthroplasty (THA). These methods require registration of both the prosthesis and the CT images to 2D radiographs and compute implant position with respect to a reference. The application of these methods is limited in clinical practice due to two limitations: (1) the requirement of a computer-aided design (CAD) model of the prosthesis, which may be unavailable due to the proprietary concerns of the manufacturer, and (2) the requirement of either multiple radiographs or radiograph-specific calibration, usually unavailable for retrospective studies. In this paper, we propose a new method to address these limitations. A new formulation for determination of post-operative cup orientation, which couples a radiographic measurement with 2D-3D image matching, was developed. In our formulation, the radiographic measurement can be obtained with known methods so that the challenge lies in the 2D-3D image matching. To solve this problem, a hybrid 2D-3D registration scheme combining a landmark-to-ray 2D-3D alignment with a robust intensity-based 2D-3D registration was used. The hybrid 2D-3D registration scheme allows computing both the post-operative cup orientation with respect to an anatomical reference and the pelvic tilt and rotation with respect to the X-ray imaging table/plate. The method was validated using 2D adult cadaver hips. Using the hybrid 2D-3D registration scheme, our method showed a mean accuracy of 1.0° ± 0.7° (range from 0.1° to 2.0°) for inclination and 1.7° ± 1.2° (range from 0.0° to 3.9°) for anteversion, taking the measurements from post-operative CT images as ground truths. Our new solution formulation and the hybrid 2D-3D registration scheme facilitate estimation of post-operative cup orientation and measurement of pelvic tilt and rotation. (orig.)

  1. Computer assisted determination of acetabular cup orientation using 2D-3D image registration

    Zheng, Guoyan; Zhang, Xuan [University of Bern, Institute for Surgical Technology and Biomechanics, Bern (Switzerland)

    2010-09-15

    2D-3D image-based registration methods have been developed to measure acetabular cup orientation after total hip arthroplasty (THA). These methods require registration of both the prosthesis and the CT images to 2D radiographs and compute implant position with respect to a reference. The application of these methods is limited in clinical practice due to two limitations: (1) the requirement of a computer-aided design (CAD) model of the prosthesis, which may be unavailable due to the proprietary concerns of the manufacturer, and (2) the requirement of either multiple radiographs or radiograph-specific calibration, usually unavailable for retrospective studies. In this paper, we propose a new method to address these limitations. A new formulation for determination of post-operative cup orientation, which couples a radiographic measurement with 2D-3D image matching, was developed. In our formulation, the radiographic measurement can be obtained with known methods so that the challenge lies in the 2D-3D image matching. To solve this problem, a hybrid 2D-3D registration scheme combining a landmark-to-ray 2D-3D alignment with a robust intensity-based 2D-3D registration was used. The hybrid 2D-3D registration scheme allows computing both the post-operative cup orientation with respect to an anatomical reference and the pelvic tilt and rotation with respect to the X-ray imaging table/plate. The method was validated using 2D adult cadaver hips. Using the hybrid 2D-3D registration scheme, our method showed a mean accuracy of 1.0° ± 0.7° (range from 0.1° to 2.0°) for inclination and 1.7° ± 1.2° (range from 0.0° to 3.9°) for anteversion, taking the measurements from post-operative CT images as ground truths. Our new solution formulation and the hybrid 2D-3D registration scheme facilitate estimation of post-operative cup orientation and measurement of pelvic tilt and rotation. (orig.)

  2. Note: An improved 3D imaging system for electron-electron coincidence measurements

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen, E-mail: wli@chem.wayne.edu [Department of Chemistry, Wayne State University, Detroit, Michigan 48202 (United States)

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  3. Correlative 3D imaging of Whole Mammalian Cells with Light and Electron Microscopy

    Murphy, Gavin E.; Narayan, Kedar; Lowekamp, Bradley C.; Hartnell, Lisa M.; Heymann, Jurgen A. W.; Fu, Jing; Subramaniam, Sriram

    2011-01-01

    We report methodological advances that extend the current capabilities of ion-abrasion scanning electron microscopy (IA–SEM), also known as focused ion beam scanning electron microscopy, a newly emerging technology for high resolution imaging of large biological specimens in 3D. We establish protocols that enable the routine generation of 3D image stacks of entire plastic-embedded mammalian cells by IA-SEM at resolutions of ~10 to 20 nm at high contrast and with minimal artifacts from the foc...

  4. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingl...

  5. A new method of 3D scene recognition from still images

    Zheng, Li-ming; Wang, Xing-song

    2014-04-01

    Most methods of monocular visual three-dimensional (3D) scene recognition involve supervised machine learning. However, these methods often rely on prior knowledge: they learn the image scene as part of a training dataset. For this reason, when the sampling equipment or the scene is changed, monocular visual 3D scene recognition may fail. To cope with this problem, a new unsupervised learning method for monocular visual 3D scene recognition is proposed here. First, the image is segmented into superpixels based on the CIELAB color space values L, a, and b and on the pixel coordinates x and y, forming a superpixel image with a specific density. Second, a spectral clustering algorithm based on the superpixels' color characteristics and neighboring relationships is used to reduce the dimensions of the superpixel image. Third, fuzzy distribution density functions representing sky, ground, and façade are applied to the segmented pixels to obtain the expectation for each segment, yielding a preliminary classification of sky, ground, and façade. Fourth, the most accurate classification images of sky, ground, and façade are extracted using tier-1 wavelet sampling and the Manhattan direction feature. Finally, a depth perception map is generated based on the pinhole imaging model and the linear perspective information of the ground surface. Here, 400 images from the Make3D Image dataset from the Cornell University website were used to test the algorithm. The experimental results showed that this unsupervised learning method provides a more effective monocular visual 3D scene recognition model than other methods.
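
    As a small, hedged sketch of the first step only (superpixel segmentation on CIELAB values and pixel coordinates), the Python snippet below uses scikit-image's SLIC implementation as a stand-in; the segment count, compactness and random test image are assumptions, and the later clustering and fuzzy-classification stages are not reproduced.

        import numpy as np
        from skimage.segmentation import slic

        image = np.random.rand(240, 320, 3)                      # stand-in RGB still image
        labels = slic(image, n_segments=400, compactness=10)     # works in CIELAB internally
        n_superpixels = len(np.unique(labels))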

  6. Web tools for large-scale 3D biological images and atlases

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.

  7. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for the head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on the features of the similarity surface near the point-of-solution. The supervised training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimizing attempts that differed only by the initial transformation parameters. The distance from each individual solution to the best solution in the normalized parameter space was compared to a user-defined error threshold to determine whether that solution was successful or not. The supervised training data were then used to train the RQE. The performance of the RQE was evaluated using the test data set, which consisted of registration results that were not used in training. Results: The RQE constructed using the mutual information had very good performance when tested using the test data sets, yielding the sensitivity, the specificity, the positive predictive value, and the negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in the overall processing time. Conclusions: The authors' patient study has demonstrated very good performance of the proposed RQE when used with mutual information to identify unsuccessful 3D/3D registrations for daily patient setup. The classifier had
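
    As a hedged sketch of the classifier idea (a small two-layer feed-forward network labelling registrations as successful or not), the Python snippet below trains scikit-learn's MLPClassifier on random placeholder features; the feature definition, labels and hidden-layer size are assumptions and do not reproduce the similarity-surface features described above.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))                    # placeholder similarity-surface features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = successful, 0 = unsuccessful (synthetic)

        rqe = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        rqe.fit(X[:400], y[:400])
        accuracy = rqe.score(X[400:], y[400:])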

  8. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
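
    The scalability argument above rests on Amdahl's law; the tiny Python helper below evaluates the ideal bound for a hypothetical parallel fraction of the 3D-MIP workload (the fraction 0.95 is an assumption, not a measured value).

        def amdahl_speedup(p, n_cores):
            """Ideal speedup when a fraction p of the work is parallelizable."""
            return 1.0 / ((1.0 - p) + p / n_cores)

        for n in (1, 4, 12, 48):
            print(n, round(amdahl_speedup(0.95, n), 2))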

  9. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    Wouterlood, Floris G

    2014-01-01

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320

  10. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, the volume first has to be vectorized into a vector pattern by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. To respect the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855, respectively. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM. PMID:27277277
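
    For readers unfamiliar with HOSVD, the Python sketch below computes a truncated Tucker-style HOSVD of a small synthetic volume via mode-n unfoldings and SVDs; the volume, ranks and helper names unfold and hosvd are illustrative assumptions, and the AAM-specific appearance statistics are not shown.

        import numpy as np

        def unfold(T, mode):
            """Mode-n unfolding of a tensor into a matrix."""
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd(T, ranks):
            """Truncated HOSVD: factor matrices from mode-n SVDs, then the core tensor."""
            factors = []
            for mode, r in enumerate(ranks):
                U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                factors.append(U[:, :r])
            core = T
            for mode, U in enumerate(factors):
                core = np.moveaxis(np.tensordot(core, U.T, axes=([mode], [1])), -1, mode)
            return core, factors

        volume = np.random.rand(32, 32, 16)              # stand-in lung volume
        core, factors = hosvd(volume, ranks=(8, 8, 4))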

  11. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasonic image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which extracts a connected skeleton to express the shape of the imaged structure. Feature points of the connected skeleton are extracted automatically by repeatedly locating points of local curvature extrema. Initial registration is performed according to the barycenter of the skeleton. Elastic registration based on radial basis functions is then performed using the feature points of the skeleton. Results on example data demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural difference in shape between different parts of an organ while simultaneously eliminating the slight elastic deformations between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.

  12. Three-dimensional imaging using computer-generated holograms synthesized from 3-D Fourier spectra

    Yatagai, Toyohiko; Miura, Ken-ichi; Sando, Yusuke; Itoh, Masahide [University of Tsukba, Institute of Applied Physics, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan)], E-mail: yatagai@cc.utsunomiya-u.ac.jp

    2008-11-01

    Computer-generated holograms (CGHs) synthesized from projection images of real existing objects are considered. A series of projection images is recorded both vertically and horizontally with an incoherent light source and a color CCD. According to the principles of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and the Fresnel CGH is synthesized using a part of the 3-D Fourier spectrum. This method has the following advantages. First, reconstructed images free of blur in any direction are obtained, owing to the two-dimensional scanning used in recording. Second, since simple projection images of the objects, rather than interference fringes, are recorded, a coherent light source is not necessary. Moreover, when a color CCD is used in recording, it is easily possible to record and reconstruct colorful objects. Finally, we demonstrate the reconstruction of biological objects.

  14. A web-based solution for 3D medical image visualization

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we describe a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web. To improve efficiency, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring the rendered images to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever an internet connection is available.
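
    A minimal sketch of the server side of such a design, assuming a Flask server and a hypothetical /slice/<z> endpoint (the paper's GPU rendering pipeline and PACS integration are not reproduced): the volume stays on the server and the HTML5 client only receives rendered 2D images.

```python
import io
import numpy as np
from flask import Flask, send_file
from PIL import Image

app = Flask(__name__)
VOLUME = np.random.randint(0, 255, (128, 256, 256), dtype=np.uint8)  # placeholder CT volume

@app.route("/slice/<int:z>")
def slice_png(z):
    z = max(0, min(z, VOLUME.shape[0] - 1))
    img = Image.fromarray(VOLUME[z])          # render one axial slice server-side
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run()
```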

  15. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame must be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) algorithm provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which reliably captures the image statistics of log-compressed ultrasound images, is used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the use of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
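
    For reference, a plain CPU sketch of the underlying NLM weighting on a 3D volume (the paper's contribution, the block-wise variant with a Gamma noise model and the GPU data-parallel implementation, is not reproduced here; parameters are illustrative).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlm_denoise_3d(vol, patch=1, search=2, h=0.1):
    """Basic non-local means for a 3D volume: every voxel is replaced by a
    weighted average of voxels in a (2*search+1)^3 window, weighted by the
    similarity of the surrounding (2*patch+1)^3 patches."""
    volf = vol.astype(np.float64)
    pad = search
    padded = np.pad(volf, pad, mode="reflect")
    out = np.zeros_like(volf)
    norm = np.zeros_like(volf)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = padded[pad + dz:pad + dz + volf.shape[0],
                                 pad + dy:pad + dy + volf.shape[1],
                                 pad + dx:pad + dx + volf.shape[2]]
                # Patch distance: locally averaged squared difference.
                dist = uniform_filter((volf - shifted) ** 2, size=2 * patch + 1)
                w = np.exp(-dist / (h * h))
                out += w * shifted
                norm += w
    return out / norm

# Example on a small noisy volume.
vol = np.random.rand(32, 32, 32)
den = nlm_denoise_3d(vol, patch=1, search=2, h=0.2)
```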

  16. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side by side and optical-path-switching systems such as two separate solid lenses or biprisms/mirrors. Shrinking the overall size of current stereoscopic devices down to several millimeters comes at a price: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offer good reconfigurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focal-power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.
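
    The depth perception provided by the binocular pair follows the standard stereo triangulation relation; a tiny sketch with purely illustrative numbers (the focal length in pixels and the lens baseline are assumptions, not values from the paper).

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Z = f * B / d for a rectified stereo pair (depth in the baseline's units)."""
    return focal_px * baseline_mm / disparity_px

# e.g. an assumed 800-pixel focal length, ~7 mm lens separation, 12-pixel disparity:
print(depth_from_disparity(12.0, focal_px=800.0, baseline_mm=7.0))  # ~467 mm
```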

  17. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
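
    A highly simplified sketch of the MCR idea using alternating least squares with non-negativity clipping (the deployed MCR algorithms include additional constraints and preprocessing not shown here; the synthetic data are illustrative).

```python
import numpy as np

def mcr_als(D, n_components, n_iter=100, seed=0):
    """Factor D (pixels x wavelengths) into non-negative concentration maps C
    and emission spectra S, with D ~ C @ S.T, by alternating least squares."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))
    for _ in range(n_iter):
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    return C, S

# Synthetic demo: two overlapping emitters across 10000 voxels x 64 spectral bands.
rng = np.random.default_rng(1)
true_S = np.stack([np.exp(-0.5 * ((np.arange(64) - mu) / 6.0) ** 2) for mu in (25, 35)], axis=1)
true_C = rng.random((10000, 2))
D = true_C @ true_S.T + 0.01 * rng.standard_normal((10000, 64))
C, S = mcr_als(D, n_components=2)
print(C.shape, S.shape)   # concentration maps and resolved spectra
```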

  18. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.
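
    Illustrative only: the basic quantity behind the Doppler signatures is a fluctuation power spectrum of the dynamically scattered intensity. The sketch below computes such a spectrum for a synthetic single-pixel time series (the frame rate, signal model and spectral interpretation are assumptions, not taken from the talk).

```python
import numpy as np
from scipy.signal import welch

fs = 25.0                                   # assumed camera frame rate in Hz
t = np.arange(0, 40.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic stand-in for the speckle intensity fluctuations at one pixel.
intensity = 1.0 + 0.05 * np.cumsum(rng.standard_normal(t.size)) / np.sqrt(fs)

freqs, power = welch(intensity - intensity.mean(), fs=fs, nperseg=256)
# Tissue-dynamics spectroscopy would track how this spectrum changes after a
# compound is applied (e.g. suppression of the higher-frequency components).
print(freqs.shape, power.shape)
```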

  19. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach’s performance is evaluated through experiments on both simulated and real data.

  20. An active system for visually-guided reaching in 3D across binocular fixations.

    Martinez-Martin, Ester; del Pobil, Angel P; Chessa, Manuela; Solari, Fabio; Sabatini, Silvio P

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
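
    A single-scale sketch of the phase-based disparity step (the paper uses a full bank of Gabor filters over several scales and orientations; the wavelength and envelope width below are illustrative).

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_phase_shift(img_a, img_b, wavelength=8.0, sigma=4.0):
    """Estimate the horizontal shift between two views from the phase
    difference of complex Gabor responses (single scale and orientation)."""
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * x / wavelength)
    gabor = gabor[np.newaxis, :]                     # horizontal 1-D filter
    ra = fftconvolve(img_a, gabor, mode="same")
    rb = fftconvolve(img_b, gabor, mode="same")
    dphi = np.angle(ra * np.conj(rb))                # phase difference in (-pi, pi]
    return dphi * wavelength / (2.0 * np.pi)         # shift in pixels

# Sanity check: a random texture shifted horizontally by 3 pixels.
rng = np.random.default_rng(0)
tex = rng.random((64, 128))
view_a, view_b = tex[:, 3:], tex[:, :-3]
print(np.median(gabor_phase_shift(view_a, view_b)))  # roughly 3
```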

  1. Automated 3D-Objectdocumentation on the Base of an Image Set

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows users to evaluate the spatial dimensions and the surface texture of objects automatically. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide subpixel-precision measurements of corresponding image points. With the help of an automated point search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identified identical points in all images form the basis for calculating the relative orientation of each stereo model and for defining the relation between neighbouring stereo models. Proper filter strategies remove incorrect points, so that the relative orientation of each stereo model can be computed automatically. With the help of 3D reference points, distances on the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm makes it possible to scan the object surface automatically. The result is a three-dimensional point cloud whose resolution depends on image quality. By integrating the iterative closest point algorithm (ICP), these partial point clouds are fitted into a complete point cloud, so that 3D reference points are not necessary. With the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be performed automatically using the images that were acquired for scanning the object surface; the surface model can be textured directly or orthophotos can be generated automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be achieved. A major advantage is the ability to control the accuracy and quality of the 3D object documentation through the resolution of the images.
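
    A bare-bones sketch of the ICP merging step described above (point-to-point variant with SVD-based rigid alignment; real pipelines add outlier rejection and a coarse pre-alignment).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=50):
    """Rigidly align `source` (N,3) onto `target` (M,3) by iterating
    closest-point correspondences and a Kabsch (SVD) rigid fit.
    Returns the accumulated rotation R, translation t and the moved points."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)               # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

# Usage: align a slightly rotated, shifted copy of a random cloud onto the original.
rng = np.random.default_rng(0)
target = rng.random((500, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
R, t, aligned = icp(target @ Rz.T + 0.05, target)
```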

  2. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  3. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718
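
    The paper's two network-filtering methods are not reproduced here; purely as an illustration of the idea of pruning a dense network while preserving coverage, a hypothetical greedy filter could look like the sketch below (the visibility matrix and the minimum-view constraint are assumptions).

```python
import numpy as np

def greedy_reduce(visibility, min_views=3):
    """visibility: boolean (n_images, n_points) matrix, True if the object point
    is measurable in that image.  Repeatedly drop the least-informative image as
    long as every point stays visible in at least `min_views` remaining images."""
    keep = list(range(visibility.shape[0]))
    changed = True
    while changed:
        changed = False
        for i in sorted(keep, key=lambda k: visibility[k].sum()):
            trial = [k for k in keep if k != i]
            if (visibility[trial].sum(axis=0) >= min_views).all():
                keep = trial
                changed = True
                break
    return keep

rng = np.random.default_rng(0)
vis = rng.random((45, 500)) > 0.6          # 45 images, 500 object points
print(len(greedy_reduce(vis)))             # typically far fewer than 45
```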

  4. Joint Multichannel Motion Compensation Method for MIMO SAR 3D Imaging

    Ze-min Yang

    2015-01-01

    Full Text Available The multiple-input-multiple-output (MIMO) synthetic aperture radar (SAR) system with a linear antenna array can obtain 3D resolution. In practice, it suffers from both translational and rotational motion errors. Conventional single-channel motion compensation methods can be used to compensate the motion errors channel by channel. However, this approach might not be accurate enough for all channels. Moreover, the single-channel compensation may break the coherence among channels, which causes defocusing and false targets. In this paper, both the translational and the rotational motion errors are discussed, and a joint multichannel motion compensation method is proposed for MIMO SAR 3D imaging. Simulations demonstrate that the proposed method outperforms the conventional methods in accuracy, and the final MIMO SAR 3D imaging simulation confirms the validity of the proposed algorithm.
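
    An illustrative sketch of a first-order translational correction applied jointly, i.e. with one common platform-derived phase term for all channels, which is what preserves the inter-channel coherence. The carrier frequency, data sizes and sinusoidal sway are assumptions, and the paper's handling of rotational and channel-dependent errors is not reproduced.

```python
import numpy as np

C_LIGHT = 3.0e8
FC = 10.0e9                        # assumed X-band carrier
WAVELENGTH = C_LIGHT / FC

def compensate_translation(raw, range_error):
    """raw: complex data cube (n_channels, n_pulses, n_range_bins).
    range_error: line-of-sight displacement per pulse in metres (n_pulses,).
    The same correction is applied to every channel so the coherence needed
    for 3D imaging is not broken by per-channel processing."""
    phase = np.exp(1j * 4.0 * np.pi * range_error / WAVELENGTH)   # two-way path error
    return raw * phase[np.newaxis, :, np.newaxis]

rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 128, 256)) + 1j * rng.standard_normal((4, 128, 256))
sway = 0.002 * np.sin(np.linspace(0.0, 2.0 * np.pi, 128))         # mm-level platform sway
print(compensate_translation(raw, sway).shape)                     # (4, 128, 256)
```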

  5. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time, but caused more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, large lesions, and lesions near critical structures. PMID:27126243

  6. X-ray scattering in the elastic regime as source for 3D imaging reconstruction technique

    Kocifaj, Miroslav; Mego, Michal

    2015-11-01

    X-ray beams propagate across a target object before being projected onto a regularly spaced array of detectors to produce a routine X-ray image. A 3D attenuation coefficient distribution is obtained by tomographic reconstruction, in which scattering is usually regarded as a source of parasitic signals that raise the level of electromagnetic noise and are difficult to eliminate. However, the elastically scattered radiation can be a valuable source of information, because it provides a 3D topology of electron densities and thus contributes significantly to the optical characterization of the scanned object. The scattering and attenuation data form a complementary basis for the concurrent retrieval of both the electron density and the attenuation coefficient distributions. In this paper we develop a 3D reconstruction method that combines both data inputs and produces better image resolution than traditional techniques.
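
    The attenuation-only part of the problem is ordinary tomographic reconstruction; a per-slice filtered back-projection sketch with scikit-image follows (the paper's combined attenuation-plus-elastic-scattering retrieval is not reproduced here).

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Simulate attenuation projections of a phantom slice and reconstruct the
# attenuation-coefficient distribution by filtered back-projection.
image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)             # simulated attenuation projections
reconstruction = iradon(sinogram, theta=theta)   # attenuation-coefficient slice
print(np.abs(reconstruction - image).mean())
```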

  7. Fast isotropic banding-free bSSFP imaging using 3D dynamically phase-cycled radial bSSFP (3D DYPR-SSFP)

    Aims: Dynamically phase-cycled radial balanced steady-state free precession (DYPR-SSFP) is a method for efficient banding artifact removal in bSSFP imaging. Based on a varying radiofrequency (RF) phase-increment in combination with a radial trajectory, DYPR-SSFP allows obtaining a banding-free image out of a single acquired k-space. The purpose of this work is to present an extension of this technique, enabling fast three-dimensional isotropic banding-free bSSFP imaging. Methods: While banding artifact removal with DYPR-SSFP relies on the applied dynamic phase-cycle, this aspect can lead to artifacts, at least when the number of acquired projections lies below a certain limit. However, by using a 3D radial trajectory with quasi-random view ordering for image acquisition, this problem is intrinsically solved, enabling 3D DYPR-SSFP imaging at or even below the Nyquist criterion. The approach is validated for brain and knee imaging at 3 Tesla. Results: Volumetric, banding-free images were obtained in clinically acceptable scan times with an isotropic resolution up to 0.56 mm. Conclusion: The combination of DYPR-SSFP with a 3D radial trajectory allows banding-free isotropic volumetric bSSFP imaging with no expense of scan time. Therefore, this is a promising candidate for