WorldWideScience

Sample records for 3d image guided

  1. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo

    Brian Long; Lu Li; Ulf Knoblich; Hongkui Zeng; Hanchuan Peng

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments.

  2. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). (paper)
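
    The abstract reports that normalized mutual information (NMI) gave the most accurate 3D US registrations for calibration. As an illustration only, the NumPy sketch below computes NMI between two volumes from a joint intensity histogram; the array names, sizes, and bin count are assumptions, not part of the original work.

      import numpy as np

      def normalized_mutual_information(vol_a, vol_b, bins=64):
          """NMI = (H(A) + H(B)) / H(A, B), computed from a joint histogram."""
          joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
          pxy = joint / joint.sum()                  # joint probability
          px = pxy.sum(axis=1)                       # marginal of A
          py = pxy.sum(axis=0)                       # marginal of B
          nz = pxy > 0
          h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))  # joint entropy
          h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
          h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
          return (h_x + h_y) / h_xy

      # Example with two overlapping US sub-volumes (placeholders).
      a = np.random.rand(64, 64, 64)
      b = a + 0.05 * np.random.rand(64, 64, 64)      # a noisy copy should score high
      print(normalized_mutual_information(a, b))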

  3. Hands-on guide for 3D image creation for geological purposes

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist present his or her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
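
    Assembling a red-cyan anaglyph from a stereo pair amounts to taking the red channel from the left photograph and the green/blue channels from the right one. The sketch below is a minimal Python illustration of that rule of thumb; the file names are placeholders, and the abstract's own workflow uses Stereophotomaker rather than code.

      import numpy as np
      from PIL import Image

      # Load a stereo pair shot a few centimetres apart (placeholder file names).
      left = np.asarray(Image.open("left.jpg").convert("RGB"), dtype=np.uint8)
      right = np.asarray(Image.open("right.jpg").convert("RGB"), dtype=np.uint8)

      # Red channel from the left eye, green and blue channels from the right eye.
      anaglyph = right.copy()
      anaglyph[..., 0] = left[..., 0]

      Image.fromarray(anaglyph).save("anaglyph_red_cyan.jpg")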

  4. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    Ren, Yu-Feng; Cao, Xin-Ping; Xu, Jia; Ye, Wei-Jun; Gao, Yuan-Hong; Teh, Bin S.; Wen, Bi-Xiu

    2013-01-01

    Background To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as a salvage treatment of intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Methods Thirty-two patients with locally persistent NPC after full dose of IMRT were evaluated retrospectively. 3D-image-guided HDR-BT treatment plan was performed on a 3D treatment planning system (PLATO BPS 14.2). The...

  5. A small animal image guided irradiation system study using 3D dosimeters

    Qian, Xin; Adamovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) system is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both imaging and irradiation components. The conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between the rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify coincidence of the imaging and the irradiation isocenters. A 3D PRESAGE dosimeter can provide an excellent tool for checking dosimetry and verifying coincidence of irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  7. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
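
    As a rough sketch of the kind of model described above (not the authors' implementation), the NumPy snippet parameterizes a set of respiratory-correlated 3D deformation vector fields with a principal component analysis and then solves a small least-squares problem for the component weights that best explain the motion observed on a single 2D slice. All array shapes, names, and the slice selection are assumptions for illustration.

      import numpy as np

      # dvfs: N respiratory phases of a 3D DVF, flattened to (N, n_voxels * 3).
      n_phases, nx, ny, nz = 10, 32, 32, 16
      dvfs = np.random.rand(n_phases, nx * ny * nz * 3)          # placeholder 4D-MRI DVFs

      # PCA of the pre-beam motion model.
      mean_dvf = dvfs.mean(axis=0)
      u, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
      components = vt[:2]                                         # keep two principal components

      # During treatment only a 2D slice is observed; pick its voxel indices.
      slice_idx = np.arange(nx * ny * 3)                          # placeholder slice selection
      observed = dvfs[3, slice_idx] + 0.01 * np.random.rand(slice_idx.size)

      # Least-squares fit of the component weights from the 2D observation,
      # then reconstruct the full 3D DVF from the model.
      A = components[:, slice_idx].T
      w, *_ = np.linalg.lstsq(A, observed - mean_dvf[slice_idx], rcond=None)
      dvf_3d = mean_dvf + w @ components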

  8. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as a salvage treatment following intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Thirty-two patients with locally persistent NPC after a full dose of IMRT were evaluated retrospectively. 3D-image-guided HDR-BT treatment planning was performed on a 3D treatment planning system (PLATO BPS 14.2). A median dose of 16 Gy was delivered to the 100% isodose line of the Gross Tumor Volume. The whole procedure was well tolerated under local anesthesia. The actuarial 5-y local control rate for 3D-image-guided HDR-BT was 93.8%; patients with early-T stage disease at initial diagnosis had a 100% local control rate. The 5-y actuarial progression-free survival and distant metastasis-free survival rates were 78.1% and 87.5%, respectively. One patient developed and died of lung metastases. The 5-y actuarial overall survival rate was 96.9%. Our results showed that 3D-image-guided HDR-BT would provide excellent local control as a salvage therapeutic modality following IMRT for patients with locally persistent disease at initial diagnosis of early-T stage NPC.

  9. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
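
    A toy illustration of the "masking" idea, assuming nothing about the authors' registration pipeline: a binary weight image down-weights pixels that are likely occluded by tools when scoring similarity between a DRR and a radiograph. Here the similarity is a weighted normalized cross-correlation, and all images and the masked region are synthetic placeholders.

      import numpy as np

      def masked_ncc(drr, xray, weights):
          """Normalized cross-correlation restricted to pixels with weight > 0."""
          w = weights.astype(bool)
          a = drr[w] - drr[w].mean()
          b = xray[w] - xray[w].mean()
          return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      drr = np.random.rand(256, 256)          # placeholder digitally reconstructed radiograph
      xray = drr + 0.1 * np.random.rand(256, 256)
      mask = np.ones_like(drr)
      mask[100:150, 80:200] = 0               # e.g. a region occluded by a surgical tool
      print(masked_ncc(drr, xray, mask))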

  10. Optimizing nonrigid registration performance between volumetric true 3D ultrasound images in image-guided neurosurgery

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2011-03-01

    Compensating for brain shift as surgery progresses is important to ensure sufficient accuracy in patient-to-image registration in the operating room (OR) for reliable neuronavigation. Ultrasound has emerged as an important and practical imaging technique for brain shift compensation either by itself or through computational modeling that estimates whole-brain deformation. Using volumetric true 3D ultrasound (3DUS), it is possible to nonrigidly (e.g., based on B-splines) register two temporally different 3DUS images directly to generate feature displacement maps for data assimilation in the biomechanical model. Because of a large amount of data and number of degrees-of-freedom (DOFs) involved, however, a significant computational cost may be required that can adversely influence the clinical feasibility of the technique for efficiently generating model-updated MR (uMR) in the OR. This paper parametrically investigates three B-splines registration parameters and their influence on the computational cost and registration accuracy: number of grid nodes along each direction, floating image volume down-sampling rate, and number of iterations. A simulated rigid body displacement field was employed as a ground-truth against which the accuracy of displacements generated from the B-splines nonrigid registration was compared. A set of optimal parameters was then determined empirically that results in a registration computational cost of less than 1 min and a sub-millimetric accuracy in displacement measurement. These resulting parameters were further applied to a clinical surgery case to demonstrate their practical use. Our results indicate that the optimal set of parameters results in sufficient accuracy and computational efficiency in model computation, which is important for future application of the overall biomechanical modeling to generate uMR for image-guidance in the OR.
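
    The accuracy check described above can be illustrated in a few lines of NumPy: a known rigid-body displacement field serves as ground truth, and the error of a recovered displacement field is summarized as a root-mean-square magnitude. The rotation and translation values, grid size, and the "estimated" field below are placeholders, not the study's data.

      import numpy as np

      # Voxel grid of a 3DUS volume (placeholder size).
      zz, yy, xx = np.meshgrid(np.arange(32), np.arange(64), np.arange(64), indexing="ij")
      pts = np.stack([xx, yy, zz], axis=-1).astype(float)

      # Ground-truth rigid body motion: a small rotation about z plus a translation.
      theta = np.deg2rad(2.0)
      R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
      t = np.array([1.0, -0.5, 0.3])
      gt_dvf = pts @ R.T + t - pts

      # Displacements recovered by a nonrigid registration (placeholder: truth + noise).
      est_dvf = gt_dvf + 0.1 * np.random.randn(*gt_dvf.shape)

      rms = np.sqrt(np.mean(np.sum((est_dvf - gt_dvf) ** 2, axis=-1)))
      print(f"RMS displacement error: {rms:.2f} voxel units")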

  11. A simulation technique for 3D MR-guided acoustic radiation force imaging

    Payne, Allison, E-mail: apayne@ucair.med.utah.edu [Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, Utah 84112 (United States); Bever, Josh de [Department of Computer Science, University of Utah, Salt Lake City, Utah 84112 (United States); Farrer, Alexis [Department of Bioengineering, University of Utah, Salt Lake City, Utah 84112 (United States); Coats, Brittany [Department of Mechanical Engineering, University of Utah, Salt Lake City, Utah 84112 (United States); Parker, Dennis L. [Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, Utah 84108 (United States); Christensen, Douglas A. [Department of Bioengineering, University of Utah, Salt Lake City, Utah 84112 and Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, Utah 84112 (United States)

    2015-02-15

    Purpose: In magnetic resonance-guided focused ultrasound (MRgFUS) therapies, the in situ characterization of the focal spot location and quality is critical. MR acoustic radiation force imaging (MR-ARFI) is a technique that measures the tissue displacement caused by the radiation force exerted by the ultrasound beam. This work presents a new technique to model the displacements caused by the radiation force of an ultrasound beam in a homogeneous tissue model. Methods: When a steady-state point-source force acts internally in an infinite homogeneous medium, the displacement of the material in all directions is given by the Somigliana elastostatic tensor. The radiation force field, which is caused by absorption and reflection of the incident ultrasound intensity pattern, will be spatially distributed, and the tensor formulation takes the form of a convolution of a 3D Green’s function with the force field. The dynamic accumulation of MR phase during the ultrasound pulse can be theoretically accounted for through a time-of-arrival weighting of the Green’s function. This theoretical model was evaluated experimentally in gelatin phantoms of varied stiffness (125-, 175-, and 250-bloom). The acoustic and mechanical properties of the phantoms used as parameters of the model were measured using independent techniques. Displacements at focal depths of 30- and 45-mm in the phantoms were measured by a 3D spin echo MR-ARFI segmented-EPI sequence. Results: The simulated displacements agreed with the MR-ARFI measured displacements for all bloom values and focal depths with a normalized RMS difference of 0.055 (range 0.028–0.12). The displacement magnitude decreased and the displacement pattern broadened with increased bloom value for both focal depths, as predicted by the theory. Conclusions: A new technique that models the displacements caused by the radiation force of an ultrasound beam in a homogeneous tissue model has been rigorously validated through comparison with MR-ARFI displacement measurements in gelatin phantoms.
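
    In the spirit of the model described above (a convolution of a Green's function with the radiation force field), the following simplified scalar sketch convolves a 3D force distribution with a 1/r kernel using FFT-based convolution. The full Somigliana tensor, the time-of-arrival weighting, and the measured material constants are omitted, and the grid, shear modulus, and focal-spot parameters are invented for illustration.

      import numpy as np
      from scipy.signal import fftconvolve

      # Simplified scalar Green's function G ~ 1 / (4 pi mu r), mu = shear modulus (Pa).
      mu = 3.0e3
      n, dx = 64, 0.5e-3                                   # grid points, spacing in metres
      ax = (np.arange(n) - n // 2) * dx
      x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
      r = np.sqrt(x**2 + y**2 + z**2)
      r[r == 0] = dx / 2                                   # avoid the singularity at r = 0
      green = 1.0 / (4.0 * np.pi * mu * r)

      # Radiation force field: a Gaussian focal spot (placeholder amplitude, N/m^3).
      force = 1.0e5 * np.exp(-(x**2 + y**2 + (z * 0.5)**2) / (1.0e-3)**2)

      # Displacement field = convolution of the Green's function with the force field.
      disp = fftconvolve(force, green, mode="same") * dx**3
      print(f"peak displacement ~ {disp.max() * 1e6:.2f} micrometres")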

  12. A novel 3D volumetric voxel registration technique for volume-view-guided image registration of multiple imaging modalities

    Purpose: To provide more clinically useful image registration with improved accuracy and reduced time, a novel technique of three-dimensional (3D) volumetric voxel registration of multimodality images is developed. Methods and Materials: This technique can register up to four concurrent images from multimodalities with volume view guidance. Various visualization effects can be applied, facilitating global and internal voxel registration. Fourteen computed tomography/magnetic resonance (CT/MR) image sets and two computed tomography/positron emission tomography (CT/PET) image sets are used. For comparison, an automatic registration technique using maximization of mutual information (MMI) and a three-orthogonal-planar (3P) registration technique are used. Results: Visually sensitive registration criteria for CT/MR and CT/PET have been established, including the homogeneity of color distribution. Based on the registration results of 14 CT/MR images, the 3D voxel technique is in excellent agreement with the automatic MMI technique and indicates a global positioning error (defined as the mean and standard deviation of the error distribution) for the 3P pixel technique of 1.8 deg ± 1.2 deg in rotation and 2.0 ± 1.3 voxels in translation. To the best of our knowledge, this is the first time that such positioning error has been addressed. Conclusion: This novel 3D voxel technique establishes volume-view-guided image registration of up to four modalities. It improves registration accuracy with reduced time, compared with the 3P pixel technique. This article suggests that any interactive and automatic registration should be safeguarded using the 3D voxel technique.

  13. Technical Note: Rapid prototyping of 3D grid arrays for image guided therapy quality assurance

    Kittle, David; Holshouser, Barbara; Slater, James M.; Guenther, Bob D.; Pitsianis, Nikos P.; Pearlstein, Robert D. [Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 (United States); Department of Radiology, Loma Linda University Medical Center, Loma Linda, California 92354 (United States); Department of Radiation Medicine, Loma Linda University, Loma Linda, California 92354 (United States); Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Department of Electrical and Computer Engineering and Department of Computer Science, Duke University, Durham, North Carolina 27708 (United States); Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 and Department of Surgery-Neurosurgery, Duke University and Medical Center, Durham, North Carolina 27710 (United States)

    2008-12-15

    Three dimensional grid phantoms offer a number of advantages for measuring imaging related spatial inaccuracies for image guided surgery and radiotherapy. The authors examined the use of rapid prototyping technology for directly fabricating 3D grid phantoms from CAD drawings. We tested three different fabrication process materials, photopolymer jet with acrylic resin (PJ/AR), selective laser sintering with polyamide (SLS/P), and fused deposition modeling with acrylonitrile butadiene styrene (FDM/ABS). The test objects consisted of rectangular arrays of control points formed by the intersections of posts and struts (2 mm rectangular cross section) and spaced 8 mm apart in the x, y, and z directions. The PJ/AR phantom expanded after immersion in water which resulted in permanent warping of the structure. The surface of the FDM/ABS grid exhibited a regular pattern of depressions and ridges from the extrusion process. SLS/P showed the best combination of build accuracy, surface finish, and stability. Based on these findings, a grid phantom for assessing machine-dependent and frame-induced MR spatial distortions was fabricated to be used for quality assurance in stereotactic neurosurgical and radiotherapy procedures. The spatial uniformity of the SLS/P grid control point array was determined by CT imaging (0.6 × 0.6 × 0.625 mm³ resolution) and found suitable for the application, with over 97.5% of the control points located within 0.3 mm of the position specified in the CAD drawing and none of the points off by more than 0.4 mm. Rapid prototyping is a flexible and cost effective alternative for development of customized grid phantoms for medical physics quality assurance.
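
    The reported figure of "over 97.5% of control points within 0.3 mm" is simply a tolerance count over measured versus designed grid positions. A hedged NumPy sketch, with both point sets invented for illustration:

      import numpy as np

      # Designed control-point positions from the CAD model (mm) and the positions
      # measured on CT; both arrays are placeholders with shape (n_points, 3).
      designed = np.stack(np.meshgrid(*(np.arange(0, 80, 8),) * 3, indexing="ij"), -1).reshape(-1, 3).astype(float)
      measured = designed + np.random.normal(scale=0.1, size=designed.shape)

      errors = np.linalg.norm(measured - designed, axis=1)
      print(f"within 0.3 mm: {np.mean(errors <= 0.3) * 100:.1f}%")
      print(f"max deviation: {errors.max():.2f} mm")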

  14. A fast, accurate, and automatic 2D-3D image registration for image-guided cranial radiosurgery

    The authors developed a fast and accurate two-dimensional (2D)-three-dimensional (3D) image registration method to perform precise initial patient setup and frequent detection and correction for patient movement during image-guided cranial radiosurgery treatment. In this method, an approximate geometric relationship is first established to decompose a 3D rigid transformation in the 3D patient coordinate into in-plane transformations and out-of-plane rotations in two orthogonal 2D projections. Digitally reconstructed radiographs are generated offline from a preoperative computed tomography volume prior to treatment and used as the reference for patient position. A multiphase framework is designed to register the digitally reconstructed radiographs with the x-ray images periodically acquired during patient setup and treatment. The registration in each projection is performed independently; the results in the two projections are then combined and converted to a 3D rigid transformation by 2D-3D geometric backprojection. The in-plane transformation and the out-of-plane rotation are estimated using different search methods, including multiresolution matching, steepest descent minimization, and one-dimensional search. Two similarity measures, optimized pattern intensity and sum of squared difference, are applied at different registration phases to optimize accuracy and computation speed. Various experiments on an anthropomorphic head-and-neck phantom showed that, using fiducial registration as a gold standard, the registration errors were 0.33±0.16 mm (s.d.) in overall translation and 0.29 deg. ±0.11 deg. (s.d.) in overall rotation. The total targeting errors were 0.34±0.16 mm (s.d.), 0.40±0.2 mm (s.d.), and 0.51±0.26 mm (s.d.) for the targets at the distances of 2, 6, and 10 cm from the rotation center, respectively. The computation time was less than 3 s on a computer with an Intel Pentium 3.0 GHz dual processor
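
    One of the similarity measures named above, the sum of squared differences (SSD), combined with a simple one-dimensional search over an in-plane shift, can be sketched as follows. The DRR, x-ray image, and search range are placeholders, and the paper's full multiphase, multiresolution scheme is not reproduced here.

      import numpy as np

      def ssd(a, b):
          return float(np.sum((a - b) ** 2))

      drr = np.random.rand(128, 128)                 # placeholder reference DRR
      xray = np.roll(drr, 3, axis=1)                 # x-ray shifted 3 pixels in-plane

      # One-dimensional search over horizontal shifts, keeping the SSD minimum.
      shifts = range(-10, 11)
      scores = [ssd(np.roll(drr, s, axis=1), xray) for s in shifts]
      best = list(shifts)[int(np.argmin(scores))]
      print(f"estimated in-plane shift: {best} pixels")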

  15. Metabolic approach for tumor delineation in glioma surgery: 3D MR spectroscopy image-guided resection.

    Zhang, Jie; Zhuang, Dong-Xiao; Yao, Cheng-Jun; Lin, Ching-Po; Wang, Tian-Liang; Qin, Zhi-Yong; Wu, Jin-Song

    2016-06-01

    OBJECT The extent of resection is one of the most essential factors that influence the outcomes of glioma resection. However, conventional structural imaging has failed to accurately delineate glioma margins because of tumor cell infiltration. Three-dimensional proton MR spectroscopy ((1)H-MRS) can provide metabolic information and has been used in preoperative tumor differentiation, grading, and radiotherapy planning. Resection based on glioma metabolism information may provide for a more extensive resection and yield better outcomes for glioma patients. In this study, the authors attempt to integrate 3D (1)H-MRS into neuronavigation and assess the feasibility and validity of metabolically based glioma resection. METHODS Choline (Cho)-N-acetylaspartate (NAA) index (CNI) maps were calculated and integrated into neuronavigation. The CNI thresholds were quantitatively analyzed and compared with structural MRI studies. Glioma resections were performed under 3D (1)H-MRS guidance. Volumetric analyses were performed for metabolic and structural images from a low-grade glioma (LGG) group and high-grade glioma (HGG) group. Magnetic resonance imaging and neurological assessments were performed immediately after surgery and 1 year after tumor resection. RESULTS Fifteen eligible patients with primary cerebral gliomas were included in this study. Three-dimensional (1)H-MRS maps were successfully coregistered with structural images and integrated into the navigational system. Volumetric analyses showed that the differences between the metabolic volumes with different CNI thresholds were statistically significant. CONCLUSIONS This study demonstrated the feasibility of integrating 3D (1)H-MRS maps and intraoperative navigation for glioma margin delineation. Optimum CNI thresholds were applied for both LGGs and HGGs to achieve resection. The results indicated that 3D (1)H-MRS can be integrated with structural imaging to provide better outcomes for glioma resection. PMID:26636387
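
    The choline-to-NAA index (CNI) map at the heart of this approach is, at its simplest, a voxel-wise Cho/NAA ratio image normalized against normal-appearing tissue and then thresholded to produce a metabolic target volume. The sketch below uses synthetic Cho and NAA maps and an invented reference region and threshold; the statistical modelling used clinically is not shown.

      import numpy as np

      cho = np.random.rand(32, 32, 16) + 0.1          # placeholder choline map
      naa = np.random.rand(32, 32, 16) + 0.1          # placeholder N-acetylaspartate map

      ratio = cho / naa                                # voxel-wise Cho/NAA
      # Normalize against a reference region (e.g. contralateral normal-appearing tissue).
      reference = ratio[:8, :8, :8]
      cni = (ratio - reference.mean()) / reference.std()

      threshold = 2.5                                  # illustrative CNI threshold
      metabolic_target = cni >= threshold
      print(f"metabolic volume: {metabolic_target.sum()} voxels above CNI {threshold}")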

  17. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan: advances and obstacles.

    Ohno, Tatsuya; Toita, Takafumi; Tsujino, Kayoko; Uchida, Nobue; Hatano, Kazuo; Nishimura, Tetsuo; Ishikura, Satoshi

    2015-11-01

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. PMID:26265660

  18. Treatment Planning for Image-Guided Neuro-Vascular Interventions Using Patient-Specific 3D Printed Phantoms

    Russ, M.; O’Hara, R.; Setlur Nagesh, S. V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S.; Ionita, C.

    2015-01-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized ...

  19. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    A 3D roadmap provided by pre-operative volumetric data that is aligned with fluoroscopy helps visualization and navigation in Interventional Cardiology (IC), especially when contrast agent injection used to highlight coronary vessels cannot be systematically used during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for specific vessel(s) occurring during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guide wire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses and a ground truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for difficult cases of occluded vessels without injection of contrast agent.
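
    The computational trick mentioned above, scoring alignment against a distance transform of the 2D vessel centerline instead of explicit point pairing, can be sketched as follows. The binary 2D vessel mask, the projected 3D centerline points, and the candidate pose are placeholders rather than the paper's ICP implementation.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      # Binary image of the identified 2D vessel centerline (placeholder).
      centerline_2d = np.zeros((256, 256), dtype=bool)
      centerline_2d[64:192, 128] = True

      # Distance is zero on the centerline and grows away from it.
      dist_map = distance_transform_edt(~centerline_2d)

      def alignment_cost(projected_points, dist_map):
          """Mean distance-map value at the projected 3D centerline points."""
          rows = np.clip(np.round(projected_points[:, 1]).astype(int), 0, dist_map.shape[0] - 1)
          cols = np.clip(np.round(projected_points[:, 0]).astype(int), 0, dist_map.shape[1] - 1)
          return float(dist_map[rows, cols].mean())

      # Projected 3D vessel points for one candidate pose (placeholder, in pixels).
      candidate = np.column_stack([np.full(100, 130.0), np.linspace(70, 190, 100)])
      print(f"cost for candidate pose: {alignment_cost(candidate, dist_map):.2f} px")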

  1. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro- Irradiation Platform

    Qian, X; Wuu, C [Columbia University, NY, NY (United States); Adamovics, J [Rider University, Lawrenceville, NJ (United States)

    2014-06-01

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotating for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and optical CT scanner were used in this study. In the platform, both the mouse stage and the gantry can rotate 360°, with their rotation axes perpendicular to each other. Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on the mouse stage with its center approximately at the platform isocenter. For CBCT isocentricity, with the gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, dosimeters were scanned using the optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and the mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful for precise verification of targeting in small animal irradiation research. In addition, a single 3-D dosimeter can provide information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  2. Treatment planning for image-guided neuro-vascular interventions using patient-specific 3D printed phantoms

    Russ, M.; O'Hara, R.; Setlur Nagesh, S. V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S.; Ionita, C.

    2015-03-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized with the patient vessel anatomy by first performing the planned treatment on a phantom under standard operating protocols. In this study, the optimal workflow to obtain such phantoms from 3D data for interventionists to practice on prior to an actual procedure was investigated. Patient-specific phantoms and phantoms presenting a wide range of challenging geometries were created. Computed Tomographic Angiography (CTA) data was uploaded into a Vitrea 3D station, which allows segmentation and the resulting stereo-lithographic files to be exported. The files were uploaded using processing software where preloaded vessel structures were included to create a closed-flow vasculature having structural support. The final file was printed, cleaned, connected to a flow loop and placed in an angiographic room for EIGI practice. Various Circle of Willis and cardiac arterial geometries were used. The phantoms were tested for ischemic stroke treatment, distal catheter navigation, aneurysm stenting and cardiac imaging under angiographic guidance. This method should allow adjustments to treatment plans to be made before the patient is actually in the procedure room, enabling reduced risk of peri-operative complications or delays.

  4. Involved-Site Image-Guided Intensity Modulated Versus 3D Conformal Radiation Therapy in Early Stage Supradiaphragmatic Hodgkin Lymphoma

    Filippi, Andrea Riccardo, E-mail: andreariccardo.filippi@unito.it [Department of Oncology, University of Torino, Torino (Italy); Ciammella, Patrizia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Piva, Cristina; Ragona, Riccardo [Department of Oncology, University of Torino, Torino (Italy); Botto, Barbara [Hematology, Città della Salute e della Scienza, Torino (Italy); Gavarotti, Paolo [Hematology, University of Torino and Città della Salute e della Scienza, Torino (Italy); Merli, Francesco [Hematology Unit, ASMN Hospital IRCCS, Reggio Emilia (Italy); Vitolo, Umberto [Hematology, Città della Salute e della Scienza, Torino (Italy); Iotti, Cinzia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Ricardi, Umberto [Department of Oncology, University of Torino, Torino (Italy)

    2014-06-01

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows for margin reduction and highly conformal dose distribution, with consistent advantages in sparing of normal tissues. The purpose of this retrospective study was to compare involved-site IG-IMRT with involved-site 3D conformal RT (3D-CRT) in the treatment of early stage Hodgkin lymphoma (HL) involving the mediastinum, with efficacy and toxicity as primary clinical endpoints. Methods and Materials: We analyzed 90 stage IIA HL patients treated with either involved-site 3D-CRT or IG-IMRT between 2005 and 2012 in 2 different institutions. Inclusion criteria were favorable or unfavorable disease (according to European Organization for Research and Treatment of Cancer criteria), complete response after 3 to 4 cycles of an adriamycin-bleomycin-vinblastine-dacarbazine (ABVD) regimen plus 30 Gy as total radiation dose. Exclusion criteria were chemotherapy other than ABVD, partial response after ABVD, total radiation dose other than 30 Gy. Clinical endpoints were relapse-free survival (RFS) and acute toxicity. Results: Forty-nine patients were treated with 3D-CRT (54.4%) and 41 with IG-IMRT (45.6%). Median follow-up time was 54.2 months for 3D-CRT and 24.1 months for IG-IMRT. No differences in RFS were observed between the 2 groups, with 1 relapse each. Three-year RFS was 98.7% for 3D-CRT and 100% for IG-IMRT. Grade 2 toxicity events, mainly mucositis, were recorded in 32.7% of 3D-CRT patients (16 of 49) and in 9.8% of IG-IMRT patients (4 of 41). IG-IMRT was significantly associated with a lower incidence of grade 2 acute toxicity (P=.043). Conclusions: RFS rates at 3 years were extremely high in both groups, albeit the median follow-up time is different. Acute tolerance profiles were better for IG-IMRT than for 3D-CRT. Our preliminary results support the clinical safety and efficacy of advanced RT planning and delivery techniques in patients affected with early stage HL, achieving complete

  5. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.
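
    The "mechanical model for synchronizing external surface movement with internal target displacement" is described only qualitatively in the abstract. As a loose stand-in, the sketch below fits a simple linear relation between an external surface signal and an internal target position learned from pre-treatment 4DMI, then predicts the target position from a live surface reading. All signals and coefficients are synthetic placeholders.

      import numpy as np

      # Pre-treatment 4DMI: external surface height (mm) and internal target position (mm)
      # sampled over a few breathing cycles (placeholders).
      t = np.linspace(0, 20, 400)
      surface = 5.0 * np.sin(2 * np.pi * t / 4.0) + 0.2 * np.random.randn(t.size)
      target = 1.8 * surface + 3.0 + 0.3 * np.random.randn(t.size)

      # Fit target = a * surface + b by least squares.
      a, b = np.polyfit(surface, target, deg=1)

      # During delivery, predict the internal position from the live surface reading.
      live_surface = 2.4
      print(f"predicted target position: {a * live_surface + b:.2f} mm")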

  6. Heterodyne 3D ghost imaging

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three dimensional (3D) ghost imaging measures the range of a target based on the pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
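
    Setting the heterodyne range measurement aside, the spatial-correlation step that ghost-imaging schemes share can be sketched in a few lines of NumPy: the image is recovered by correlating the bucket (single-pixel) signal with the known speckle patterns. The object, patterns, and measurement counts below are placeholders for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n_patterns, h, w = 4000, 32, 32

      obj = np.zeros((h, w))
      obj[10:22, 12:20] = 1.0                                   # placeholder reflective target

      patterns = rng.random((n_patterns, h, w))                 # known speckle patterns
      bucket = patterns.reshape(n_patterns, -1) @ obj.ravel()   # single-pixel measurements

      # Second-order correlation: <I * B> - <I><B>, evaluated per pixel.
      recon = (patterns * bucket[:, None, None]).mean(axis=0) - patterns.mean(axis=0) * bucket.mean()

      inside = recon[10:22, 12:20].mean()
      outside = recon[obj == 0].mean()
      print(f"mean correlation inside target {inside:.4f} vs background {outside:.4f}")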

  7. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  8. Quantitative Assessment of Variational Surface Reconstruction from Sparse Point Clouds in Freehand 3D Ultrasound Imaging during Image-Guided Tumor Ablation

    Shuangcheng Deng

    2016-04-01

    Surface reconstruction for freehand 3D ultrasound is used to provide 3D visualization of a volume of interest (VOI) during image-guided tumor ablation surgery. This is a challenge because the recorded 2D B-scans are not only sparse but also non-parallel. To solve this issue, we established a framework to reconstruct the surface of freehand 3D ultrasound imaging in 2011. The key technique for surface reconstruction in that framework is based on variational interpolation presented by Greg Turk for shape transformation and is named Variational Surface Reconstruction (VSR). The main goal of this paper is to evaluate the quality of surface reconstructions, especially when the input data are extremely sparse point clouds from freehand 3D ultrasound imaging, using four methods: Ball Pivoting, Power Crust, Poisson, and VSR. Four experiments are conducted, and quantitative metrics, such as the Hausdorff distance, are introduced for quantitative assessment. The experimental results show that the performance of the proposed VSR method is the best of the four methods at reconstructing surfaces from sparse data. The VSR method can produce a close approximation to the original surface from as few as two contours, whereas the other three methods fail to do so. The experimental results also illustrate that the reproducibility of the VSR method is the best of the four methods.
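
    The Hausdorff distance used here as a quality metric compares a reconstructed surface against a reference by taking the worst-case nearest-neighbour distance in both directions. A minimal sketch on two placeholder point clouds, using SciPy's directed Hausdorff helper:

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      # Two surfaces sampled as point clouds (placeholders for reference and reconstruction).
      reference = np.random.rand(500, 3)
      reconstruction = reference + np.random.normal(scale=0.01, size=reference.shape)

      d_ab = directed_hausdorff(reference, reconstruction)[0]
      d_ba = directed_hausdorff(reconstruction, reference)[0]
      print(f"symmetric Hausdorff distance: {max(d_ab, d_ba):.4f}")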

  9. 3D Imager and Method for 3D imaging

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the reference clock.

  11. SU-E-J-55: End-To-End Effectiveness Analysis of 3D Surface Image Guided Voluntary Breath-Holding Radiotherapy for Left Breast

    Lin, M; Feigenberg, S [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose To evaluate the effectiveness of using 3D-surface-image guidance for breath-holding (BH) left-side breast treatment. Methods Two 3D surface image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 Normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercialized 3D-surface-tracking-system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to the CT scan. Tangential 3D/IMRT plans were generated. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process based on the information provided by the 3D-surface-tracking-system for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to the CT scan. For >90% of fractions, based on the setup deltas from the 3D-surface-tracking-system, adjustments of patient setup were needed after the initial setup using lasers. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (Normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (Normal-BH)/46% (DIBH). The necessity of re-setup is highly patient-specific for Normal-BH but highly random among patients for DIBH. Overall, a −0.8±2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion 3D-surface-image technology provides effective intervention to the treatment process and ensures

  12. 3D vector flow imaging

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results, nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse Oscillation (TO) method ... on the TO fields are suggested; they can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5 ...

  13. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men, with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation method is accurate, robust, and efficient, and can act as an aid to localize the needle in 3D TRUS guided prostate therapy.
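
    A compact, discretized variant of a 3D line Hough transform (not the authors' exact coarse-fine implementation) is sketched below: each candidate needle direction accumulates votes from the binarized voxels projected onto the plane perpendicular to it, and the direction with the most collinear points wins. The synthetic voxel cloud, angular grid, and bin size are assumptions for illustration.

      import numpy as np

      def needle_axis_hough(points, n_theta=30, n_phi=60, bin_mm=1.0):
          """Simplified 3D Hough transform for a single dominant line direction."""
          best = (-1, None)
          for theta in np.linspace(0, np.pi / 2, n_theta):
              for phi in np.linspace(0, 2 * np.pi, n_phi, endpoint=False):
                  d = np.array([np.sin(theta) * np.cos(phi),
                                np.sin(theta) * np.sin(phi),
                                np.cos(theta)])
                  # Orthonormal basis of the plane perpendicular to the direction d.
                  u = np.cross(d, [1.0, 0.0, 0.0])
                  if np.linalg.norm(u) < 1e-6:
                      u = np.cross(d, [0.0, 1.0, 0.0])
                  u /= np.linalg.norm(u)
                  v = np.cross(d, u)
                  proj = np.column_stack([points @ u, points @ v])
                  edges = np.arange(proj.min(), proj.max() + bin_mm, bin_mm)
                  hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=edges)
                  if hist.max() > best[0]:
                      best = (hist.max(), d)
          return best[1]

      # Synthetic binarized needle voxels: points along a line plus speckle clutter (mm).
      t = np.linspace(0, 40, 200)[:, None]
      needle = t * np.array([0.2, 0.1, 1.0]) / np.linalg.norm([0.2, 0.1, 1.0])
      clutter = np.random.rand(200, 3) * 40
      print(needle_axis_hough(np.vstack([needle, clutter])))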

  14. Advanced 3-D Ultrasound Imaging

    Rasmussen, Morten Fischer

    been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... and removes the need to integrate custom made electronics into the probe. A downside of row-column addressing 2-D arrays is the creation of secondary temporal lobes, or ghost echoes, in the point spread function. In the second part of the scientific contributions, row-column addressing of 2-D arrays...... was investigated. An analysis of how the ghost echoes can be attenuated was presented.Attenuating the ghost echoes were shown to be achieved by minimizing the first derivative of the apodization function. In the literature, a circular symmetric apodization function was proposed. A new apodization layout...

  15. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer preoperative nerve cartography. (orig.)
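
    As a quick illustration of the agreement statistic used in this record, a minimal NumPy sketch of Lin's concordance correlation coefficient between paired MRI and dissection measurements (the function name and example inputs are illustrative, not from the study):
```python
import numpy as np

def lins_ccc(mri_mm, dissection_mm):
    """Lin's concordance correlation coefficient between two paired series."""
    x = np.asarray(mri_mm, dtype=float)
    y = np.asarray(dissection_mm, dtype=float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                 # population (biased) variances
    sxy = ((x - mx) * (y - my)).mean()          # population covariance
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# e.g. lins_ccc([42.0, 55.1, 38.7], [41.2, 56.0, 40.1])
```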

  16. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion-weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of the autonomous pelvic innervation and can offer preoperative nerve cartography. (orig.)

  17. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Qiu Wu [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Yuchi Ming; Ding Mingyue [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Tessier, David; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8 (Canada)

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions

  18. Improvement in toxicity in high risk prostate cancer patients treated with image-guided intensity-modulated radiotherapy compared to 3D conformal radiotherapy without daily image guidance

    Image-guided radiotherapy (IGRT) facilitates the delivery of a very precise radiation dose. In this study we compare the toxicity and biochemical progression-free survival between patients treated with daily image-guided intensity-modulated radiotherapy (IG-IMRT) and 3D conformal radiotherapy (3DCRT) without daily image guidance for high-risk prostate cancer (PCa). A total of 503 high-risk PCa patients treated with radiotherapy (RT) and endocrine treatment between 2000 and 2010 were retrospectively reviewed. 115 patients were treated with 3DCRT, and 388 patients were treated with IG-IMRT. 3DCRT patients were treated to 76 Gy without daily image guidance and with 1–2 cm PTV margins. IG-IMRT patients were treated to 78 Gy based on daily image guidance of fiducial markers, and the PTV margins were 5–7 mm. Furthermore, the dose-volume constraints to both the rectum and bladder were changed with the introduction of IG-IMRT. The 2-year actuarial likelihood of developing grade ≥ 2 GI toxicity following RT was 57.3% in 3DCRT patients and 5.8% in IG-IMRT patients (p < 0.001). For GU toxicity the numbers were 41.8% and 29.7%, respectively (p = 0.011). On multivariate analysis, 3DCRT was associated with a significantly increased risk of developing grade ≥ 2 GI toxicity compared to IG-IMRT (p < 0.001, HR = 11.59 [CI: 6.67-20.14]). 3DCRT was also associated with an increased risk of developing GU toxicity compared to IG-IMRT. The 3-year actuarial biochemical progression-free survival probability was 86.0% for 3DCRT and 90.3% for IG-IMRT (p = 0.386). On multivariate analysis there was no difference in biochemical progression-free survival between 3DCRT and IG-IMRT. The difference in toxicity can be attributed to the combination of the IMRT technique with reduced dose to organs-at-risk, daily image guidance and margin reduction.

  19. Projector-Based Augmented Reality for Intuitive Intraoperative Guidance in Image-Guided 3D Interstitial Brachytherapy

    Purpose: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. Methods and Materials: The developed system consists of a common video projector, two high-resolution charge coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. Results: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). Conclusions: The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation
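
    The dynamic adjustment step in this system relies on estimating a rigid transform from stereoscopically tracked markers. Below is a compact sketch of one standard way to do that (Kabsch/SVD alignment of corresponding 3D marker positions), offered as an illustration rather than the authors' exact implementation:
```python
import numpy as np

def rigid_transform_from_markers(ref_xyz, cur_xyz):
    """Least-squares rotation R and translation t mapping reference marker
    positions (N, 3) onto their currently tracked positions (N, 3)."""
    P = np.asarray(ref_xyz, dtype=float)
    Q = np.asarray(cur_xyz, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```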

  20. 3D Chaotic Functions for Image Encryption

    Pawan N. Khade

    2012-05-01

    Full Text Available This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold's cat maps for color image encryption. Here the 2D Arnold's cat map is used for image pixel scrambling and the 3D Arnold's cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security by applying both shuffling and substitution to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
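
    A small sketch of the pixel-scrambling building block mentioned above, assuming a square image stored as a NumPy array; it applies the classic 2D Arnold cat map permutation only (the paper's 3D maps and key generation are not reproduced here).
```python
import numpy as np

def arnold_cat_scramble(image, iterations=1):
    """Scramble a square image by iterating the 2D Arnold cat map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n, m = image.shape[:2]
    if n != m:
        raise ValueError("Arnold's cat map requires a square image")
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = image.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]   # the map is a bijection mod n
        out = scrambled
    return out

# The map is periodic, so enough further iterations recover the original image.
```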

  1. The impact of 3D image guided prostate brachytherapy on therapeutic ratio: the Quebec University Hospital experience

    Purpose: To evaluate the impact of adaptive image-guided brachytherapy on therapeutic outcome and toxicity in prostate cancer. Materials and methods: The first 1110 patients treated at the C.H.U.Q.-l'Hotel-Dieu de Quebec were divided into five groups depending on the technique used for the implantation, the latest being intraoperative treatment planning. Five-year biochemical disease-free survival (5-bDFS), toxicities and dosimetric parameters were compared between the groups. Results: 5-bDFS (ASTRO + Houston definitions) was 88.5% and 90.5% for the whole cohort. The use of intraoperative treatment planning resulted in better dosimetric parameters. Clinically, this resulted in a decreased use of urethral catheterization, from 18.8% in group 1 to 5.2% in group 5, and in a reduction in severe acute urinary side effects (21.3% vs 33.3%, P = 0.01) when compared with pre-planning. There were also fewer late gastrointestinal side effects (group 5 vs 1: 26.6% vs 43.2%, P < 0.05). Finally, when compared with pre-planning, intraoperative treatment planning was associated with a smaller reduction between the planned D90 and the dose calculated at the CT scan 1 month after the implant (38 vs 66 Gy). Conclusion: The evolution of the prostate brachytherapy technique toward intraoperative treatment planning allowed dosimetric gains which resulted in significant clinical benefits, increasing the therapeutic ratio mainly through decreased urinary toxicity. A longer follow-up will answer the question of whether there is an impact on 5-bDFS. (authors)

  2. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was
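
    A hedged Monte Carlo sketch of the central quantity P (the probability that a core aimed at a target lands in the tumor), approximating the core as a point sample and the needle-delivery error as an isotropic 3D Gaussian with the stated RMSE. Array and parameter names are illustrative, and this is not the authors' analytic integration.
```python
import numpy as np

def sampling_probability(tumor_mask, target_voxel, rmse_mm, voxel_mm, n=200_000, seed=0):
    """Monte Carlo estimate of P = Prob(core hits tumor) for a point-like core."""
    rng = np.random.default_rng(seed)
    sigma = rmse_mm / np.sqrt(3.0)                           # per-axis std for an isotropic 3D RMSE
    jitter = rng.normal(0.0, sigma, size=(n, 3)) / np.asarray(voxel_mm, float)
    samples = np.round(np.asarray(target_voxel, float) + jitter).astype(int)
    inside = np.all((samples >= 0) & (samples < np.array(tumor_mask.shape)), axis=1)
    hits = tumor_mask[tuple(samples[inside].T)]              # boolean tumor mask lookups
    return hits.sum() / float(n)
```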

  3. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    Martin, Peter R., E-mail: pmarti46@uwo.ca [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Cool, Derek W. [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada and Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Romagnoli, Cesare [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Fenster, Aaron [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Ward, Aaron D. [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7 (Canada)

    2014-07-15

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each

  4. 3D Reconstruction of NMR Images

    Peter Izak; Milan Smetana; Libor Hargas; Miroslav Hrianka; Pavol Spanik

    2007-01-01

    This paper presents an experiment in the 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant tool, which is part of LabVIEW, was chosen.

  5. 3D ultrafast ultrasound imaging in vivo

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  6. 3D-printed guiding templates for improved osteosarcoma resection

    Ma, Limin; Zhou, Ye; Zhu, Ye; Lin, Zefeng; Wang, Yingjun; Zhang, Yu; Xia, Hong; Mao, Chuanbin

    2016-03-01

    Osteosarcoma resection is challenging due to the variable location of tumors and their proximity to surrounding tissues. It also carries a high risk of postoperative complications. To overcome the challenge of precise osteosarcoma resection, computer-aided design (CAD) was used to design patient-specific guiding templates for osteosarcoma resection on the basis of computed tomography (CT) scans and magnetic resonance imaging (MRI) of the osteosarcoma of human patients. 3D printing was then used to fabricate the guiding templates. The guiding templates were used to guide the osteosarcoma surgery, leading to more precise resection of the tumorous bone and implantation of the bone implants, less blood loss, shorter operation time and reduced radiation exposure during the operation. Follow-up studies show that the patients recovered well, reaching a mean Musculoskeletal Tumor Society score of 27.125.

  7. Dosimetric analysis of 3D image-guided HDR brachytherapy planning for the treatment of cervical cancer: is point A-based dose prescription still valid in image-guided brachytherapy?

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful

    2011-01-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and compare dose coverage of high-risk clinical target volume (HRCTV) to traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). Brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p IGBT in HDR cervical cancer treatment needs advanced concept of evaluation in dosimetry with clinical outcome data about whether this approach improves local control and/or decreases toxicities. PMID:20488690
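
    The dose summation in this kind of study rests on the linear-quadratic EQD2 conversion; a short sketch follows, where the external-beam schedule (45 Gy in 1.8 Gy fractions) and the α/β value are illustrative assumptions rather than figures taken from the paper.
```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equieffective dose in 2 Gy fractions from the linear-quadratic model."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# HRCTV example: 5 HDR fractions of 6 Gy (alpha/beta = 10 Gy assumed for tumor)
brachy_hrctv = eqd2(5 * 6.0, 6.0, 10.0)
# Assumed illustrative external-beam course: 45 Gy at 1.8 Gy per fraction
ebrt_hrctv = eqd2(45.0, 1.8, 10.0)
total_hrctv_eqd2 = brachy_hrctv + ebrt_hrctv        # compared against the 80-85 Gy planning aim
```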

  8. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro with focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal that apoptosis is necessary and sufficient for initiating lumen formation, while cell polarization is the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast cell line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non-growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicate that these acini grew faster than the cells comprising them. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  9. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J. [Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Bioinformatics, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Computer Science, Rutgers, State University of New Jersey, Piscataway, New Jersey 08854 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States)

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
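
    For the distance-based evaluation against benchmark contours, here is a minimal sketch of unsigned surface-distance statistics between two point sets using a k-d tree. It reports only unsigned quantities; the signed mean distance quoted in the abstract additionally needs surface normals, which are omitted here, and the function name is illustrative.
```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_stats(segmented_pts, benchmark_pts):
    """Mean absolute and RMS distance from segmented surface points (N, 3)
    to their nearest neighbours on the benchmark surface (M, 3)."""
    dists, _ = cKDTree(np.asarray(benchmark_pts, float)).query(np.asarray(segmented_pts, float))
    return float(dists.mean()), float(np.sqrt((dists ** 2).mean()))
```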

  10. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  11. 3D Reconstruction of NMR Images

    Peter Izak

    2007-01-01

    Full Text Available This paper presents an experiment in the 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant tool, which is part of LabVIEW, was chosen.

  12. Multiplane 3D superresolution optical fluctuation imaging

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
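
    The core SOFI computation can be illustrated in a few lines: the second-order auto-cumulant of a blinking-fluorophore image stack is simply the per-pixel temporal variance. This sketch omits the higher orders and the 3D cross-cumulants used in the paper, and the function name is illustrative.
```python
import numpy as np

def sofi_second_order(frames):
    """Second-order SOFI image from a stack of frames with shape (T, H, W)."""
    f = np.asarray(frames, dtype=float)
    return ((f - f.mean(axis=0)) ** 2).mean(axis=0)     # per-pixel 2nd cumulant (variance)
```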

  13. Nonlaser-based 3D surface imaging

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
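
    For the stereo-vision variant, the depth recovery itself reduces to the standard pinhole relation; a tiny sketch is given below (parameter names and the numerical example are illustrative only):
```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a matched point from rectified stereo: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# e.g. a 40-pixel disparity with f = 800 px and a 10 cm baseline gives Z = 2.0 m
```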

  14. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K. A.; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation while relatively little attention has concerned the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-β2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties. PMID:27022420

  15. Miniaturized 3D microscope imaging system

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. With the light-field raw data and software, the focal plane can be changed digitally and the 3D image can be reconstructed after the image is taken. To localize an object in a 3D volume, an automated data analysis algorithm that precisely distinguishes depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light-field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3D rendering. Furthermore, we have developed a series of design rules in order to enhance pixel usage efficiency and reduce the crosstalk between microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different color fluorescence particles separated by a cover glass in a 600 µm range, and show its focal stacks and 3D positions.

  16. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
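
    A toy sketch of why the paired views help: with an AP kV image and an orthogonal lateral MV image, each 2D displacement constrains two axes, and the cranio-caudal axis is seen by both. This is a simplified orthogonal-geometry illustration with illustrative names, not the intensity-based 6-DOF 2D/3D registration used in the study.
```python
import numpy as np

def fuse_kv_mv_shifts(kv_shift_mm, mv_shift_mm):
    """Combine 2D target shifts from an AP kV view (left-right, cranio-caudal)
    and a lateral MV view (anterior-posterior, cranio-caudal) into a 3D shift."""
    lr = kv_shift_mm[0]                              # only visible in the AP view
    ap = mv_shift_mm[0]                              # only visible in the lateral view
    cc = 0.5 * (kv_shift_mm[1] + mv_shift_mm[1])     # seen in both views, averaged
    return np.array([lr, ap, cc])
```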

  17. 3D Stereo Visualization for Mobile Robot Tele-Guide

    Livatino, Salvatore

    2006-01-01

    learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work...... intends to contribute to this aspect by investigating stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The purpose of this work is also to investigate how user performance may vary when employing different display......The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system...

  18. ICER-3D Hyperspectral Image Compression Software

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
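
    A hedged sketch of the kind of 3D wavelet decomposition ICER-3D builds on, using the PyWavelets package as a stand-in for the custom transform; the context modeling, entropy coding, and error-containment partitioning of ICER-3D are not shown, and the function name and thresholding rule are illustrative.
```python
import numpy as np
import pywt

def toy_3d_wavelet_compress(cube, wavelet="db2", level=2, keep_fraction=0.1):
    """Multilevel 3D DWT of a hyperspectral cube, crude coefficient thresholding,
    and reconstruction; illustrates fidelity vs. retained data in a progressive scheme."""
    coeffs = pywt.wavedecn(cube, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= threshold, arr, 0.0)      # keep only the largest coefficients
    return pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                         wavelet=wavelet)

# cube = np.random.rand(32, 64, 64)   # bands x rows x columns
# recon = toy_3d_wavelet_compress(cube)
```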

  19. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    Yakang Dai; Jian Zheng; Yuetao Yang; Duojie Kuai; Xiaodong Yang

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume render...

  20. Acquisition and applications of 3D images

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, applications include plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  1. 3D camera tracking from disparity images

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with matching features, we obtain disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from the Fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion. This is required because the camera motion obtained from the Essential matrix is only determined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and fine surveillance systems which need not only depth information but also camera motion parameters in real time.
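
    A compact OpenCV sketch of the pose-from-Essential-matrix step described above, assuming matched points and camera intrinsics are given; as in the abstract, the recovered translation is only determined up to scale, which is why a disparity/d-motion constraint is needed to fix it. Function and variable names outside the OpenCV calls are illustrative.
```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Rotation and unit-scale translation between two views from matched
    points (N, 2 float arrays) using RANSAC on the essential matrix."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```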

  2. Heat Equation to 3D Image Segmentation

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and object surface reconstruction. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and multiple 3D object reconstruction. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions, each containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the direction of the force. The penalty function is defined to stop the evolution of those surface patches whose normal vectors encounter the object's surface. On the basis of the theoretical model, a forward-difference algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
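
    The evolution driver in this method is the heat differential equation. A minimal explicit finite-difference sketch of plain heat diffusion on a 3D image shows the forward-difference scheme and its stability bound; the centripetal force and penalty terms of the actual method are not included, and the boundary handling (periodic, via np.roll) is chosen only to keep the sketch short.
```python
import numpy as np

def heat_diffuse(volume, steps=20, dt=1.0 / 6.0):
    """Explicit forward-difference heat equation on a 3D image.
    dt <= 1/6 is the stability condition for the 6-neighbour 3D Laplacian."""
    u = np.asarray(volume, dtype=float).copy()
    for _ in range(steps):
        lap = (-6.0 * u
               + np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)
               + np.roll(u, 1, axis=2) + np.roll(u, -1, axis=2))
        u = u + dt * lap
    return u
```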

  3. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    Cambridge : The Electromagnetics Academy, 2010, s. 1043-1046. ISBN 978-1-934142-14-1. [PIERS 2010 Cambridge. Cambridge (US), 05.07.2010-08.07.2010] R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : 3D reconstruction * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  4. Feasibility of 3D harmonic contrast imaging

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suit

  5. 3D Membrane Imaging and Porosity Visualization

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc, offering a complete view of the transport paths in the membrane.
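
    Once the 3D reconstruction is segmented into pore and material voxels, the layer-wise porosity evaluation is straightforward; a minimal NumPy sketch follows (the paper's pore-size algorithm is not reproduced, and the function name is illustrative).
```python
import numpy as np

def porosity_profile(pore_mask, axis=0):
    """Porosity (pore-voxel fraction) of each slice along `axis` of a binary
    3D reconstruction where True marks pore voxels."""
    other_axes = tuple(a for a in range(pore_mask.ndim) if a != axis)
    return pore_mask.mean(axis=other_axes)

# profile = porosity_profile(volume > threshold, axis=0)   # axis orthogonal to the membrane surface
```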

  6. 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system, that is able to create a 3-D reconstruction from two or more 2-D photographic images made from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of......, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and finally, different presentation forms are discussed....... treated individually. A detailed treatment of various lens distortions is required, in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  7. Backhoe 3D "gold standard" image

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  8. Metrological characterization of 3D imaging devices

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways because of the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason, several national and international organizations have in the last ten years been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper presents the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.

  9. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  10. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Peter Cloetens

    2011-05-01

    Full Text Available Three dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic beam micro-tomography provides several new features. Along with enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  11. 3D in Photoshop The Ultimate Guide for Creative Professionals

    Gee, Zorana

    2010-01-01

    This is the first book of its kind that shows you everything you need to know to create or integrate 3D into your designs using Photoshop CS5 Extended. If you are completely new to 3D, you'll find the great tips and tricks in 3D in Photoshop invaluable as you get started. There is also a wealth of detailed technical insight for those who want more. Written by the true experts - Adobe's own 3D team - and with contributions from some of the best and brightest digital artists working today, this reference guide will help you to create a comprehensive workflow that suits your specific needs. Along

  12. 3D Model Assisted Image Segmentation

    Jayawardena, Srimal; Hutter, Marcus

    2012-01-01

    The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation...

  13. Micromachined Ultrasonic Transducers for 3-D Imaging

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D … ultrasound imaging results in expensive systems, which limits the more wide-spread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost … capable of producing 62+62-element row-column addressed CMUT arrays with negligible charging issues. The arrays include an integrated apodization, which reduces the ghost echoes produced by the edge waves in such arrays by 15.8 dB. The acoustical cross-talk is measured on fabricated arrays, showing a 24 d...

  14. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Purpose: This study evaluated a new probabilistic non-rigid registration method, coherent point drift (CPD), for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The Dir-lab 4DCT image datasets (www.dir-lab.com) were used to create a 3D boundary element model of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices, and each vertex has three degrees of freedom. One of the main features of lung motion is velocity coherence, so the vertices forming the lung mesh should share this property of the lung structure: vertices that are close to each other tend to move coherently. In the next step, we implemented the coherent point drift probabilistic non-rigid registration method to calculate the nonlinear displacement of the vertices between different expiratory phases. Results: The method was applied to the images of 10 patients in the Dir-lab dataset. The normal distribution of the vertices relative to the origin was calculated for each expiratory stage. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vector and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for a distributed set of vertices in the lung mesh. In this technique, the velocity coherence of lung motion is included as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating the displacement vector and analyzing possible physiological and anatomical changes during treatment
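
    CPD treats one point set as the centroids of a Gaussian mixture and moves the other set toward it while forcing nearby points to move coherently. A minimal sketch of that registration step is shown below using the third-party pycpd package (the class name is pycpd's, not the authors' code); the vertex file names, the alpha/beta values, and the millimetre units are illustrative assumptions.

    ```python
    import numpy as np
    from pycpd import DeformableRegistration   # third-party CPD implementation

    # Hypothetical inputs: (N, 3) vertex arrays segmented from the T0 and T50
    # respiratory phases (segmentation and meshing are not shown here).
    verts_t0 = np.load("lung_T0_vertices.npy")
    verts_t50 = np.load("lung_T50_vertices.npy")

    # CPD moves the source set toward the target while penalising incoherent
    # motion of neighbouring points, matching the velocity-coherence assumption.
    reg = DeformableRegistration(X=verts_t50, Y=verts_t0, alpha=2.0, beta=2.0)
    warped_t0, params = reg.register()

    displacement = warped_t0 - verts_t0        # per-vertex displacement vectors (mm)
    print("max |displacement| per axis:", np.abs(displacement).max(axis=0))
    ```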

  15. Compact multi-projection 3D display system with light-guide projection.

    Lee, Chang-Kun; Park, Soon-gi; Moon, Seokil; Hong, Jong-Young; Lee, Byoungho

    2015-11-01

    We propose a compact multi-projection based multi-view 3D display system using an optical light-guide, and perform an analysis of the characteristics of the image for distortion compensation via an optically equivalent model of the light-guide. The projected image traveling through the light-guide experiences multiple total internal reflections at the interface. As a result, the projection distance in the horizontal direction is effectively reduced to the thickness of the light-guide, and the projection part of the multi-projection based multi-view 3D display system is minimized. In addition, we deduce an equivalent model of such a light-guide to simplify the analysis of the image distortion in the light-guide. From the equivalent model, the focus of the image is adjusted, and pre-distorted images for each projection unit are calculated by two-step image rectification in air and the material. The distortion-compensated view images are represented on the exit surface of the light-guide when the light-guide is located in the intended position. Viewing zones are generated by combining the light-guide projection system, a vertical diffuser, and a Fresnel lens. The feasibility of the proposed method is experimentally verified and a ten-view 3D display system with a minimized structure is implemented. PMID:26561163
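
    The pre-distortion step amounts to warping each view image with the inverse of the mapping that the light-guide introduces between displayed and observed positions. Below is a minimal, single-homography stand-in using OpenCV; the paper's two-step rectification in air and in the guide material is more elaborate, and the file name and corner coordinates here are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    view = cv2.imread("view_03.png")                    # hypothetical view image
    h, w = view.shape[:2]

    # Where the four corners of the displayed image are observed on the exit
    # surface of the light-guide (values here are illustrative measurements).
    displayed = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    observed = np.float32([[40, 25], [w - 30, 10], [w - 15, h - 20], [20, h - 35]])

    # Warp with the inverse of the observed mapping so that, after the light-guide
    # distorts it again, the view appears rectified on the exit surface.
    H = cv2.getPerspectiveTransform(observed, displayed)
    predistorted = cv2.warpPerspective(view, H, (w, h))
    cv2.imwrite("view_03_predistorted.png", predistorted)
    ```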

  16. Image-guided installation of 3D-printed patient-specific implant and its application in pelvic tumor resection and reconstruction surgery.

    Chen, Xiaojun; Xu, Lu; Wang, Yiping; Hao, Yongqiang; Wang, Liao

    2016-03-01

    Nowadays, the diagnosis and treatment of pelvic sarcoma pose a major surgical challenge for reconstruction in orthopedics. With the development of manufacturing technology, metal 3D-printed customized implants have revolutionized limb-salvage resection and reconstruction surgery. However, tumor resection is not without risk, and precise implant placement is very difficult due to the anatomic intricacies of the pelvis. In this study, a surgical navigation system including an implant calibration algorithm has been developed, so that the surgical instruments and the 3D-printed customized implant can be tracked and rendered on the computer screen in real time, minimizing the risks and improving the precision of the surgery. Both the phantom experiment and the pilot clinical case study demonstrated the feasibility of our computer-aided surgical navigation system. According to the accuracy evaluation experiment, the precision of customized implant installation can be improved three to five times (TRE: 0.75±0.18 mm) compared with non-navigated implant installation after the guided osteotomy (TRE: 3.13±1.28 mm), which is sufficient to meet the clinical requirements of pelvic reconstruction. However, more clinical trials will be conducted in future work to validate the reliability and efficiency of our navigation system. PMID:26652978
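
    The TRE figures above are, in essence, root-mean-square distances between where landmarks were planned and where they ended up after installation. A small helper of that form is sketched below; the landmark coordinates are illustrative, not data from the study.

    ```python
    import numpy as np

    def target_registration_error(planned, achieved):
        """Root-mean-square distance (mm) between corresponding landmarks."""
        planned = np.asarray(planned, float)
        achieved = np.asarray(achieved, float)
        d = np.linalg.norm(planned - achieved, axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    # Illustrative landmark sets (mm): planned positions vs. measured positions
    # on the installed implant (not data from the study).
    plan = [[10.0, 42.1, 5.3], [18.2, 40.0, 7.7], [25.4, 35.9, 6.1]]
    post = [[10.6, 42.8, 5.1], [18.9, 40.5, 8.2], [26.1, 36.3, 6.6]]
    print("TRE =", round(target_registration_error(plan, post), 2), "mm")
    ```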

  17. 3-D SAR image formation from sparse aperture data using 3-D target grids

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  18. 3D Buildings Extraction from Aerial Images

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach is that it allows each building to be processed individually, so the parameters for extracting building features can be chosen more precisely for each area. In the first stage, the presented technique extracts line segments only inside manually specified areas. The rooftop hypothesis is then used to determine a subset of quadrangles that could form building roofs from the set of lines and corners extracted in the previous stage. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This makes it possible to accurately select building roofs, removing false-positive ones, and to identify their global 3D coordinates given the camera's internal parameters and coordinates. The last step of the image matching is based on geometrical constraints rather than traditional correlation. Correlation is applied only in highly restricted areas in order to find coordinates more precisely, which significantly reduces the processing time of the algorithm. The algorithm has been tested on a set of aerial images of Milan and shows highly accurate results.

  19. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
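
    The reconstruction pipeline described here (feature matching, RANSAC estimation of the epipolar geometry, triangulation to projective 3D points) maps closely onto standard OpenCV calls. The sketch below follows that pattern with ORB standing in for SURF (SURF requires the non-free contrib build); the file names are illustrative, and the projective camera pair is derived from the fundamental matrix in the usual Hartley-Zisserman way rather than taken from the paper.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Feature extraction and matching (ORB stands in for SURF, which needs the
    # non-free OpenCV contrib build).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # RANSAC estimation of the epipolar geometry rejects unstable matches.
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
    in1, in2 = p1[mask.ravel() == 1], p2[mask.ravel() == 1]

    # Without calibration the reconstruction is only projective: P1 is canonical
    # and P2 is built from F and the epipole e' (Hartley & Zisserman, Ch. 9).
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    e2x = np.array([[0, -e2[2], e2[1]], [e2[2], 0, -e2[0]], [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])

    pts4d = cv2.triangulatePoints(P1, P2, in1.T, in2.T)
    scene = (pts4d[:3] / pts4d[3]).T               # projective 3D scene points
    print(scene.shape[0], "reconstructed points")
    ```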

  20. Photogrammetric 3D reconstruction using mobile imaging

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  1. Helical CT scanner - 3D imaging and CT fluoroscopy

    It has been over twenty years since the introduction of X-ray CT. In recent years, the topic of helical scanning has dominated technical development. With helical scanning now being used routinely, the traditional concept of X-ray CT as a device for obtaining axial images of the body in slices has given way to that of a device for obtaining images in volumes. For instance, the ability of helical scanning to acquire sequential images along the body axis makes it ideal for creating three-dimensional (3-D) images, and has in fact led to the use of 3-D images in clinical practice. In addition, with helical scanning, imaging of organs such as the liver or lung can be performed in several tens of seconds, as opposed to the few minutes it used to take. This has resulted not only in less time during which the patient must remain constrained for imaging but also in changes to diagnostic methods. The question 'Would it be possible to perform reconstruction while scanning and to see the resulting images in real time?' is another issue that has been taken up, and it has been answered by CT fluoroscopy. It makes it possible to see CT images in real time during sequential scanning, and from this development, applications such as CT-guided biopsy and CT-navigated surgery have been investigated and realized. There are other possibilities for creating a whole new series of diagnostic methods and results. (author)

  2. 3D object-oriented image analysis in 3D geophysical modelling

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects …) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA …) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D …

  3. 3D Guided Wave Motion Analysis on Laminated Composites

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed in the end.
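
    Frequency-wavenumber analysis of a wavefield is, at its core, a 2D Fourier transform over time and one spatial coordinate, after which individual guided-wave modes show up as separate ridges. The sketch below runs that transform on a synthetic single-mode wavefield; the sampling rates, tone frequency, and phase velocity are illustrative assumptions, not values from the EFIT simulations.

    ```python
    import numpy as np

    # u(t, x): out-of-plane response sampled along one line of the laminate, as
    # could be extracted from an EFIT wavefield (a synthetic single mode is used).
    dt, dx = 1e-7, 1e-3                        # 10 MHz temporal, 1 mm spatial sampling
    t = np.arange(2000) * dt
    x = np.arange(256) * dx
    f0, c = 300e3, 1500.0                      # assumed excitation frequency and phase velocity
    u = np.sin(2 * np.pi * f0 * (t[:, None] - x[None, :] / c))

    # The 2D FFT of the time-space data gives the frequency-wavenumber spectrum;
    # each guided-wave mode appears as a separate ridge in |U(f, k)|.
    U = np.fft.fftshift(np.fft.fft2(u))
    freqs = np.fft.fftshift(np.fft.fftfreq(u.shape[0], dt))
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(u.shape[1], dx))

    fi, ki = np.unravel_index(np.argmax(np.abs(U)), U.shape)
    print("dominant ridge: f = %.0f Hz, k = %.1f 1/m" % (freqs[fi], wavenumbers[ki]))
    ```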

  4. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development world wide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still need further research. Researc...

  5. Handbook of 3D machine vision optical metrology and imaging

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  6. Progress in 3D imaging and display by integral imaging

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been to carry out a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  7. Perception of detail in 3D images

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  8. Performance assessment of 3D surface imaging technique for medical imaging applications

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitalize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic approach for assessing the performance of 3D surface imaging systems for medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  9. 3D Image Synthesis for B-Reps Objects

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-rep objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward-difference direction and step size can be adjusted. Finally, an efficient algorithm based on the AFD-matrix concept is presented for converting an object in 3D space into a 3D image in 3D discrete space.

  10. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables simultaneous acquisition of the spectral information and the 3D spatial information of an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of the spectral resolution and the 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  11. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
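
    Once a handful of photometrically similar stereopairs has been retrieved, the core operations are a per-pixel median over their disparity fields and a disparity-based warp of the query to synthesise the right view. The sketch below shows both steps in plain numpy with a very crude hole-filling pass for newly exposed areas; it is an illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    def fuse_disparities(disparity_stack):
        """Per-pixel median over candidate disparity fields, shape (K, H, W) -> (H, W)."""
        return np.median(np.asarray(disparity_stack, float), axis=0)

    def render_right_view(left, disparity):
        """Shift each left-image pixel by its disparity to synthesise the right view;
        a crude nearest-left fill handles occlusions and newly exposed areas."""
        h, w = disparity.shape
        right = np.zeros_like(left)
        filled = np.zeros((h, w), bool)
        xs = np.arange(w)
        for y in range(h):
            tx = np.clip((xs - disparity[y]).round().astype(int), 0, w - 1)
            right[y, tx] = left[y, xs]
            filled[y, tx] = True
            holes = ~filled[y]
            if holes.any():
                nearest = np.maximum.accumulate(np.where(filled[y], xs, 0))
                right[y, holes] = right[y, nearest[holes]]
        return right
    ```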

  12. iClone 4.31 3D Animation Beginner's Guide

    McCallum, MD

    2011-01-01

    This book is a part of the Beginner's Guide series, wherein you will quickly start doing tasks with precise instructions. Each task is followed by an explanation and then a challenging task or a multiple-choice question about the topic just covered. Do you have a story to tell or an idea to illustrate? This book is aimed at film makers, video producers/compositors, vfx artists or 3D artists/designers like you who have no previous experience with iClone. If you have that drive inside you to entertain people via the internet on sites like YouTube or Vimeo, create a superb presentation vid

  13. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  14. Computer-based image analysis in radiological diagnostics and image-guided therapy 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    Beier, J

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with a particular emphasis on pulmonary themes. For a multitude of purposes the developed methods and procedures can be transferred directly to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the softw...

  15. A 3D image analysis tool for SPECT imaging

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
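
    The intensity-based part of such a pipeline (threshold, keep the main connected component, convert voxel counts to a volume) is compact to express. The sketch below is a generic illustration with numpy and scipy.ndimage; the threshold fraction, voxel sizes, and function name are assumptions, not parameters taken from the described tools.

    ```python
    import numpy as np
    from scipy import ndimage

    def gastric_volume(spect, voxel_size_mm, threshold_fraction=0.4):
        """Threshold-based segmentation of a 3D SPECT volume and volume measurement.

        spect              : 3D array of counts
        voxel_size_mm      : (dz, dy, dx) voxel dimensions in millimetres
        threshold_fraction : threshold expressed as a fraction of the maximum counts"""
        mask = spect >= threshold_fraction * spect.max()
        labels, n = ndimage.label(mask)              # keep only the largest component
        if n == 0:
            return 0.0, mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        largest = labels == (1 + int(np.argmax(sizes)))
        voxel_ml = np.prod(voxel_size_mm) / 1000.0   # mm^3 -> millilitres
        return float(largest.sum()) * voxel_ml, largest
    ```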

  16. Light field display and 3D image reconstruction

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
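
    Refocusing from light field data is often illustrated with the shift-and-add method: each sub-aperture view is shifted in proportion to its offset from the central view and the views are averaged, which synthesises focus at a chosen depth. The sketch below shows that operation as a generic illustration, not the paper's real-domain algorithm; the (U, V, H, W) layout and the alpha parameterisation are assumptions.

    ```python
    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add refocusing of a decoded light field.

        light_field : array (U, V, H, W) of sub-aperture images (grayscale)
        alpha       : relative focal depth; 1.0 reproduces the captured focal plane"""
        U, V, H, W = light_field.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W), float)
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its offset from the central view,
                # then average all views to synthesise focus at the chosen depth.
                dy = int(round((u - cu) * (1.0 - 1.0 / alpha)))
                dx = int(round((v - cv) * (1.0 - 1.0 / alpha)))
                out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)
    ```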

  17. 3D Imaging with Structured Illumination for Advanced Security Applications

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  18. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas; Bai, Li

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of the reconstructed 3D volume, first, intensity variations in the images are corrected by an intensity standardization process which maps the image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized...
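
    Intensity standardization of a slice stack can be approximated with per-slice histogram matching onto a common reference, so that similar intensities end up denoting similar tissues before registration. The sketch below uses scikit-image's match_histograms for that purpose; it is a generic stand-in, not the standardization procedure of the paper.

    ```python
    import numpy as np
    from skimage.exposure import match_histograms

    def standardize_stack(slices):
        """Map every slice onto the intensity scale of the first slice.

        slices: list of 2D grayscale arrays (the histological sections)."""
        reference = slices[0]
        return [reference] + [match_histograms(s, reference) for s in slices[1:]]

    # Illustrative use on synthetic slices with drifting brightness.
    rng = np.random.default_rng(1)
    stack = [rng.normal(100 + 10 * k, 20, size=(64, 64)) for k in range(5)]
    standardized = standardize_stack(stack)
    print([round(float(s.mean()), 1) for s in standardized])
    ```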

  19. 3D augmented reality with integral imaging display

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  20. 3D Interpolation Method for CT Images of the Lung

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. If no special techniques are used in taking the CT images, there are discontinuities among neighboring CT images due to the beating of the heart. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are captured. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed into the optimal CT images that best fit the standard heart. Since correct transformation of the images is required, an area-oriented interpolation method that we proposed is used for the interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.

  1. Prospective comparison of T2w-MRI and dynamic-contrast-enhanced MRI, 3D-MR spectroscopic imaging or diffusion-weighted MRI in repeat TRUS-guided biopsies

    Portalez, Daniel [Clinique Pasteur, 45, Department of Radiology, Toulouse (France); Rollin, Gautier; Mouly, Patrick; Jonca, Frederic; Malavaud, Bernard [Hopital de Rangueil, Department of Urology, Toulouse Cedex 9 (France); Leandri, Pierre [Clinique Saint Jean, 20, Department of Urology, Toulouse (France); Elman, Benjamin [Clinique Pasteur, 45, Department of Urology, Toulouse (France)

    2010-12-15

    To compare T2-weighted MRI and functional MRI techniques in guiding repeat prostate biopsies. Sixty-eight patients with a history of negative biopsies, negative digital rectal examination and elevated PSA were imaged before repeat biopsies. Dichotomous criteria were used with visual validation of T2-weighted MRI, dynamic contrast-enhanced MRI and literature-derived cut-offs for 3D-spectroscopy MRI (choline-creatine-to-citrate ratio >0.86) and diffusion-weighted imaging (ADC x 10³ mm²/s < 1.24). For each segment and MRI technique, results were rendered as being suspicious/non-suspicious for malignancy. Sextant biopsies, transition zone biopsies and at least two additional biopsies of suspicious areas were taken. In the peripheral zones, 105/408 segments and in the transition zones 19/136 segments were suspicious according to at least one MRI technique. A total of 28/68 (41.2%) patients were found to have cancer. Diffusion-weighted imaging exhibited the highest positive predictive value (0.52) compared with T2-weighted MRI (0.29), dynamic contrast-enhanced MRI (0.33) and 3D-spectroscopy MRI (0.25). Logistic regression showed the probability of cancer in a segment increasing 12-fold when T2-weighted and diffusion-weighted imaging MRI were both suspicious (63.4%) compared with both being non-suspicious (5.2%). The proposed system of analysis and reporting could prove clinically relevant in the decision whether to repeat targeted biopsies. (orig.)

  2. Prospective comparison of T2w-MRI and dynamic-contrast-enhanced MRI, 3D-MR spectroscopic imaging or diffusion-weighted MRI in repeat TRUS-guided biopsies

    To compare T2-weighted MRI and functional MRI techniques in guiding repeat prostate biopsies. Sixty-eight patients with a history of negative biopsies, negative digital rectal examination and elevated PSA were imaged before repeat biopsies. Dichotomous criteria were used with visual validation of T2-weighted MRI, dynamic contrast-enhanced MRI and literature-derived cut-offs for 3D-spectroscopy MRI (choline-creatine-to-citrate ratio >0.86) and diffusion-weighted imaging (ADC x 103 mm2/s < 1.24). For each segment and MRI technique, results were rendered as being suspicious/non-suspicious for malignancy. Sextant biopsies, transition zone biopsies and at least two additional biopsies of suspicious areas were taken. In the peripheral zones, 105/408 segments and in the transition zones 19/136 segments were suspicious according to at least one MRI technique. A total of 28/68 (41.2%) patients were found to have cancer. Diffusion-weighted imaging exhibited the highest positive predictive value (0.52) compared with T2-weighted MRI (0.29), dynamic contrast-enhanced MRI (0.33) and 3D-spectroscopy MRI (0.25). Logistic regression showed the probability of cancer in a segment increasing 12-fold when T2-weighted and diffusion-weighted imaging MRI were both suspicious (63.4%) compared with both being non-suspicious (5.2%). The proposed system of analysis and reporting could prove clinically relevant in the decision whether to repeat targeted biopsies. (orig.)

  3. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume; Dufait, Remi; Jensen, Jørgen Arendt

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. …

  4. Preliminary examples of 3D vector flow imaging

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev;

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental … ultrasound scanner SARUS on a flow-rig system with steady flow. The vessel of the flow rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal … acquisition as opposed to magnetic resonance imaging (MRI). The results demonstrate that the 3D TO method is capable of performing 3D vector flow imaging. …

  5. Highway 3D model from image and lidar data

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  6. Diffractive optical element for creating visual 3D images.

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D-to-3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to verify visually, well protected against counterfeiting, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  7. 3-D capacitance density imaging system

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  8. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  9. 3D-LSI technology for image sensor

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  10. 3D Reconstruction in Magnetic Resonance Imaging

    Mikulka, J.; Bartušek, Karel

    2010-01-01

    Vol. 6, No. 7 (2010), pp. 617-620. ISSN 1931-7360. R&D Projects: GA ČR GA102/09/0314. Institutional research plan: CEZ:AV0Z20650511. Keywords: reconstruction methods; magnetic resonance imaging. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  11. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
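
    The spatial alignment step is a rigid, GMM-based point set registration of the two coronary centerline models. The sketch below shows plain rigid CPD via the third-party pycpd package as a stand-in; the paper's extension additionally includes point orientations in the mixture components and weights bifurcation points, and the centerline file names here are hypothetical.

    ```python
    import numpy as np
    from pycpd import RigidRegistration   # third-party GMM-based point set registration

    # Hypothetical inputs: centerline samples of the coronary tree from CTA and
    # from the biplane XA reconstruction, already temporally aligned via ECG.
    cta_points = np.load("cta_centerline.npy")     # (N, 3)
    xa_points = np.load("xa_centerline.npy")       # (M, 3)

    # Plain rigid CPD; robustness to initialization comes from the probabilistic
    # (soft-assignment) matching rather than from closest-point correspondences.
    reg = RigidRegistration(X=xa_points, Y=cta_points)
    aligned_cta, (scale, R, t) = reg.register()

    print("rotation:\n", R)
    print("translation:", t)
    ```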

  12. Acoustic 3D imaging of dental structures

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional electodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array material to find the best match in power, sensitivity and cost and settled on PVDF sheet arrays and 3-1 composite material.

  13. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction remains a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. The passive methods use information contained in the images, and the active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  14. Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation

    Current transperineal prostate brachytherapy uses transrectal ultrasound (TRUS) guidance and a template at a fixed position to guide needles along parallel trajectories. However, pubic arch interference (PAI) with the implant path obstructs part of the prostate from being targeted by the brachytherapy needles along parallel trajectories. To solve the PAI problem, some investigators have explored insertion trajectories other than parallel, i.e., oblique. However, the parallel trajectory constraints in the current brachytherapy procedure do not allow oblique insertion. In this paper, we describe a robot-assisted, three-dimensional (3D) TRUS guided approach to solve this problem. Our prototype consists of a commercial robot and a 3D TRUS imaging system including an ultrasound machine, image acquisition apparatus, and 3D TRUS image reconstruction and display software. In our approach, we use the robot as a movable needle guide, i.e., the robot positions the needle before insertion, but the physician inserts the needle into the patient's prostate. In a later phase of our work, we will include robot insertion. By unifying the robot, ultrasound transducer, and 3D TRUS image coordinate systems, the position of the template hole can be accurately related to the 3D TRUS image coordinate system, allowing accurate and consistent insertion of the needle via the template hole into the targeted position in the prostate. The unification of the various coordinate systems includes two steps, i.e., 3D image calibration and robot calibration. Our testing of the system showed that the needle placement accuracy of the robot system at the 'patient's' skin position was 0.15 mm±0.06 mm, and the mean needle angulation error was 0.07 deg. The fiducial localization error (FLE) in localizing the intersections of the nylon strings for image calibration was 0.13 mm, and the FLE in localizing the divots for robot calibration was 0.37 mm. The fiducial registration error for image calibration was 0
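
    Unifying the image, transducer, and robot coordinate systems from corresponding fiducials is a least-squares rigid landmark registration, and the fiducial registration error quoted above is the residual of that fit. Below is a generic Kabsch-style sketch with a made-up set of fiducial coordinates for illustration; it is not the calibration code used in the study.

    ```python
    import numpy as np

    def rigid_landmark_transform(source, target):
        """Least-squares rigid transform (R, t) mapping source fiducials onto target
        fiducials (Kabsch, no scaling), plus the fiducial registration error (FRE)."""
        src, tgt = np.asarray(source, float), np.asarray(target, float)
        src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt.mean(0) - R @ src.mean(0)
        fre = float(np.sqrt(np.mean(np.sum((src @ R.T + t - tgt) ** 2, axis=1))))
        return R, t, fre

    # Illustrative fiducials: string-phantom intersections localized in the 3D TRUS
    # image (mm) vs. their nominal coordinates in the template/robot frame (mm).
    trus = [[10.2, 4.1, 30.5], [22.7, 4.0, 30.9], [10.4, 18.3, 31.2], [22.9, 18.5, 31.6]]
    robot = [[0.0, 0.0, 0.0], [12.5, 0.0, 0.0], [0.0, 14.2, 0.0], [12.5, 14.2, 0.0]]
    R, t, fre = rigid_landmark_transform(trus, robot)
    print("FRE = %.2f mm" % fre)
    ```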

  15. A 3D Model Reconstruction Method Using Slice Images

    LI Hong-an; KANG Bao-sheng

    2013-01-01

    Aiming at achieving a high-accuracy 3D model from slice images, a new model reconstruction method using slice images is proposed. To extract the outermost contours from the slice images, an improved GVF-Snake model with an optimized force field is employed together with a ray method. The 3D model is then reconstructed by contour connection using the improved shortest-diagonal method and a judgment function for contour fracture. The results show that the accuracy of the reconstructed 3D model is improved.

  16. 3D Motion Parameters Determination Based on Binocular Sequence Images

    2006-01-01

    Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular image sequences are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondences, and resolving the motion parameters. Finally, experimental results are presented for acquiring, with the described method, the motion parameters of objects moving in a straight line with uniform velocity and with uniform acceleration, based on real binocular image sequences.

  17. Morphometrics, 3D Imaging, and Craniofacial Development.

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  18. The Essential Guide to 3D in Flash

    Olsson, Ronald A

    2010-01-01

    If you are an ActionScript developer or designer and you would like to work with 3D in Flash, this book is for you. You will learn the core Flash 3D concepts, using the open source Away3D engine as a primary tool. Once you have mastered these skills, you will be able to realize the possibilities that the available Flash 3D engines, languages, and technologies have to offer you with Flash and 3D.* Describes 3D concepts in theory and their implementation using Away3D* Dives right in to show readers how to quickly create an interactive, animated 3D scene, and builds on that experience throughout

  19. Software for 3D diagnostic image reconstruction and analysis

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software prepared in our laboratories devoted to 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. The results of the anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at

  20. BM3D Frames and Variational Image Deblurring

    Danielyan, Aram; Egiazarian, Karen

    2011-01-01

    A family of the Block Matching 3-D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of the single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming a valuable potential of BM3D-frames as an advanced image modeling tool.
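
    The decoupled formulation can be conveyed with a generic alternation of a data-fidelity (deblurring) step and a denoising step. The sketch below is only a structural illustration under simplifying assumptions: the blur is a known Gaussian (so H equals its transpose) and a plain Gaussian filter stands in for the patch-based (BM3D-frames) denoiser; it is not the algorithm of the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def decoupled_deblur(y, blur_sigma, n_iter=50, step=1.0, denoise_sigma=0.7):
          """Alternate a gradient step on ||y - H x||^2 with a denoising step.

          y: blurred, noisy image (float array); H is a Gaussian blur of width
          blur_sigma. The denoiser here is a placeholder for a stronger prior.
          """
          H = lambda img: gaussian_filter(img, blur_sigma)
          x = y.copy()
          for _ in range(n_iter):
              x = x + step * H(y - H(x))             # deblurring / data-fidelity step
              x = gaussian_filter(x, denoise_sigma)  # denoising step, decoupled from the above
          return x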

  1. Image based 3D city modeling : Comparative study

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used for generating virtual 3D city models: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages offer different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no such complete comparative study is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. The study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters and factors, reports on practical experience, and gives a brief introduction to the strengths and weaknesses of each of the four image-based techniques, together with comments on what each software package can and cannot do. The study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For a 3D documentation record, Photomodeler gives good results. For Large city

  2. 3D imaging of aortic aneurysma using spiral CT

    The use of 3D reconstructions (3D display technique and maximum intensity projection) in spiral CT for the diagnostic evaluation of aortic aneurysms is explained. Twelve aneurysms of the abdominal and thoracic aorta (10 cases of aneurysma verum, 2 cases of aneurysma dissecans) were selected to verify the value of 3D images in comparison with transverse CT displays. The 3D reconstructions from spiral CT, unlike projection angiography, give insight into the vessel from various points of view. Such information is helpful for quickly gaining an impression of the volume and contours of a pathological process in the vessel. 3D post-processing of the data is advisable if the comparison of tomograms and projection images produces findings of unclear definition which need clarification prior to surgery. (orig.)

  3. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Dionysis Goularas

    2007-01-01

    In this article, we present a system for 3D treatment of dental plaster casts in orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that handles contours with complex topologies. Secondly, we present two specific treatments performed directly on the obtained 3D model: automatic correction of the occlusion between the mandible and the maxilla, and tooth segmentation allowing more specific dental examinations. Finally, these treatments are made available via a client/server application with the aim of enabling telediagnosis and treatment.

  4. Optical 3D watermark based digital image watermarking for telemedicine

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The proposed algorithm therefore applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D watermark based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed with the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so the method overcomes the weakness of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with experimental results.

  5. Fully Automatic 3D Reconstruction of Histological Images

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
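
    One ingredient, choosing a reference slice from per-slice image entropy, can be sketched directly; the selection rule below (the slice whose entropy is closest to the stack median) is illustrative only, since the paper couples entropy with the registration mean square error in an iterative assessment.

      import numpy as np

      def slice_entropy(img, bins=64):
          """Shannon entropy of an image's grey-level histogram."""
          hist, _ = np.histogram(img, bins=bins)
          p = hist[hist > 0].astype(float)
          p /= p.sum()
          return -np.sum(p * np.log2(p))

      def pick_reference_slice(stack):
          """Index of the slice whose entropy is closest to the stack's median entropy."""
          ent = np.array([slice_entropy(s) for s in stack])   # stack: (n_slices, H, W)
          return int(np.argmin(np.abs(ent - np.median(ent))))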

  6. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After acquisition, the images are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama; these parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  7. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume;

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. For both techniques this results in a frame rate of 18 Hz. The implemented synthetic aperture technique improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels...

  8. C-arm CT-guided 3D navigation of percutaneous interventions

    So far, C-arm CT images have predominantly been used for precise guidance of endovascular or intra-arterial therapy. A novel combined 3D-navigation C-arm system now also allows cross-sectional and fluoroscopy-controlled interventions. Studies have reported successful CT-image guided navigation with C-arm systems in vertebroplasty. Insertion of a radiofrequency ablation probe is also conceivable for lung and liver tumors that have been labelled with lipiodol. In the future, C-arm CT based navigation systems will probably allow simplified and safer complex interventions while simultaneously reducing radiation exposure. (orig.)

  9. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging and Row-column Addressing of 2-D Transducer Arrays

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinic...

  10. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    Due to lack of imaging modalities to identify prostate cancer in vivo, current TRUS guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure, the segmentation of the prostate gland from a patient's TRUS image followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphical processing unit (GPU) to meet the critical processing speed requirements for atlas guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols to maximize the overall detection rate. Using a GPU, patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87% respectively when validated on the same set of data. Whereas the sextant biopsy approach without the utility of 3D cancer atlas detected only 70.5% of the cancers using the same histology data. We estimate 10-20% increase in prostate cancer detection rates

  11. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    Narayanan, R; Suri, J S [Eigen Inc, Grass Valley, CA (United States); Werahera, P N; Barqawi, A; Crawford, E D [University of Colorado, Denver, CO (United States); Shinohara, K [University of California, San Francisco, CA (United States); Simoneau, A R [University of California, Irvine, CA (United States)], E-mail: jas.suri@eigen.com

    2008-10-21

    Due to lack of imaging modalities to identify prostate cancer in vivo, current TRUS guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure, the segmentation of the prostate gland from a patient's TRUS image followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphical processing unit (GPU) to meet the critical processing speed requirements for atlas guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols to maximize the overall detection rate. Using a GPU, patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87% respectively when validated on the same set of data. Whereas the sextant biopsy approach without the utility of 3D cancer atlas detected only 70.5% of the cancers using the same histology data. We estimate 10-20% increase in prostate cancer

  12. Recovering 3D human pose from monocular images

    Agarwal, Ankur; Triggs, Bill

    2006-01-01

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We eva...
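
    The regression step can be sketched with any generic nonlinear regressor mapping silhouette shape descriptors to pose parameters. The kernel ridge regressor and the random placeholder data below are assumptions for illustration; they are not the descriptors, training corpus, or regressor used in the paper.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      # Placeholder data: 500 silhouettes described by 100-D histogram-of-shape-contexts
      # vectors, each labelled with a 54-D pose vector (e.g. 3 angles for 18 joints).
      rng = np.random.default_rng(0)
      X_train = rng.random((500, 100))
      Y_train = rng.random((500, 54))

      # Nonlinear (RBF-kernel) regression from descriptor space to pose space.
      reg = KernelRidge(kernel="rbf", gamma=0.1, alpha=1e-2)
      reg.fit(X_train, Y_train)

      # Predict the pose of a new silhouette from its descriptor vector.
      pose = reg.predict(rng.random((1, 100)))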

  13. Development of a 3D ultrasound-guided prostate biopsy system

    Cool, Derek; Sherebrin, Shi; Izawa, Jonathan; Fenster, Aaron

    2007-03-01

    Biopsy of the prostate using ultrasound guidance is the clinical gold standard for diagnosis of prostate adenocarcinoma. However, because early stage tumors are rarely visible under US, the procedure carries high false-negative rates and patients often require multiple biopsies before cancer is detected. To improve cancer detection, it is imperative that, throughout the biopsy procedure, physicians know where they are within the prostate and where they have sampled during prior biopsies. The current biopsy procedure is limited to using only 2D ultrasound images to find and record target biopsy core sample sites. This information leaves ambiguity as the physician tries to interpret the 2D information and apply it to the 3D workspace. We have developed a 3D ultrasound-guided prostate biopsy system that provides 3D intra-biopsy information to physicians for needle guidance and biopsy location recording. The system is designed to conform to the workflow of the current prostate biopsy procedure, making it easier to integrate clinically. In this paper, we describe the system design and validate its accuracy by performing an in vitro biopsy procedure on US/CT multi-modal patient-specific prostate phantoms. A clinical sextant biopsy was performed by a urologist on the phantoms, and the 3D models of the prostates were generated with volume errors of less than 4% and mean boundary errors of less than 1 mm. Using the 3D biopsy system, needles were guided to within 1.36 ± 0.83 mm of 3D targets, and the positions of the biopsy sites were localized to within 1.06 ± 0.89 mm for the two prostates.

  14. 3D Medical Image Segmentation Based on Rough Set Theory

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of the paper is how to approximate a ROI (region of interest) when multiple types of expert knowledge are available. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple sources of knowledge, we refine the ROI as the intersection of all of the shapes expected from each single source of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
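
    With binary ROI masks from several experts, the three regions can be computed directly: the lower approximation (positive region) is the intersection of all masks, the upper approximation is their union, and the boundary is their difference. A minimal sketch, assuming boolean masks of equal shape:

      import numpy as np

      def rough_regions(masks):
          """Positive / boundary / negative regions from a list of boolean expert masks."""
          stack = np.stack(masks, axis=0)
          lower = stack.all(axis=0)          # intersection: voxels every expert includes
          upper = stack.any(axis=0)          # union: voxels at least one expert includes
          return lower, upper & ~lower, ~upper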

  15. 3D Image Display Courses for Information Media Students.

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  16. A near field 3D radar imaging technique

    Broquetas Ibars, Antoni

    1993-01-01

    The paper presents an algorithm which recovers a 3D reflectivity image of a target from near-field scattering measurements. Spherical wave nearfield illumination is used, in order to avoid a costly compact range installation to produce a plane wave illumination. The system is described and some simulated 3D reconstructions are included. The paper also presents a first experimental validation of this technique.

  17. Hybrid segmentation framework for 3D medical image analysis

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in an image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation in which we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces, and the deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. These methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  18. Investigation of the feasibility for 3D synthetic aperture imaging

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    This paper investigates the feasibility of implementing real-time synthetic aperture 3D imaging on the experimental system developed at the Center for Fast Ultrasound Imaging using a 2D transducer array. The target array is a fully populated 32 × 32 3 MHz array with a half wavelength pitch. The...

  19. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: A practical and technical review and guide

    The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statements with evidence-based and pragmatic guidelines on topics of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It provides an overview and the current standing of 3DCT-IGRT systems, addressing the rationale, objectives, principles, applications, and process pathways, both clinical and technical, for treatment delivery and quality assurance. These are reviewed for four categories of solutions: kV CT and kV CBCT (cone-beam CT), as well as MV CT and MV CBCT. It also provides a framework and checklist for considering the capability and functionality of these systems as well as the resources needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these centres demonstrates a wide variability based on local practices. This report, whilst comprehensive, is not exhaustive, as this remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the field, focussed on clinicians, physicists and radiation therapy technologists interested in IGRT.

  20. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    2001-01-01

    An airborne 3D imaging system which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner has been developed successfully. The spectral scanner and SLR use the same optical system, which ensures that each laser point matches a pixel seamlessly. The distinctive advantage of the 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with suitable software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of the 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing the DSM and mosaicing strips. The principle of the 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirements of quasi-real-time applications.
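
    The step "calculating the positions of laser sample points" amounts to direct georeferencing: each range measurement is rotated by the attitude (roll, pitch, yaw) from the AMU and added to the GPS position. The sketch below is a simplified illustration that ignores lever-arm and boresight calibration and assumes a z-up local level frame; it is not the authors' processing code.

      import numpy as np

      def rotation_matrix(roll, pitch, yaw):
          """Body-to-local-level rotation from roll, pitch, yaw (radians), Z-Y-X order."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          return Rz @ Ry @ Rx

      def laser_point(gps_pos, roll, pitch, yaw, rng, scan_angle):
          """Ground position of one laser return.

          gps_pos: sensor position in a local level frame (m); rng: measured range (m);
          scan_angle: cross-track beam angle (radians). Calibration offsets are omitted.
          """
          beam_body = np.array([0.0, np.sin(scan_angle), -np.cos(scan_angle)])  # unit vector, nadir-looking
          return np.asarray(gps_pos) + rotation_matrix(roll, pitch, yaw) @ (rng * beam_body)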

  1. Impulse Turbine with 3D Guide Vanes for Wave Energy Conversion

    Manabu TAKAO; Toshiaki SETOGUCHI; Kenji KANEKO; Shuichi NAGATA

    2006-01-01

    In this study, in order to further improve the performance of an impulse turbine with fixed guide vanes for wave energy conversion, the effect of guide vane shape on the performance was investigated experimentally. The investigation was performed by model testing under steady flow conditions. As a result, it was found that the efficiency of the turbine with 3D guide vanes is slightly superior to that of the turbine with 2D guide vanes because of the increase in torque provided by the 3D guide vanes, though the pressure drop across the turbine for the 3D case is slightly higher than that for the 2D case.

  2. 3D Tongue Motion from Tagged and Cine MR Images

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z.; Lee, Junghoon; Stone, Maureen; Prince, Jerry L.

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information...

  3. A compact mechatronic system for 3D ultrasound guided prostate interventions

    Purpose: Ultrasound imaging has improved the treatment of prostate cancer by producing increasingly higher quality images and influencing sophisticated targeting procedures for the insertion of radioactive seeds during brachytherapy. However, it is critical that the needles be placed accurately within the prostate to deliver the therapy to the planned location and avoid complications of damaging surrounding tissues. Methods: The authors have developed a compact mechatronic system, as well as an effective method for guiding and controlling the insertion of transperineal needles into the prostate. This system has been designed to allow guidance of a needle obliquely in 3D space into the prostate, thereby reducing pubic arch interference. The choice of needle trajectory and location in the prostate can be adjusted manually or with computer control. Results: To validate the system, a series of experiments were performed on phantoms. The 3D scan of the string phantom produced minimal geometric error, which was less than 0.4 mm. Needle guidance accuracy tests in agar prostate phantoms showed that the mean error of bead placement was less than 1.6 mm along parallel needle paths that were within 1.2 mm of the intended target and 1 deg. from the preplanned trajectory. At oblique angles of up to 15 deg. relative to the probe axis, beads were placed to within 3.0 mm along trajectories that were within 2.0 mm of the target with an angular error of less than 2 deg. Conclusions: By coupling a 3D TRUS imaging system to a needle-tracking linkage, this system should improve the physician's ability to target and accurately guide a needle to selected targets without the need for the computer to directly manipulate and insert the needle. This is beneficial as the physician has complete control of the system and can safely maneuver the needle guide around obstacles such as previously placed needles.

  4. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered, as sketched in the example below. After taking images with a camera, corresponding points must be detected in each pair of views; here the SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved in such a reconstruction, so the results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multi-view Euclidean reconstruction is applied and discussed. To refine the 3D points, a more general and useful approach, namely bundle adjustment, is used. Finally, two real cases have been reconstructed (an excavation and a tower).
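
    The matching / pose / triangulation chain can be sketched for two views with OpenCV; the image file names and the intrinsic matrix K are placeholders, and a full pipeline would add the projective-to-metric upgrade and bundle adjustment described above.

      import cv2
      import numpy as np

      # Placeholder inputs: two overlapping photos and an assumed-known intrinsic matrix K.
      img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
      K = np.array([[1200.0, 0, 960.0], [0, 1200.0, 540.0], [0, 0, 1.0]])

      # 1) SIFT features and matching (ratio test helps with large baselines).
      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)
      matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]
      pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
      pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

      # 2) Relative camera motion from the essential matrix (RANSAC).
      E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
      _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

      # 3) Triangulate the matched points into a sparse 3D point cloud.
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([R, t])
      X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4xN
      points_3d = (X_h[:3] / X_h[3]).T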

  5. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  6. 3D interfractional patient position verification using 2D-3D registration of orthogonal images

    Reproducible positioning of the patient during fractionated external beam radiation therapy is imperative to ensure that the delivered dose distribution matches the planned one. In this paper, we expand on a 2D-3D image registration method to verify a patient's setup in three dimensions (rotations and translations) using orthogonal portal images and megavoltage digitally reconstructed radiographs (MDRRs) derived from CT data. The accuracy of 2D-3D registration was improved by employing additional image preprocessing steps and a parabolic fit to interpolate the parameter space of the cost function utilized for registration. Using a humanoid phantom, precision for registration of three-dimensional translations was found to be better than 0.5 mm (1 s.d.) for any axis when no rotations were present. Three-dimensional rotations about any axis were registered with a precision of better than 0.2 deg. (1 s.d.) when no translations were present. Combined rotations and translations of up to 4 deg. and 15 mm were registered with 0.4 deg. and 0.7 mm accuracy for each axis. The influence of setup translations on registration of rotations and vice versa was also investigated and mostly agrees with a simple geometric model. Additionally, the dependence of registration accuracy on three cost functions, angular spacing between MDRRs, pixel size, and field-of-view, was examined. Best results were achieved by mutual information using 0.5 deg. angular spacing and a 10x10 cm2 field-of-view with 140x140 pixels. Approximating patient motion as rigid transformation, the registration method is applied to two treatment plans and the patients' setup errors are determined. Their magnitude was found to be ≤6.1 mm and ≤2.7 deg. for any axis in all of the six fractions measured for each treatment plan
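
    The parabolic fit used to refine the cost-function minimum can be illustrated with the standard three-point formula: fit a parabola through the best sampled parameter value and its two neighbours and take the vertex. A minimal sketch with hypothetical cost samples (the image-similarity cost itself would be computed elsewhere):

      import numpy as np

      def parabolic_refine(params, costs):
          """Sub-sample refinement of a sampled cost-function minimum (uniform spacing assumed)."""
          i = int(np.argmin(costs))
          if i == 0 or i == len(costs) - 1:
              return params[i]                          # minimum at the edge: no refinement
          (y0, y1, y2), h = costs[i - 1:i + 2], params[i] - params[i - 1]
          denom = y0 - 2 * y1 + y2
          return params[i] if denom == 0 else params[i] + 0.5 * h * (y0 - y2) / denom

      # Hypothetical cost samples around a rotation of ~1.3 degrees.
      angles = np.arange(0.0, 3.0, 0.5)
      costs = (angles - 1.3) ** 2 + 0.02
      print(parabolic_refine(angles, costs))            # -> 1.3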

  7. Automated curved planar reformation of 3D spine images

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks

  8. DICOM for quantitative imaging research in 3D Slicer

    Fedorov, Andrey; Kikinis, Ron

    2014-01-01

    These are the slides presented by Andrey Fedorov at the 3D Slicer workshop and meeting of the Quantitative Image Informatics for Cancer Research (QIICR) project that took place November 18-19, 2014, at the University of Iowa.

  9. Practical pseudo-3D registration for large tomographic images

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
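
    One pass of the pseudo-3D idea can be sketched as a 2D rigid (shift + rotation) registration driven by SSD and Powell's method, applied here to the three orthogonal maximum intensity projections. The transform composition between views and the outer iteration loop of the actual tool are omitted; this is a minimal sketch, not the software described.

      import numpy as np
      from scipy.ndimage import rotate, shift
      from scipy.optimize import minimize

      def register_2d(fixed, moving):
          """2D rigid registration (dx, dy, angle in degrees) minimising SSD with Powell's method."""
          def ssd(p):
              dx, dy, ang = p
              warped = shift(rotate(moving, ang, reshape=False, order=1), (dy, dx), order=1)
              return np.sum((fixed - warped) ** 2)
          return minimize(ssd, x0=np.zeros(3), method="Powell").x

      def pseudo_3d_pass(fixed_vol, moving_vol):
          """Register the three orthogonal MIPs of two volumes; returns per-view (dx, dy, angle)."""
          results = []
          for axis in (0, 1, 2):                       # MIP along z, y, x
              f_mip = fixed_vol.max(axis=axis)
              m_mip = moving_vol.max(axis=axis)
              results.append(register_2d(f_mip, m_mip))
          return results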

  10. 3D wavefront image formation for NIITEK GPR

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  11. Extracting 3D Layout From a Single Image Using Global Image Structures

    Z. Lou; T. Gevers; N. Hu

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  12. Holoscopic 3D image depth estimation and segmentation techniques

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  13. Efficient reconfigurable architectures for 3D medical image compression

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) have generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  14. An automated 3D reconstruction method of UAV images

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  15. Irrlicht 1.7 Realtime 3D Engine Beginner's Guide

    Stein, Johannes

    2011-01-01

    A beginner's guide with plenty of screenshots and explained code. If you have C++ skills and are interested in learning Irrlicht, this book is for you. Absolutely no knowledge of Irrlicht is necessary for you to follow this book!

  16. 1024 pixels single photon imaging array for 3D ranging

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems offer different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances between 10 cm and 7.5 m.
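
    With sinusoidal modulation, the measured phase delay maps to distance as d = c·Δφ/(4π·f_mod), and the unambiguous range is c/(2·f_mod). A short sketch; the 20 MHz modulation frequency is a hypothetical value chosen only because it gives an unambiguous range of about 7.5 m, consistent with the range reported above.

      import numpy as np

      C = 299_792_458.0                       # speed of light, m/s

      def itof_distance(phase_delay_rad, f_mod_hz):
          """Distance from an indirect time-of-flight phase delay (wraps at the unambiguous range)."""
          return C * phase_delay_rad / (4.0 * np.pi * f_mod_hz)

      f_mod = 20e6                            # hypothetical 20 MHz modulation
      print(itof_distance(np.pi / 2, f_mod))  # ~1.87 m
      print(C / (2.0 * f_mod))                # unambiguous range, ~7.5 m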

  17. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    2007-01-01

    For display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a complete framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods that differ in their sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with recommendations for interpolating 3D medical images under different conditions.
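
    The common ingredient of such methods is the parametric cubic convolution (Keys) kernel, whose shape is set by the sharpness control parameter a (a = -0.5 is the usual default); whether the paper's six variants use exactly this form is an assumption made here for illustration. A minimal sketch of the kernel and of 1D interpolation between slices:

      import numpy as np

      def cubic_kernel(x, a=-0.5):
          """Parametric cubic convolution kernel; `a` is the sharpness control parameter."""
          x = np.abs(np.asarray(x, dtype=float))
          out = np.zeros_like(x)
          m1, m2 = x <= 1, (x > 1) & (x < 2)
          out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
          out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
          return out

      def interp1d_cubic(samples, t, a=-0.5):
          """Interpolate a uniformly sampled 1D signal at position t (in sample units)."""
          i = int(np.floor(t))
          idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)   # 4-sample neighbourhood
          w = cubic_kernel(t - np.arange(i - 1, i + 3), a)
          return float(np.dot(samples[idx], w))

      slices = np.array([0.0, 1.0, 4.0, 9.0, 16.0])     # e.g. intensities along the slice axis
      print(interp1d_cubic(slices, 2.5))                # 6.25, midway between slices 2 and 3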

  18. 3D acoustic imaging applied to the Baikal neutrino telescope

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  19. 3D acoustic imaging applied to the Baikal neutrino telescope

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  20. Large distance 3D imaging of hidden objects

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Millimeter-wave imaging systems are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for these applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs a chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel; this enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. The imaging system is shown to be capable of imaging objects at distances of at least 10 meters.
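
    In such a chirp (FMCW) scheme the beat (IF) frequency at a pixel maps linearly to range, R = c·f_IF·T/(2B) for a chirp of bandwidth B swept over duration T. A short sketch with hypothetical chirp parameters (not the system's actual values):

      C = 299_792_458.0   # speed of light, m/s

      def chirp_range(f_if_hz, bandwidth_hz, chirp_duration_s):
          """Target range from the measured IF (beat) frequency of a linear chirp."""
          return C * f_if_hz * chirp_duration_s / (2.0 * bandwidth_hz)

      # Hypothetical chirp: 10 GHz swept in 1 ms; a 667 kHz beat corresponds to ~10 m.
      print(chirp_range(667e3, 10e9, 1e-3))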

  1. 3D Image Reconstruction from Compton camera data

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on ℝ³ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  2. 3D transrectal ultrasound prostate biopsy using a mechanical imaging and needle-guidance system

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Gil, Elena; Bluvol, Jeremy; Knight, Kerry; Smith, David; Romagnoli, Cesare; Fenster, Aaron

    2008-03-01

    Prostate biopsy procedures are generally limited to 2D transrectal ultrasound (TRUS) imaging for biopsy needle guidance. This limitation results in needle position ambiguity and an insufficient record of biopsy core locations in cases of prostate re-biopsy. We have developed a multi-jointed mechanical device that supports a commercially available TRUS probe with an integrated needle guide for precision prostate biopsy. The device is fixed at the base, allowing the joints to be manually manipulated while fully supporting its weight throughout its full range of motion. Means are provided to track the needle trajectory and display this trajectory on a corresponding TRUS image. This allows the physician to aim the needle-guide at predefined targets within the prostate, providing true 3D navigation. The tracker has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to generate 3D images. The tracker reduces the variability associated with conventional hand-held probes, while preserving user familiarity and procedural workflow. In a prostate phantom, biopsy needles were guided to within 2 mm of their targets, and the 3D location of the biopsy core was accurate to within 3 mm. The 3D navigation system is validated in the presence of prostate motion in a preliminary patient study.

  3. 3D CT Imaging Method for Measuring Temporal Bone Aeration

    Objective: 3D volume reconstruction of CT images can be used to measure temporal bone aeration. This study evaluates the technique with respect to reproducibility and acquisition parameters. Material and methods: Helical CT images acquired from patients with radiographically normal temporal bones using standard clinical protocols were retrospectively analyzed. 3D image reconstruction was performed to measure the volume of air within the temporal bone. The appropriate threshold values for air were determined from reconstruction of a phantom with a known air volume imaged using the same clinical protocols. The appropriate air threshold values were applied to the clinical material. Results: Air volume was measured according to the acquisition algorithm. The average volume in the temporal bone CT group was 5.56 ml, compared to 5.19 ml in the head CT group (p = 0.59). The correlation coefficient between examiners was > 0.92. There was a wide range of aeration volumes among individual ears (0.76-18.84 ml); however, paired temporal bones differed by an average of just 1.11 ml. Conclusions: The method of volume measurement from 3D reconstruction reported here is widely available, easy to perform and produces consistent results among examiners. Application of the technique to archival CT data is possible using corrections for air segmentation thresholds according to acquisition parameters.
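
    The volume measurement itself reduces to counting voxels below the air threshold inside the temporal-bone region and multiplying by the voxel volume. A minimal sketch; the -400 HU default is a placeholder for the acquisition-dependent threshold that the study calibrates on a phantom of known air volume.

      import numpy as np

      def air_volume_ml(ct_volume_hu, roi_mask, voxel_size_mm, air_threshold_hu=-400):
          """Aerated volume (ml) within a region of interest of a CT volume.

          ct_volume_hu: CT volume in Hounsfield units; roi_mask: boolean temporal-bone mask;
          voxel_size_mm: (dz, dy, dx) voxel dimensions in mm.
          """
          air_voxels = np.count_nonzero((ct_volume_hu < air_threshold_hu) & roi_mask)
          return air_voxels * float(np.prod(voxel_size_mm)) / 1000.0   # mm^3 -> ml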

  4. Combining different modalities for 3D imaging of biological objects

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and with 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown here, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible the depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper.

  5. Combining Different Modalities for 3D Imaging of Biological Objects

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown in this paper, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  6. Subduction zone guided waves: 3D modelling and attenuation effects

    Garth, T.; Rietbrock, A.

    2013-12-01

    Waveform modelling is an important tool for understanding complex seismic structures such as subduction zone waveguides. These structures are often simplified to 2D structures for modelling purposes to reduce computational costs. In the case of subduction zone waveguide effects, 2D models have shown that dispersed arrivals are caused by a low velocity waveguide, inferred to be subducted oceanic crust and/or hydrated outer rise normal faults. However, due to the limitations of 2D modelling, the inferred seismic properties such as velocity contrast and waveguide thickness are still debated. Here we test these limitations with full 3D waveform modelling. For waveguide effects to be observable, the waveform must be accurately modelled to relatively high frequencies (> 2 Hz). This requires a small grid spacing due to the high seismic velocities present in subduction zones. A large area must be modelled as well, due to the long propagation distances (400 - 600 km) of waves interacting with subduction zone waveguides. The combination of the large model area and small grid spacing means that these simulations require a large amount of computational resources, only available at high performance computing centres like the UK national supercomputer HECTOR (used in this study). To minimize the cost of modelling such a large area, the width of the model area perpendicular to the subduction trench (the y-direction) is made as small as possible. This reduces the overall volume of the 3D model domain, so the wave field is simulated in a model 'corridor' of the subduction zone velocity structure. This introduces new potential sources of error, particularly from grazing wave side reflections in the y-direction. Various dampening methods are explored to reduce these grazing side reflections, including perfectly matched layers (PML) and more traditional exponential dampening layers. Defining a corridor model allows waveguide effects to be modelled up to at least 2

  7. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the proposed method can display internal structure through exchanging intersection images that contain internal structure. Through experiments with CT scan images, the proposed met...

  8. 3D Imaging of a Cavity Vacuum under Dissipation

    Lee, Moonjoo; Seo, Wontaek; Hong, Hyun-Gue; Song, Younghoon; Dasari, Ramachandra R; An, Kyungwon

    2013-01-01

    P. A. M. Dirac first introduced zero-point electromagnetic fields in order to explain the origin of atomic spontaneous emission. Since then, it has long been debated how the zero-point vacuum field is affected by dissipation. Here we report 3D imaging of vacuum fluctuations in a high-Q cavity and rms amplitude measurements of the vacuum field. The 3D imaging was done by the position-dependent emission of single atoms, resulting in a dissipation-free rms amplitude of 0.97 ± 0.03 V/cm. The actual rms amplitude of the vacuum field at the antinode was independently determined from the onset of single-atom lasing at 0.86 ± 0.08 V/cm. Within our experimental accuracy and precision, the difference was noticeable, but it is not significant enough to disprove zero-point energy conservation.

  9. Automated Recognition of 3D Features in GPIR Images

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
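    The object-linking step described above — chaining 2D features found in successive slices when they lie within a threshold radius of a feature in the adjacent slice — can be sketched as follows. The feature representation (a 2D centroid per slice) and the radius are assumptions chosen only for illustration.

```python
import numpy as np

def link_features(features_per_slice, radius=0.5):
    """Link 2D features across successive slices into 3D object tracks.

    features_per_slice: list (one entry per slice) of (N_i, 2) arrays of
    feature centroids.  A feature is appended to an existing track if it lies
    within `radius` of that track's feature in the previous slice; otherwise
    it starts a new track.
    """
    tracks = []  # each track: list of (slice_index, centroid)
    for z, feats in enumerate(features_per_slice):
        for c in np.atleast_2d(feats):
            best, best_d = None, radius
            for tr in tracks:
                last_z, last_c = tr[-1]
                if last_z == z - 1:
                    d = np.linalg.norm(c - last_c)
                    if d <= best_d:
                        best, best_d = tr, d
            if best is not None:
                best.append((z, c))
            else:
                tracks.append([(z, c)])
    return tracks

# Toy example: a "pipe" drifting slowly across three slices plus one spurious hit.
slices = [np.array([[10.0, 5.0]]),
          np.array([[10.2, 5.1], [40.0, 40.0]]),
          np.array([[10.4, 5.2]])]
print([len(t) for t in link_features(slices)])   # -> [3, 1]
```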

  10. 3D printed guides for controlled alignment in biomechanics tests.

    Verstraete, Matthias A; Willemot, Laurent; Van Onsem, Stefaan; Stevens, Cyriëlle; Arnout, Nele; Victor, Jan

    2016-02-01

    The bone-machine interface is a vital first step for biomechanical testing. It remains challenging to restore the original alignment of the specimen with respect to the test setup. To overcome this issue, we developed a methodology based on virtual planning and 3D printing. In this paper, the methodology is outlined and a proof of concept is presented based on a series of cadaveric tests performed on our knee simulator. The tests described in this paper reached an accuracy within 3-4° and 3-4 mm with respect to the virtual planning. It is, however, the authors' belief that the method has the potential to achieve an accuracy within one degree and one millimeter. Therefore, this approach can aid in reducing the imprecisions in biomechanical tests (e.g. knee simulator tests for evaluating knee kinematics) and improve the consistency of the bone-machine interface. PMID:26810696

  11. Improvements in quality and quantification of 3D PET images

    Rapisarda,

    2012-01-01

    The spatial resolution of Positron Emission Tomography is conditioned by several physical factors, which can be taken into account by using a global Point Spread Function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting some experimental...

  12. 3D imaging of semiconductor components by discrete laminography

    Batenburg, Joost; Palenstijn, W.J.; Sijbers, J.

    2014-01-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the ...

  13. 3D VSP imaging in the Deepwater GOM

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high-dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are to reduce risk in well placement, improve reserve calculations, and better understand compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  14. Super pipe lining system for 3-D CT imaging

    A new idea for a 3-D CT image reconstruction system is introduced. Because computer networks have improved greatly in recent years, traditional serial processing can be replaced by distributed network computing. The CT system's work is carried out in a multi-level fashion: the tedious processing steps are distributed over many computers linked by a local network and executed concurrently, which greatly improves the reconstruction speed.
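    The distribution of per-slice reconstructions over several networked computers can be mimicked on a single machine with a process pool; the sketch below uses a toy unfiltered backprojection as a stand-in for the real per-slice kernel and is only an assumption about how the described pipelining might be organized.

```python
import numpy as np
from multiprocessing import Pool

def backproject_slice(sinogram, angles_deg=None, size=128):
    """Very simple unfiltered backprojection of one slice (placeholder for a
    real per-slice reconstruction kernel running on a remote worker)."""
    n_angles, n_det = sinogram.shape
    if angles_deg is None:
        angles_deg = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    ys, xs = np.mgrid[:size, :size] - size / 2.0
    recon = np.zeros((size, size))
    for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
        t = xs * np.cos(ang) + ys * np.sin(ang) + n_det / 2.0
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(size, size)
    return recon / n_angles

if __name__ == "__main__":
    # One sinogram per axial slice; in the described system each would be
    # handled by a different networked computer rather than a local process.
    sinograms = [np.random.rand(90, 128) for _ in range(8)]
    with Pool(processes=4) as pool:
        volume = np.stack(pool.map(backproject_slice, sinograms))
    print(volume.shape)   # (8, 128, 128)
```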

  15. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    Objective: To improve the accuracy of epilepsy diagnosis by integrating MR images from PACS with neurological electrophysiology data. The integration is also important for transmitting diagnostic information to the 3D TPS used in radiotherapy. Methods: The electroencephalogram was redisplayed on an EEG workstation, while the MR images were reconstructed with the Brainvoyager software. A 3D model of the patient's brain was built by combining the reconstructed images with electroencephalogram data in Base 2000. Thirty epileptic patients (18 males and 12 females), aged 12 to 54 years, were assessed using the integrated MR images, the neurological electrophysiology data and their 3D stereotactic localization. Results: The corresponding data in the 3D model could show the real situation of the patient's brain and visually locate the precise position of the focus. The success rate of 3D-guided operations was greatly improved, and the number of epileptic onsets was markedly decreased. Seizures stopped for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of manipulations and the epilepsy analysis. (authors)

  16. Discrete Method of Images for 3D Radio Propagation Modeling

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  17. 3D tongue motion from tagged and cine MR images.

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a problem that has previously been reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  18. 3D reconstruction of multiple stained histology images

    Yi Song

    2013-01-01

    Full Text Available Context: Three dimensional (3D) tissue reconstructions from histology images with different stains allow the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and to enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  19. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing initial geometry and/or image error, improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration
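    The simulation setup described above — Gaussian 3D point configurations whose covariance matrix controls the shape of the "cloud", projected into two image planes and perturbed with Gaussian image errors — can be sketched roughly as follows. The covariances, imaging geometry and noise levels are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariance matrices controlling the "cloud" shape (illustrative values).
shapes = {
    "ball":    np.diag([1.0, 1.0, 1.0]) ** 2,
    "cigar":   np.diag([3.0, 0.8, 0.8]) ** 2,
    "pancake": np.diag([3.0, 3.0, 0.3]) ** 2,
}

def project(points_3d, R, t, focal=1000.0):
    """Pinhole projection of 3D points into one image plane."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    return focal * cam[:, :2] / cam[:, 2:3]

def simulate(shape, n_points=20, image_sigma=0.5):
    pts = rng.multivariate_normal(np.zeros(3), shapes[shape], size=n_points)
    # Two views roughly 90 degrees apart (assumed biplane geometry).
    R1, t1 = np.eye(3), np.array([0.0, 0.0, 800.0])
    R2 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
    t2 = np.array([0.0, 0.0, 800.0])
    u1 = project(pts, R1, t1) + rng.normal(0.0, image_sigma, (n_points, 2))
    u2 = project(pts, R2, t2) + rng.normal(0.0, image_sigma, (n_points, 2))
    return pts, u1, u2

pts, u1, u2 = simulate("pancake")
print(pts.shape, u1.shape, u2.shape)   # (20, 3) (20, 2) (20, 2)
```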

  20. Preliminary Investigation: 2D-3D Registration of MR and X-ray Cardiac Images Using Catheter Constraints

    Truong, Michael V.N.; Aslam, Abdullah; Rinaldi, Christopher Aldo; Razavi, Reza; Penney, Graeme P.; Rhode, Kawal

    2009-01-01

    Cardiac catheterization procedures are routinely guided by X-ray fluoroscopy but suffer from poor soft-tissue contrast and a lack of depth information. These procedures often employ pre-operative magnetic resonance or computed tomography imaging for treatment planning due to their excellent soft-tissue contrast and 3D imaging capabilities. We developed a 2D-3D image registration method to consolidate the advantages of both modalities by overlaying the 3D images onto the X-ray. Our method uses...

  1. Automatic structural matching of 3D image data

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  2. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
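    Point cloud filtering of the kind evaluated above is commonly implemented as statistical outlier removal; the brute-force sketch below (mean distance to the k nearest neighbours, thresholded at mean + n·std) is a generic illustration, not the specific filter or parameters selected in the study.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, n_std=2.0):
    """Remove points whose mean distance to their k nearest neighbours is
    more than `n_std` standard deviations above the global mean.
    Brute-force O(N^2); fine for small clouds, use a KD-tree otherwise."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)      # skip the zero self-distance
    keep = mean_knn <= mean_knn.mean() + n_std * mean_knn.std()
    return points[keep], keep

rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3))
cloud[:5] += 20.0                               # inject a few gross outliers
filtered, mask = statistical_outlier_removal(cloud)
print(cloud.shape[0] - filtered.shape[0], "points removed")
```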

  3. Towards magnetic 3D x-ray imaging

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve speed, size and energy efficiency of spin driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of large interest to understand e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals, nanowires or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy combining X-MCD as element specific magnetic contrast mechanism, high spatial and temporal resolution due to the Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley CA) a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films and compare to other 3D imaging approaches e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).

  4. Thoracic Pedicle Screw Placement Guide Plate Produced by Three-Dimensional (3-D) Laser Printing.

    Chen, Hongliang; Guo, Kaijing; Yang, Huilin; Wu, Dongying; Yuan, Feng

    2016-01-01

    BACKGROUND The aim of this study was to evaluate the accuracy and feasibility of an individualized thoracic pedicle screw placement guide plate produced by 3-D laser printing. MATERIAL AND METHODS Thoracic pedicle samples of 3 adult cadavers were randomly assigned for 3-D CT scans. The 3-D thoracic models were established by using medical Mimics software, and a screw path was designed with scanned data. Then the individualized thoracic pedicle screw placement guide plate models, matched to the backside of thoracic vertebral plates, were produced with a 3-D laser printer. Screws were placed with assistance of a guide plate. Then, the placement was assessed. RESULTS With the data provided by CT scans, 27 individualized guide plates were produced by 3-D printing. There was no significant difference in sex and relevant parameters of left and right sides among individuals (P>0.05). Screws were placed with assistance of guide plates, and all screws were in the correct positions without penetration of pedicles, under direct observation and anatomic evaluation post-operatively. CONCLUSIONS A thoracic pedicle screw placement guide plate can be produced by 3-D printing. With a high accuracy in placement and convenient operation, it provides a new method for accurate placement of thoracic pedicle screws. PMID:27194139

  5. Large Scale 3D Image Reconstruction in Optical Interferometry

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
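    A minimal sketch of the phase-matching idea: the agreement between image gradient directions and model edge normals can be accumulated over all translations at once with FFT cross-correlations, using cos(Δφ) = cos φ_img·cos φ_mod + sin φ_img·sin φ_mod. Orientation search, 3D model projection and any weighting used by the authors are omitted; this is an illustration, not their implementation.

```python
import numpy as np

def phase_match_surface(image, model_phase, model_mask):
    """Similarity of image gradient phase to model edge-normal phase,
    evaluated for every translation of the model via FFT cross-correlation."""
    gy, gx = np.gradient(image.astype(float))
    img_phase = np.arctan2(gy, gx)
    mag = np.hypot(gx, gy)
    weight = mag > 0.1 * mag.max()            # only use pixels with strong edges

    ci, si = np.cos(img_phase) * weight, np.sin(img_phase) * weight
    cm, sm = np.cos(model_phase) * model_mask, np.sin(model_phase) * model_mask

    def xcorr(a, b):
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    # sum over model edge pixels of cos(img_phase - model_phase), per shift
    return xcorr(ci, cm) + xcorr(si, sm)

# Toy example: a bright square; the "model" is the same square's edge normals.
img = np.zeros((128, 128))
img[40:80, 50:90] = 1.0
gy, gx = np.gradient(img)
mask = np.hypot(gx, gy) > 0
surface = phase_match_surface(img, np.arctan2(gy, gx), mask)
peak = np.unravel_index(np.argmax(surface), surface.shape)
print("best shift:", peak)                    # expected near (0, 0)
```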

  7. Autonomous Planetary 3-D Reconstruction From Satellite Images

    Denver, Troelz

    1999-01-01

    A common task for many deep space missions is autonomous generation of 3-D representations of planetary surfaces onboard unmanned spacecraft. The basic problem for this class of missions is that the closed loop time is far too long. The closed loop time is defined as the time from when a human...... of seconds to a few minutes, the closed loop time effectively precludes active human control. The only way to circumvent this problem is to build an artificial feature extractor operating autonomously onboard the spacecraft. Different artificial feature extractors are presented and their efficiency...... is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  8. 3D-imaging using micro-PIXE

    Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

    2007-02-01

    We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and an X-ray CCD camera of 1 mega pixels (Hamamatsu photonics C8800X), and has a spatial resolution of 4 μm by using characteristic Ti-K-X-rays (4.558 keV) produced by 3 MeV protons of beam spot size of ˜1 μm. We applied this system, namely, a micron-CT to observe the inside of a living small ant's head of ˜1 mm diameter. An ant was inserted into a small polyimide tube the inside diameter and the wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three dimensional images of the ant's heads were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence on atomic number of photo ionization cross-sections, the mandibular gland of ant contains heavier elements, and moreover, the CT-image of living ant anaesthetized by chloroform is quite different from that of a dead ant dipped in formalin.

  9. Fully automatic plaque segmentation in 3-D carotid ultrasound images.

    Cheng, Jieyu; Li, He; Xiao, Feng; Fenster, Aaron; Zhang, Xuming; He, Xiaoling; Li, Ling; Ding, Mingyue

    2013-12-01

    Automatic segmentation of the carotid plaques from ultrasound images has been shown to be an important task for monitoring progression and regression of carotid atherosclerosis. Considering the complex structure and heterogeneity of plaques, a fully automatic segmentation method based on media-adventitia and lumen-intima boundary priors is proposed. This method combines image intensity with structure information in both initialization and a level-set evolution process. Algorithm accuracy was examined on the common carotid artery part of 26 3-D carotid ultrasound images (34 plaques ranging in volume from 2.5 to 456 mm³) by comparing the results of our algorithm with manual segmentations of two experts. Evaluation results indicated that the algorithm yielded total plaque volume (TPV) differences of -5.3 ± 12.7 and -8.5 ± 13.8 mm³ and absolute TPV differences of 9.9 ± 9.5 and 11.8 ± 11.1 mm³. Moreover, high correlation coefficients in generating TPV (0.993 and 0.992) between algorithm results and both sets of manual results were obtained. The automatic method provides a reliable way to segment carotid plaque in 3-D ultrasound images and can be used in clinical practice to estimate plaque measurements for management of carotid atherosclerosis. PMID:24063959

  10. Lymph node imaging by ultrarapid 3D angiography

    Purpose: A report on observations of lymph node images obtained by gadolinium-enhanced 3D MR angiography (MRA). Methods: Ultrarapid MRA (TR 5 or 6.4 ms, TE 1.9 or 2.8 ms, FA 30-40°) with 0.2 mmol/kg BW Gd-DTPA and 20 ml physiological saline. Start after completion of injection. Single series of the pelvis-thigh as well as head-neck regions by use of a phased array coil with a 1.5 T Magnetom Vision or a 1.0 T Magnetom Harmony (Siemens, Erlangen). We report on lymph node imaging in 4 patients, 2 of whom exhibited benign changes and 2 of whom had metastases. In 1 patient with extensive lymph node metastases of a malignant melanoma, color-Doppler sonography as color-flow angiography (CFA) was used as a comparative method. Results: Lymph node imaging by contrast medium-enhanced ultrarapid 3D MRA apparently resulted from their vessels. Thus, arterially-supplied metastases and inflammatory enlarged lymph nodes were well visualized, while those with a.v. shunts or poor vascular supply in tumor necroses were poorly imaged. Conclusions: Further investigations are required with regard to the visualization of lymph nodes in other parts of the body as well as a possible differentiation between benign and malignant lesions. (orig.)

  11. Ice shelf melt rates and 3D imaging

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate bandwidth, VHF, ice penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra wide bandwidth (UWB), UHF, ice penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multi-channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  12. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
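    The mean-subtraction method can be stated very compactly: for each spatial plane of a spatially-low-pass subband, remove and record the plane mean before encoding and add it back after decoding. The sketch below assumes the subband is already available as a (bands, rows, cols) array; how the subbands are produced and entropy-coded is outside its scope.

```python
import numpy as np

def mean_subtract(subband):
    """subband: 3D array (bands, rows, cols) holding one spatially-low-pass
    subband of the wavelet decomposition.  Returns the zero-mean data plus the
    per-plane means, which must be transmitted as side information."""
    means = subband.mean(axis=(1, 2), keepdims=True)   # one mean per spectral plane
    return subband - means, means.squeeze()

def mean_restore(zero_mean_subband, means):
    return zero_mean_subband + means[:, None, None]

# Hypothetical low-pass subband of a hyperspectral cube (values far from zero).
rng = np.random.default_rng(0)
sub = 500.0 + 20.0 * rng.standard_normal((50, 32, 32))
zm, m = mean_subtract(sub)
assert np.allclose(mean_restore(zm, m), sub)
print(zm.mean(axis=(1, 2))[:3])    # ~0 for every plane
```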

  13. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges arise when using data from multiple frequencies for imaging of biological targets. In this paper, the performance of a multi-frequency algorithm, in which measurement data from several different frequencies are used at once, is compared with a stepped-frequency algorithm, in which images reconstructed at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system.

  14. Development of 3D microwave imaging reflectometry in LHD (invited).

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  15. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  16. Effective classification of 3D image data using partitioning methods

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
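    A rough sketch of the recursive-partitioning method, under several assumptions: the input is a stack of binary ROI volumes with class labels, discriminative power is judged with a two-sample t-test on per-box voxel counts, and boxes are split in half along their longest axis. The thresholds are placeholders, not those of the original evaluation.

```python
import numpy as np
from scipy.stats import ttest_ind

def recursive_partition(volumes, labels, box=None, p_thresh=0.05, min_side=4):
    """Recursively split a 3D volume into hyper-rectangles ("boxes").
    volumes: (subjects, z, y, x) binary ROI masks; labels: 0/1 per subject.
    A box becomes an attribute if its per-subject ROI voxel counts separate
    the classes (t-test) or if it is too small to split further."""
    if box is None:
        box = tuple((0, s) for s in volumes.shape[1:])
    sl = tuple(slice(a, b) for a, b in box)
    counts = volumes[(slice(None),) + sl].sum(axis=(1, 2, 3))
    p = ttest_ind(counts[labels == 0], counts[labels == 1]).pvalue
    sides = [b - a for a, b in box]
    if np.isnan(p) or p < p_thresh or max(sides) < 2 * min_side:
        return [(box, p)]                      # keep as a leaf attribute
    ax = int(np.argmax(sides))                 # split the longest axis in half
    lo, hi = box[ax]
    mid = (lo + hi) // 2
    left = tuple((lo, mid) if i == ax else r for i, r in enumerate(box))
    right = tuple((mid, hi) if i == ax else r for i, r in enumerate(box))
    return (recursive_partition(volumes, labels, left, p_thresh, min_side)
            + recursive_partition(volumes, labels, right, p_thresh, min_side))

# Toy data: 20 subjects with sparse random ROIs; class 1 shares one small dense ROI.
rng = np.random.default_rng(0)
vols = rng.random((20, 16, 16, 16)) < 0.1
labels = np.array([0] * 10 + [1] * 10)
vols[10:, 2:4, 2:4, 2:4] = True
attrs = recursive_partition(vols, labels)
print(len(attrs), "attributes; best p-value:", min(p for _, p in attrs))
```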

  17. Ultra-realistic 3-D imaging based on colour holography

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue, mainly Denisyuk colour holograms, and digitally-printed colour holograms are described and their recent improvements. An alternative to silver-halide materials are the panchromatic photopolymer materials such as the DuPont and Bayer photopolymers which are covered. The light sources used to illuminate the recorded holograms are very important to obtain ultra-realistic 3-D images. In particular the new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world are included. To record and display ultra-realistic 3-D images with perfect colour rendering are highly dependent on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials and combined with new display light sources.

  18. Image-Based 3D Face Modeling System

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  19. Extracting 3D layout from a single image using global image structures.

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image categorization, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it implies how the pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  20. 3D imaging of neutron tracks using confocal microscopy

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25 M NaOH. Post etch, the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  1. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission have made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ˜60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% ± 0.6% (range 96%-98%) for scans totaling ˜10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
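    The benchmarking above quotes a 3D gamma passing rate (3% dose difference, 3 mm distance-to-agreement, 5% low-dose threshold); a brute-force, globally normalised gamma evaluation can be sketched as below. The search radius and toy volumes are assumptions, and clinical tools use far more efficient searches.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0, thresh=0.05):
    """Fraction of reference voxels (above `thresh` of max dose) with gamma <= 1."""
    norm = ref.max()
    search = int(np.ceil(dta_mm / min(spacing_mm)))          # voxels to search
    offsets = [(i, j, k)
               for i in range(-search, search + 1)
               for j in range(-search, search + 1)
               for k in range(-search, search + 1)]
    gamma = np.full(ref.shape, np.inf)
    for i, j, k in offsets:
        shifted = np.roll(meas, (i, j, k), axis=(0, 1, 2))
        dist2 = ((np.array([i, j, k]) * spacing_mm) ** 2).sum() / dta_mm ** 2
        dose2 = ((shifted - ref) / (dd * norm)) ** 2
        gamma = np.minimum(gamma, np.sqrt(dist2 + dose2))
    mask = ref > thresh * norm
    return float((gamma[mask] <= 1.0).mean())

# Toy volumes: the "measured" dose is the reference shifted by one voxel (1 mm).
ref = np.zeros((40, 40, 40)); ref[10:30, 10:30, 10:30] = 1.0
meas = np.roll(ref, 1, axis=0)
print(gamma_pass_rate(ref, meas, spacing_mm=np.array([1.0, 1.0, 1.0])))
```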

  2. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or the high time consumption and risks for the security personnel associated with a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution and its dependence on the number of projections.
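    The few-projection iterative reconstruction alluded to above can be illustrated in miniature with a SIRT-type update on an explicit system matrix; the projector here is a deliberately trivial stand-in (row and column sums of a tiny phantom), and the iteration count and relaxation factor are arbitrary assumptions far removed from a container-scale CT.

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=0.9):
    """Simultaneous Iterative Reconstruction Technique:
    x <- x + relax * C A^T R (b - A x), with row/column-sum normalisations."""
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * (A.T @ ((b - A @ x) / row_sum)) / col_sum
        np.clip(x, 0.0, None, out=x)          # simple non-negativity prior
    return x

# Tiny 2D phantom and a stand-in projector: row and column sums only
# (2 "projection angles"), i.e. a deliberately underdetermined few-view setup.
n = 8
phantom = np.zeros((n, n)); phantom[2:5, 3:7] = 1.0
A = np.vstack([np.kron(np.eye(n), np.ones((1, n))),     # row sums
               np.kron(np.ones((1, n)), np.eye(n))])    # column sums
b = A @ phantom.ravel()
recon = sirt(A, b).reshape(n, n)
print("projection residual:", np.linalg.norm(A @ recon.ravel() - b))
```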

  3. Recent progress in 3-D imaging of sea freight containers

    Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias, E-mail: theobold.fuchs@iis.fraunhofer.de; Sukowski, Frank [Fraunhofer Development Center X-ray Technology EZRT, Flugplatzstr. 75, 90768 Fürth (Germany); Dittmann, Jonas; Hanke, Randolf [Chair of X-ray Microscopy, Institute of Physics and Astronomy, Julius-Maximilian-University Würzburg, Josef-Martin-Weg 63, 97074 Würzburg (Germany)

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or the high time consumption and risks for the security personnel associated with a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution and its dependence on the number of projections.

  4. 3D Reconstruction of virtual colon structures from colonoscopy images.

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  5. 3D electrical tomographic imaging using vertical arrays of electrodes

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  6. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can considerably reduce the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture, provides effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.

  7. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by LC-...

  8. Fast 3-d tomographic microwave imaging for breast cancer detection.

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  9. Fast 3D subsurface imaging with stepped-frequency GPR

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. With an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still preserving valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
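
    The paper's NUFFT-based forward and adjoint operators are not reproduced here; the sketch below is only a minimal, generic Python example of the sparsity-regularized linearized imaging step (iterative soft thresholding with a dense stand-in operator and synthetic data), illustrating how an l1 penalty suppresses clutter and sidelobes.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=200):
            """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding.
            A is a dense stand-in for the linearized GPR forward operator."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of A^T A
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - step * (A.T @ (A @ x - y))   # gradient step (adjoint on residual)
                x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
            return x

        # Toy example: a sparse reflectivity profile recovered from noisy measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((128, 256))
        x_true = np.zeros(256)
        x_true[[30, 100, 180]] = [1.0, -0.7, 0.5]
        y = A @ x_true + 0.01 * rng.standard_normal(128)
        x_hat = ista(A, y)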

  10. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Kohei Arai

    2013-06-01

    A method for 3D rendering based on intersection-image display, which allows representation of the internal structure of 3D objects, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By exploiting the afterimage effect, the internal structure can be displayed by cycling through the intersection images. The proposed method is validated through experiments with CT scan images. Another applicable area of the proposed method, the design of 3D patterns for large-scale integrated circuits (LSI), is also introduced: layered LSI patterns can be displayed and switched using the eyes alone. It is confirmed that the time required to display a layer pattern and switch to another layer using the eyes alone is much shorter than when using hands and fingers.

  11. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium; STXM, however, makes its distribution and localization within the cell visible, which is of particular interest to biologists) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  12. 3D image fusion and guidance for computer-assisted bronchoscopy

    Higgins, W. E.; Rai, L.; Merritt, S. A.; Lu, K.; Linger, N. T.; Yu, K. C.

    2005-11-01

    The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in what they physically represent, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information from a live fusion of the 3D CT data and the bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.

  13. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtraction angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive use of x-rays and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. Contrast medium-enhanced 3D DSA of the target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for evaluation of fusion accuracy. Low-dose noncontrast 3D angiography of the skull was performed in the other 4 patients and registered with the MRA; the MRA was then overlaid on 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrast 3D angiography in the 4 patients, provided a real-time 3D roadmap that successfully guided the endovascular procedures. Radiation dose to patients and contrast medium usage were significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lower procedure risks and increase treatment safety. PMID:27512846
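
    The record does not name its registration software; purely as an illustration of the kind of 3D multimodal rigid registration described (preprocedural MRA to rotational angiography), the sketch below uses SimpleITK with a Mattes mutual information metric. File names and parameter values are placeholders, and a recent SimpleITK release is assumed.

        import SimpleITK as sitk

        # Fixed: 3D rotational angiography; moving: preprocedural MRA (placeholder paths).
        fixed = sitk.ReadImage("rotational_angio.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("preop_mra.nii.gz", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.1)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                     numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()

        # Rigid (6-DOF) transform, initialized by aligning the volume centers.
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg.SetInitialTransform(initial, inPlace=False)

        transform = reg.Execute(fixed, moving)
        fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)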

  14. Mechanically assisted 3D ultrasound for pre-operative assessment and guiding percutaneous treatment of focal liver tumors

    Sadeghi Neshat, Hamid; Bax, Jeffery; Barker, Kevin; Gardi, Lori; Chedalavada, Jason; Kakani, Nirmal; Fenster, Aaron

    2014-03-01

    Image-guided percutaneous ablation is the standard treatment for focal liver tumors deemed inoperable and is commonly used to maintain eligibility for patients on transplant waitlists. Radiofrequency (RFA), microwave (MWA) and cryoablation technologies are all delivered via one or more needle-shaped probes inserted directly into the tumor. Planning is mostly based on contrast CT/MRI. While intra-procedural CT is commonly used to confirm the intended probe placement, 2D ultrasound (US) remains the main, and in some centers the only, imaging modality used for needle guidance. Establishing correspondence between intraoperative 2D US and planning or other intra-procedural imaging modalities is essential for accurate needle placement. However, identification of matching features of interest among these images is often challenging given the limited field-of-view (FOV) and low quality of 2D US images. We have developed a passive tracking arm with a motorized scan-head and software tools that improve the guidance capabilities of conventional US by acquiring large-FOV 3D US scans, which provide more anatomical landmarks and thus facilitate registration of US with both planning and intra-procedural images. The tracker arm is used to scan the whole liver with a high geometric accuracy that facilitates multi-modality landmark-based image registration. Software tools are provided to assist with segmentation of the ablation probes and tumors, to find the 2D view that best shows the probe(s) in a 3D US image, and to identify the corresponding image from planning CT scans. In this paper, evaluation results from laboratory testing and a phase 1 clinical trial for planning and guiding RFA and MWA procedures using the developed system are presented. Early clinical results show performance comparable to intra-procedural CT, suggesting 3D US as a cost-effective, side-effect-free alternative in centers where CT is not available.

  15. A beginner's guide to 3D printing 14 simple toy designs to get you started

    Rigsby, Mike

    2014-01-01

    A Beginner's Guide to 3D Printing is the perfect resource for those who would like to experiment with 3D design and manufacturing but have little or no technical experience with the standard software. Author Mike Rigsby leads readers step-by-step through 15 simple toy projects, each illustrated with screen caps of Autodesk 123D Design, the most common free 3D software available. The projects are later described using SketchUp, another popular free software package. Beginning with basic projects that will take longer to print than to design, readers are then given instruction on more advanced t

  16. 3-D MR imaging of ectopia vasa deferentia

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  17. 3D imaging of semiconductor components by discrete laminography

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach

  18. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  19. 3D imaging of semiconductor components by discrete laminography

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  20. C-arm CT-guided 3D navigation of percutaneous interventions; C-Bogen-CT-unterstuetzte 3D-Navigation perkutaner Interventionen

    Becker, H.C.; Meissner, O.; Waggershauser, T. [Klinikum der Ludwig-Maximilians-Universitaet Muenchen, Campus Grosshadern, Institut fuer Klinische Radiologie, Muenchen (Germany)

    2009-09-15

    So far, C-arm CT images have predominantly been used for precise guidance of endovascular and intra-arterial therapy. A novel combined 3D-navigation C-arm system now also allows percutaneous interventions under cross-sectional and fluoroscopic control. Studies have reported successful CT-image-guided navigation with C-arm systems in vertebroplasty. Insertion of radiofrequency ablation probes into lung and liver tumors that have previously been labelled with lipiodol is also conceivable. In the future, C-arm CT-based navigation systems will probably make complex interventions simpler and safer while simultaneously reducing radiation exposure. (orig.) [German] Bisher wurden CT-Aufnahmen von einem rotierenden C-Bogen-System v. a. fuer die gezielte Unterstuetzung endovaskulaerer und intraarterieller Interventionen verwendet. Mit einer neuen kombinierten 3D-C-Bogen-Navigationseinheit ist es jetzt aber auch moeglich, perkutane Interventionen mit einem C-Bogen-System unter Schichtbild- und fluoroskopischer Kontrolle durchzufuehren. In Studien wird ueber erfolgreiche CT-Bild-gefuehrte Navigationen bei Vertebroplastien mit einem C-Bogen-System berichtet. Vorstellbar ist aber auch das Einbringen von Radiofrequenzsonden in Tumoren von Lunge und Leber, die bereits intraarteriell mit Lipiodol markiert wurden. Voraussichtlich koennen C-Bogen-CT-basierte Navigationssysteme in Zukunft komplexe Interventionen einfacher und sicherer machen und dabei gleichzeitig die Strahlenexposition reduzieren. (orig.)

  1. Interventional spinal procedures guided and controlled by a 3D rotational angiographic unit

    Pedicelli, Alessandro; Verdolotti, Tommaso; Desiderio, Flora; D'Argento, Francesco; Colosimo, Cesare; Bonomo, Lorenzo [Catholic University of Rome, A. Gemelli Hospital, Department of Bioimaging and Radiological Sciences, Rome (Italy); Pompucci, Angelo [Catholic University of Rome, A. Gemelli Hospital, Department of Neurotraumatology, Rome (Italy)

    2011-12-15

    The aim of this paper is to demonstrate the usefulness of 2D multiplanar reformatted (MPR) images obtained from rotational acquisitions with cone-beam computed tomography technology during percutaneous extra-vascular spinal procedures performed in the angiography suite. We used a 3D rotational angiographic unit with a flat panel detector. MPR images were obtained from a rotational acquisition of 8 s (240 images at 30 fps) with a tube rotation of 180°, after post-processing of 5 s on a local workstation. Multislice CT (MSCT) is the best guidance system for spinal approaches, permitting direct tomographic visualization of each spinal structure. Many operators, however, are trained with fluoroscopy; it is less expensive, allows real-time guidance, and in many centers the angiography suite is more readily available for percutaneous procedures. We present our 6-year experience in fluoroscopy-guided spinal procedures, which were performed under different conditions using MPR images. We illustrate cases of vertebroplasty, epidural injections, selective foraminal nerve root block, facet block, percutaneous treatment of disc herniation and spine biopsy, all performed with the help of MPR images for guidance and control in the event of difficult or anatomically complex access. The integrated use of "CT-like" MPR images allows the execution of spinal procedures under fluoroscopy guidance alone in all cases of dorso-lumbar access, with an evident limitation of risks and complications and without recourse to MSCT guidance, thus eliminating CT-room time (often occupied by a heavy diagnostic workload) and avoiding organizational problems for procedures that would otherwise require, for example, the combined use of a C-arm in the CT room. (orig.)

  2. GPU-accelerated denoising of 3D magnetic resonance images

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
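
    As a CPU-side illustration of the parameter-selection question studied in the paper (the GPU kernels themselves are not reproduced), the sketch below sweeps bilateral-filter parameters with scikit-image and scores each result against a reference using MSE and structural similarity; the test image and noise level are assumed stand-ins for MR data.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_bilateral
        from skimage.metrics import mean_squared_error, structural_similarity

        # Reference image plus a noisy copy (stand-ins for an MR image and its noisy scan).
        reference = img_as_float(data.camera())
        rng = np.random.default_rng(0)
        noisy = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

        best = None
        for sigma_spatial in (1, 2, 4):            # spatial scale (stencil-size analogue)
            for sigma_color in (0.05, 0.1, 0.2):   # range (intensity) scale
                den = denoise_bilateral(noisy, sigma_color=sigma_color,
                                        sigma_spatial=sigma_spatial)
                mse = mean_squared_error(reference, den)
                ssim = structural_similarity(reference, den, data_range=1.0)
                if best is None or ssim > best[0]:
                    best = (ssim, mse, sigma_spatial, sigma_color)

        print("best SSIM %.4f (MSE %.5f) at sigma_spatial=%s, sigma_color=%s" % best)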

  3. Spectral ladar: towards active 3D multispectral imaging

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at distances of up to hundreds of meters. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands, with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects, which is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  4. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    X. W. Li

    2013-08-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray-tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA, and the 3-D image is subsequently reconstructed on the output plane from the recovered 2-D EIA using the computational integral imaging reconstruction (CIIR) technique. Because the 2-D EIA is composed of a number of elemental images, each having its own perspective of the 3-D image, the 3-D image can be successfully reconstructed from partial data even if the encrypted image is seriously damaged. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the experimental results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.
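
    The LC-MLCA construction is not reproduced here; the sketch below only illustrates, with elementary rule 90 and synthetic data, the general idea of XORing an elemental image array with a cellular-automaton keystream. All names and parameters are illustrative assumptions rather than the paper's algorithm.

        import numpy as np

        def ca_keystream(seed_bits, n_steps, rule=90):
            """Evolve a 1-D binary cellular automaton; return one row of bits per step.
            Rule 90 is used purely for illustration (the paper uses LC-MLCA)."""
            rule_table = [(rule >> i) & 1 for i in range(8)]
            state = np.array(seed_bits, dtype=np.uint8)
            rows = []
            for _ in range(n_steps):
                rows.append(state.copy())
                left, right = np.roll(state, 1), np.roll(state, -1)
                idx = (left << 2) | (state << 1) | right   # 3-cell neighborhood code 0..7
                state = np.array([rule_table[i] for i in idx], dtype=np.uint8)
            return np.array(rows)

        def xor_encrypt(eia, seed_bits):
            """XOR an 8-bit elemental image array with a CA keystream (self-inverse)."""
            h, w = eia.shape                       # seed length must be >= image width
            bits = ca_keystream(seed_bits, h * 8)[:, :w]
            key = np.packbits(bits.reshape(h, 8, w), axis=1).reshape(h, w)
            return eia ^ key

        rng = np.random.default_rng(1)
        eia = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a recorded EIA
        seed = rng.integers(0, 2, 64, dtype=np.uint8)
        cipher = xor_encrypt(eia, seed)
        assert np.array_equal(xor_encrypt(cipher, seed), eia)  # decryption recovers the EIA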

  5. Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images

    Gill, Jeremy D.; Ladak, Hanif M.; Steinman, David A.; Fenster, Aaron

    1999-05-01

    In this paper, we report on a semi-automatic approach to segmentation of the carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which is first rapidly inflated to approximately locate the boundary of the artery and then further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position within the carotid vessel in the 3D US image. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved or the 3D US image has poorly defined vessel edges.
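
    The authors' inflating deformable model is not specified in enough detail here to reproduce; as a loosely analogous, seed-initialized stand-in, the sketch below grows a morphological geodesic active contour from a user-selected point with scikit-image (a recent release is assumed), operating on a single 2D slice of a US volume.

        from skimage.segmentation import (morphological_geodesic_active_contour,
                                          inverse_gaussian_gradient, disk_level_set)

        def segment_vessel_slice(us_slice, seed_rc, radius=5, iters=200):
            """Grow a contour outward from a user-picked seed toward vessel edges.
            us_slice: 2D float array; seed_rc: (row, col) inside the vessel."""
            # Edge map: values drop toward zero near strong gradients, stopping the contour.
            gimage = inverse_gaussian_gradient(us_slice, alpha=100, sigma=2)
            init = disk_level_set(us_slice.shape, center=seed_rc, radius=radius)
            return morphological_geodesic_active_contour(
                gimage, num_iter=iters, init_level_set=init,
                smoothing=2, balloon=1)            # balloon > 0 inflates the contour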

  6. High resolution 3D imaging of synchrotron generated microbeams

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  7. High resolution 3D imaging of synchrotron generated microbeams

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery

  8. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future
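
    As a small taste of the scripting the platform supports (assuming a recent 3D Slicer release; paths and names are placeholders), the snippet below, run in Slicer's built-in Python console, loads a volume, pulls its voxels into NumPy, and writes a crude threshold result back into a label volume.

        # Run inside 3D Slicer's built-in Python console.
        import numpy as np
        import slicer

        volumeNode = slicer.util.loadVolume("/path/to/ct_volume.nrrd")
        voxels = slicer.util.arrayFromVolume(volumeNode)        # NumPy view, (k, j, i) order

        labelNode = slicer.modules.volumes.logic().CreateAndAddLabelVolume(
            slicer.mrmlScene, volumeNode, "threshold-label")
        labelArray = slicer.util.arrayFromVolume(labelNode)
        labelArray[:] = (voxels > 300).astype(np.uint8)         # crude intensity threshold
        slicer.util.arrayFromVolumeModified(labelNode)          # push the change back to the scene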

  9. Preparing diagnostic 3D images for image registration with planning CT images

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information which is poorly defined or absent in planning CT images. Our two years of clinical experience has shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before this data is suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As a problem was discovered we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We have learned that the specific steps to correct these problems, in order of application, are: Also, we found that fast feedback and large image capacity (at least 2000 x 2000 12-bit pixels) are essential for practical application

  10. ROIC for gated 3D imaging LADAR receiver

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep space communications and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast, weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) operated in linear multiplication mode are the detector of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit for a hybrid e-APD focal plane array (FPA) with 100 μm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC operates at 77 K and includes the unit-cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit-cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), a bias circuit and a timing-control module. In particular, the preamplifier uses a capacitive-feedback transimpedance amplifier (CTIA) structure with two capacitors that offer switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two stage implemented with switched-capacitor circuitry, which is well suited to the signal processing of a readout integrated circuit (ROIC). The output driver is a simple unity-gain buffer; because the signal is already amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode the integration time is 80 ns; for integration currents from 200 nA to 4 μA the circuit shows a nonlinearity of less than 1%. In passive imaging mode the integration time is 150 ns; for integration currents from 1 nA to 20 nA the nonlinearity is likewise less than 1%.

  11. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that the nanocone substrate creates more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold compared to that on a glass substrate. We believe that strong scattering within the nanostructured area, coupled with random scattering inside the cell, resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  12. Fast 3D T1-weighted brain imaging at 3 Tesla with modified 3D FLASH sequence

    Longitudinal relaxation times (T1) of white and gray matter become close at high magnetic field. Therefore, classical T1-sensitive methods such as spoiled FLASH fail to give sufficient contrast in human brain imaging at 3 Tesla. Excellent T1 contrast can be achieved at high field by gradient-echo imaging with a preparatory inversion pulse. The inversion recovery (IR) preparation can be combined with fast 2D gradient-echo scans. In this paper we present an application of this technique to rapid 3-dimensional imaging. The new technique, called 3D SIR FLASH, was implemented on a Bruker MSLX system equipped with a 3 T, 90 cm horizontal-bore magnet operating at the Centre Hospitalier in Rouffach, France. The new technique was used to compare MRI images of healthy volunteers with those obtained with traditional 3D imaging. White and gray matter are clearly distinguishable when 3D SIR FLASH is used. The total acquisition time for a 128x128x128 image was 5 minutes. Three-dimensional visualization with facet representation of surfaces and oblique sections was done off-line on an INDIGO Extreme workstation. The new technique is widely used at FORENAP, Centre Hospitalier in Rouffach, Alsace. (author)
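
    The reason an inversion preparation restores the contrast that spoiled FLASH loses at 3 T can be read off the standard inversion-recovery signal model (a textbook relation, not a formula taken from this paper): tissues whose T1 values have drawn close still map to clearly different longitudinal magnetizations when the inversion time TI is chosen near their null points.

        % Longitudinal magnetization after an inversion pulse (textbook IR model)
        M_z(\mathrm{TI}) = M_0\left(1 - 2\,e^{-\mathrm{TI}/T_1} + e^{-\mathrm{TR}/T_1}\right)
                         \approx M_0\left(1 - 2\,e^{-\mathrm{TI}/T_1}\right)
                         \quad (\mathrm{TR} \gg T_1),
        \qquad \mathrm{TI}_{\text{null}} = T_1 \ln 2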

  13. Multimodal Registration and Fusion for 3D Thermal Imaging

    Moulay A. Akhloufi; Benjamin Verney

    2015-01-01

    3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspection and metrology analysis of manufactured parts. However, we are not able to detect subsurface defects. This kind ...

  14. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Yongjun Zhang

    2015-07-01

    The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street-scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve structure-from-motion results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since features on them should be masked out to avoid poor matches. After removing these feature points, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparative experiments with typical structure-from-motion (SfM) pipelines, such as Photosynth and VisualSFM, demonstrate that the Mask decreases the root-mean-square error (RMSE) of the pairwise matching results, leading to more accurate recovery of the relative camera poses. Removing features within the Mask also increases the accuracy of the point clouds by nearly 30%-40% and corrects the tendency of the typical methods to reconstruct the same building repeatedly when there is only one target building.
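
    The paper's automatic mask detector is not reproduced; the sketch below only shows the mechanical step that follows it, excluding masked regions (e.g. detected vehicles and guardrails) from feature extraction and matching with OpenCV. File names are placeholders and the all-pass masks stand in for the detector's output.

        import cv2
        import numpy as np

        def masked_matches(img1, img2, mask1, mask2, max_matches=500):
            """Detect and match ORB features while ignoring masked-out regions.
            Masks are uint8 arrays where 255 = usable pixel, 0 = vehicle/guardrail."""
            orb = cv2.ORB_create(nfeatures=4000)
            kp1, des1 = orb.detectAndCompute(img1, mask1)
            kp2, des2 = orb.detectAndCompute(img2, mask2)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            return kp1, kp2, matches[:max_matches]

        # Usage sketch: real masks would come from the paper's "Mask" detection step.
        img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
        mask1 = np.full(img1.shape, 255, np.uint8)   # placeholder: keep every pixel
        mask2 = np.full(img2.shape, 255, np.uint8)
        kp1, kp2, matches = masked_matches(img1, img2, mask1, mask2)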

  15. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  16. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  17. Computational ghost imaging versus imaging laser radar for 3D imaging

    Hardy, Nicholas D

    2012-01-01

    Ghost imaging has been receiving increasing interest for possible use as a remote-sensing system. There has been little comparison, however, between ghost imaging and the imaging laser radars with which it would be competing. Toward that end, this paper presents a performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar. Both are considered for range-resolving (3D) imaging of a collection of rough-surfaced objects at standoff ranges in the presence of atmospheric turbulence. Their spatial resolutions and signal-to-noise ratios are evaluated as functions of the system parameters, and these results are used to assess each system's performance trade-offs. Scenarios in which a reflective ghost-imaging system has advantages over a laser radar are identified.

  18. 3D imaging of nanomaterials by discrete tomography

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi2 nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  19. Orthodontic treatment plan changed by 3D images

    Clinical application of CBCT is most often indicated for impacted teeth, hyperodontia, transposition, ankylosis, root resorption and other pathologies of the maxillofacial area. Our goal is to show how the information from 3D images changes the protocol of orthodontic treatment. We present six of our clinical cases and the changes made to the treatment plan after analyzing the information on the three planes of CBCT. These cases are rare in orthodontic practice and require an individual approach during their analysis and decision-making. Our discussion concerns the localization of impacted teeth, where we need to evaluate their vertical depth and mesiodistal relationship to the adjacent bone structures. In patients with hyperodontia, the assessment is of utmost importance for deciding which of the teeth to extract and which to align into the dental arch. We conclude that this diagnostic information is essential for treatment-planning decisions: accurate imaging leads to a better treatment plan and more predictable results. (authors) Key words: CBCT. IMPACTED CANINES. HYPERODONTIA. TRANSPOSITION

  20. Investigating the guiding of streamers in nitrogen/oxygen mixtures with 3D simulations

    Teunissen, Jannis; Nijdam, Sander; Takahashi, Eiichi; Ebert, Ute

    2014-10-01

    Recent experiments by S. Nijdam and E. Takahashi have demonstrated that streamers can be guided by weak pre-ionization in nitrogen/oxygen mixtures, as long as there is not too much oxygen (less than 1%). The pre-ionization was created by a laser beam and was orders of magnitude lower than the density in a streamer channel. Here, we study the guiding of streamers with 3D numerical simulations. First, we present simulations that can be compared with the experiments and confirm that the laser pre-ionization does not introduce space charge effects by itself. Then we investigate topics such as: the conditions under which guiding can occur, how photoionization reduces the guiding at higher oxygen concentrations, and whether guided streamers keep their propagation direction outside the pre-ionized region. JT was supported by STW Project 10755, SN by the FY2012 Researcher Exchange Program between JSPS and NWO, and ET by JSPS KAKENHI Grant Number 24560249.

  1. Dynamic 3D cell rearrangements guided by a fibronectin matrix underlie somitogenesis.

    Gabriel G Martins

    Somites are transient segments formed in a rostro-caudal progression during vertebrate development. In chick embryos, segmentation of a new pair of somites occurs every 90 minutes and involves a mesenchyme-to-epithelium transition of cells from the presomitic mesoderm. Little is known about the cellular rearrangements involved, and, although it is known that the fibronectin extracellular matrix is required, its actual role remains elusive. Using 3D and 4D imaging of somite formation we discovered that somitogenesis consists of a complex choreography of individual cell movements. Epithelialization starts medially with the formation of a transient epithelium of cuboidal cells, followed by cell elongation and reorganization into a pseudostratified epithelium of spindle-shaped epitheloid cells. Mesenchymal cells are then recruited to this medial epithelium through accretion, a phenomenon that spreads to all sides, except the lateral side of the forming somite, which epithelializes by cell elongation and intercalation. Surprisingly, an important contribution to the somite epithelium also comes from the continuous egression of mesenchymal cells from the core into the epithelium via its apical side. Inhibition of fibronectin matrix assembly first slows down the rate, and then halts somite formation, without affecting pseudopodial activity or cell body movements. Rather, cell elongation, centripetal alignment, N-cadherin polarization and egression are impaired, showing that the fibronectin matrix plays a role in polarizing and guiding the exploratory behavior of somitic cells. To our knowledge, this is the first 4D in vivo recording of a full mesenchyme-to-epithelium transition. This approach brought new insights into this event and highlighted the importance of the extracellular matrix as a guiding cue during morphogenesis.

  2. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  3. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Vol. 32, No. 6 (2008), pp. 513-520. ISSN 0895-6111 R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * Gaussian mixture model * 3D image analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.192, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  4. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53+/-0.30 mm.

  5. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  6. 3-D Reconstruction of Medical Image Using Wavelet Transform and Snake Model

    Jinyong Cheng

    2009-12-01

    Medical image segmentation is an important step in 3-D reconstruction, and 3-D reconstruction from medical images is an important application of computer graphics and biomedical image processing. An improved image segmentation method suitable for 3-D reconstruction is presented in this paper, together with a 3-D reconstruction algorithm used to reconstruct the 3-D model from medical images. A rough edge is first obtained by multi-scale wavelet transform. Starting from this rough edge, an improved gradient vector flow snake model is used to find the object contour in the image. In the experiments, we reconstruct 3-D models of the kidney, the liver and the brain putamen. The experimental results indicate that the new algorithm can produce accurate 3-D reconstructions.

  7. GammaModeler 3-D gamma-ray imaging technology

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide a 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders

  8. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  9. Holographic Image Plane Projection Integral 3D Display

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  10. Seeing is saving: the benefit of 3D imaging in gynecologic brachytherapy.

    Viswanathan, Akila N; Erickson, Beth A

    2015-07-01

    Despite a concerning decline in the use of brachytherapy over the past decade, no other therapy is able to deliver a very high dose of radiation into or near a tumor, with a rapid fall-off of dose to adjacent structures. Compared to traditional X-ray-based brachytherapy that relies on points, the use of CT and MR for 3D planning of gynecologic brachytherapy provides a much more accurate volume-based calculation of dose to an image-defined tumor and to the bladder, rectum, sigmoid, and other pelvic organs at risk (OAR) for radiation complications. The publication of standardized guidelines and an online contouring teaching atlas for performing 3D image-based brachytherapy has created a universal platform for communication and training. This has resulted in a uniform approach to using image-guided brachytherapy for treatment and an internationally accepted format for reporting clinical outcomes. Significant improvements in survival and reductions in toxicity have been reported with the addition of image guidance to increase dose to tumor and decrease dose to the critical OAR. Future improvements in individualizing patient treatments should include a more precise definition of the target. This will allow dose modulation based on the amount of residual disease visualized on images obtained at the time of brachytherapy. PMID:25748646